Science.gov

Sample records for global sensitivity analysis

  1. Sensitivity analysis, optimization, and global critical points

    SciTech Connect

    Cacuci, D.G.

    1989-11-01

    The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has undoubtedly been felt by many nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments in both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.

  2. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified from the ranking of the total effect sensitivity indices. The results of the present
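
    The first-order and total effect indices mentioned in this abstract can be estimated by plain Monte Carlo. Below is a minimal numpy sketch using a standard Saltelli/Jansen-style estimator on a toy linear model; the model and the estimator form are illustrative assumptions, not the specific "brute force" or "best practice" procedures the authors compared.

```python
import numpy as np

def sobol_indices(f, d, n, rng):
    """Monte Carlo estimate of first-order (S) and total-effect (ST)
    sensitivity indices for f on the d-dimensional unit hypercube."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # A with column i taken from B
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var        # first-order (Saltelli)
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # total effect (Jansen)
    return S, ST

# Toy model Y = 2*x1 + x2 on [0,1]^2: analytically S1 = 0.8, S2 = 0.2.
rng = np.random.default_rng(0)
model = lambda X: 2.0 * X[:, 0] + X[:, 1]
S, ST = sobol_indices(model, d=2, n=100_000, rng=rng)
print(S.round(2), ST.round(2))
```

    For a purely additive model like this one the total effect indices coincide with the first-order ones; a gap between the two would indicate parameter interactions.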

  3. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. Little guidance is available for these two steps in environmental modelling, however. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
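
    The bootstrap idea behind the convergence criteria can be sketched in a few lines. The example below uses a simple correlation-based sensitivity measure on a made-up linear model (an assumption for illustration; the paper works with Morris, RSA and variance-based indices) and reports two convergence diagnostics: the bootstrap confidence-interval width of each index, and the fraction of resamples that reproduce the full-sample ranking.

```python
import numpy as np

def bootstrap_ranking(X, y, n_boot=500, seed=0):
    """Bootstrap a correlation-based sensitivity measure; return the 95%
    CI width per parameter and the fraction of resamples whose ranking
    matches the full-sample ranking."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    def indices(Xs, ys):
        return np.array([abs(np.corrcoef(Xs[:, j], ys)[0, 1]) for j in range(d)])
    base_rank = np.argsort(-indices(X, y))
    boots = np.empty((n_boot, d))
    agree = 0
    for b in range(n_boot):
        idx = rng.integers(0, n, n)          # resample rows with replacement
        boots[b] = indices(X[idx], y[idx])
        if np.array_equal(np.argsort(-boots[b]), base_rank):
            agree += 1
    ci = np.percentile(boots, 97.5, axis=0) - np.percentile(boots, 2.5, axis=0)
    return ci, agree / n_boot

rng = np.random.default_rng(1)
X = rng.random((2000, 3))
y = 5 * X[:, 0] + 2 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.1, 2000)
ci, frac = bootstrap_ranking(X, y)
print(ci.round(3), frac)  # narrow CIs and a stable ranking suggest convergence
```

    The point made in the abstract shows up directly here: the ranking can stabilize (frac near 1) at sample sizes where the index values themselves still carry visible bootstrap uncertainty.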

  6. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  7. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  8. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  9. Optimizing human activity patterns using global sensitivity analysis.

    PubMed

    Fairchild, Geoffrey; Hickmann, Kyle S; Mniszewski, Susan M; Del Valle, Sara Y; Hyman, James M

    2014-12-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
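
    The harmony search step used in this series of papers can be sketched generically. The minimizer below is a plain HS loop on a toy shifted-sphere objective; the memory size, pitch-adjustment settings and the objective are illustrative assumptions standing in for the authors' SampEn tuning problem.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=0):
    """Minimal harmony search: keep a memory of candidate solutions and
    improvise new ones by recombining, pitch-adjusting, or randomizing."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    d = len(lo)
    mem = lo + rng.random((hms, d)) * (hi - lo)       # harmony memory
    cost = np.array([f(x) for x in mem])
    for _ in range(iters):
        new = np.empty(d)
        for j in range(d):
            if rng.random() < hmcr:                   # draw from memory
                new[j] = mem[rng.integers(hms), j]
                if rng.random() < par:                # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                     # random restart
                new[j] = lo[j] + rng.random() * (hi[j] - lo[j])
        new = np.clip(new, lo, hi)
        c = f(new)
        worst = np.argmax(cost)
        if c < cost[worst]:                           # replace worst harmony
            mem[worst], cost[worst] = new, c
    best = np.argmin(cost)
    return mem[best], cost[best]

# Toy objective standing in for the SampEn mismatch: minimum at x = 0.3.
x, c = harmony_search(lambda x: np.sum((x - 0.3) ** 2), [(-1, 1)] * 4)
print(x.round(2), round(c, 4))
```

    In the papers' setting, the global sensitivity indices would first prune the search to the few parameters driving SampEn variance, which is what keeps a memory-based heuristic like this effective in high dimensions.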

  10. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses, in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of system.

  11. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments and computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variability of simulation output responses can be accounted for by input variability. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step are given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
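
    The parameter-screening step (2) is commonly done with the Morris elementary-effects method; here is a minimal sketch under the assumption of a unit-hypercube input space and a made-up three-parameter toy model (the report's own screening technique and its PSUADE implementation may differ).

```python
import numpy as np

def morris_screening(f, d, r=50, delta=0.1, seed=0):
    """Elementary-effects (Morris) screening on the unit hypercube:
    mu* ranks overall influence, sigma flags nonlinearity/interactions."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, d))
    for t in range(r):
        x = rng.random(d) * (1 - delta)     # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta  # one elementary effect
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

# Toy model: x0 strong and linear, x1 interacting, x2 nearly inert.
g = lambda x: 4 * x[0] + x[0] * x[1] + 0.01 * x[2]
mu_star, sigma = morris_screening(g, d=3)
print(mu_star.round(2))   # a small mu* (here x2) marks a screening candidate
```

    Parameters with small mu* would then be fixed at nominal values, and the expensive quantitative analysis of step (3) run only on the survivors.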

  12. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the actual crop yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
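
    The sensitivity index defined in this abstract (relative change of VWC over relative change of the parameter) is a one-at-a-time elasticity and is easy to write down. The toy VWC model below, evapotranspiration divided by yield, is a deliberate simplification of the paper's daily soil water balance; the parameter names and values are hypothetical.

```python
def oat_sensitivity(model, params, name, rel_change=0.01):
    """One-at-a-time sensitivity index as defined in the abstract:
    (relative change of output) / (relative change of parameter)."""
    base = model(**params)
    perturbed = dict(params, **{name: params[name] * (1 + rel_change)})
    return ((model(**perturbed) - base) / base) / rel_change

# Hypothetical VWC-like model: total evapotranspiration over yield.
vwc = lambda et, yield_: et / yield_          # yield_ avoids the keyword
params = {"et": 500.0, "yield_": 4.0}
print(round(oat_sensitivity(vwc, params, "et"), 2),      # direct (positive)
      round(oat_sensitivity(vwc, params, "yield_"), 2))  # inverse (negative)
```

    The signs reproduce the abstract's distinction between direct and inverse sensitivities: a parameter in the numerator yields an index near +1, one in the denominator an index near -1.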

  13. Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.

    PubMed

    Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M

    2016-01-01

    Dynamic global sensitivity analysis (GSA) was performed for three dynamic bioreactor models of increasing complexity: a fermenter for bioethanol production; a bioreactor network comprising two types of bioreactors (aerobic for biomass production and anaerobic for bioethanol production); and a co-fermentation bioreactor. The aim was to identify the parameters that contribute most to uncertainty in model outputs. Sobol's method was used to calculate time profiles of the sensitivity indices. Numerical results have shown the time-variant influence of uncertain parameters on model variables, and the most influential model parameters have been determined. For the bioethanol fermenter model, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to the uncertainty of model variables; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); whereas λ (glucose-to-total-sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.

  14. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns to areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions: Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more nonlinearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water- and energy-rich environments.

  15. Simulation of the global contrail radiative forcing: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.

    2012-12-01

    The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The up-to-date Community Atmosphere Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.

  16. Understanding earth system models: how Global Sensitivity Analysis can help

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Wagener, Thorsten

    2017-04-01

    Computer models are an essential element of earth system sciences, underpinning our understanding of systems functioning and influencing the planning and management of socio-economic-environmental systems. Even when these models represent a relatively low number of physical processes and variables, earth system models can exhibit complicated behaviour because of the high level of interactions between their simulated variables. As the level of these interactions increases, we quickly lose the ability to anticipate and interpret the model's behaviour and hence the opportunity to check whether the model gives the right response for the right reasons. Moreover, even if internally consistent, an earth system model will always produce uncertain predictions because it is often forced by uncertain inputs (due to measurement errors, pre-processing uncertainties, scarcity of measurements, etc.). Lack of transparency about the scope of validity, limitations and the main sources of uncertainty of earth system models can be a strong limitation to their effective use for both scientific and decision-making purposes. Global Sensitivity Analysis (GSA) is a set of statistical analysis techniques to investigate the complex behaviour of earth system models in a structured, transparent and comprehensive way. In this presentation, we will use a range of examples across earth system sciences (with a focus on hydrology) to demonstrate how GSA is a fundamental element in advancing the construction and use of earth system models, including: verifying the consistency of the model's behaviour with our conceptual understanding of the system functioning; identifying the main sources of output uncertainty so as to focus efforts on uncertainty reduction; and finding tipping points in forcing inputs that, if crossed, would bring the system to specific conditions we want to avoid.

  17. Global in Time Analysis and Sensitivity Analysis for the Reduced NS-α Model of Incompressible Flow

    NASA Astrophysics Data System (ADS)

    Rebholz, Leo; Zerfas, Camille; Zhao, Kun

    2017-09-01

    We provide a detailed global-in-time analysis, together with a sensitivity analysis and numerical testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global-in-time case by proving it is globally well-posed, and we also prove some new results for its long-time treatment of energy. We also derive the PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.

  18. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Treesearch

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  19. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques demand a high computational cost to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
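
    The building block of VARS is the directional variogram of the response surface, gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]. The sketch below estimates that quantity by Monte Carlo for a made-up two-parameter surface; it is only the raw variogram, not the full VARS framework, which integrates such variograms across scales and uses dedicated sampling strategies.

```python
import numpy as np

def variogram(f, d, i, h, n=20000, seed=0):
    """Directional variogram of the response surface along parameter i:
    gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] on the unit hypercube."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d)) * (1 - h)      # keep the shifted point inside [0, 1]
    Xh = X.copy()
    Xh[:, i] += h
    return 0.5 * np.mean((f(Xh) - f(X)) ** 2)

# Toy surface: x0 acts linearly, x1 through a high-frequency oscillation,
# so the importance ordering flips with the scale h of interest.
g = lambda X: 3 * X[:, 0] + np.sin(6 * np.pi * X[:, 1])
for h in (0.05, 0.3):
    print(h, round(variogram(g, 2, 0, h), 3), round(variogram(g, 2, 1, h), 3))
```

    This scale dependence is exactly why a multi-scale view can give assessments that differ from, say, a single variance-based index, and why VARS reports sensitivity across a range of h rather than at one scale.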

  20. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends are found to agree with physical expectations, and design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
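
    Since the abstract names FAST explicitly, a compact sketch of the classical first-order FAST estimator may help: each input is driven along a periodic search curve with its own frequency, and the first-order index is read off the Fourier spectrum of the output. The two-input linear test function and the frequency choices are illustrative assumptions; a real TL model would need an interference-free frequency set sized to the problem.

```python
import numpy as np

def fast_first_order(f, freqs, n=10001, harmonics=4):
    """Classical FAST: sample inputs along periodic search curves with
    distinct frequencies and estimate first-order indices from the
    Fourier amplitudes at each frequency and its harmonics."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    X = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi  # search curves
    y = f(X.T)
    A = lambda w: (2 / n) * np.sum(y * np.cos(w * s))        # cosine coeff.
    B = lambda w: (2 / n) * np.sum(y * np.sin(w * s))        # sine coeff.
    var = y.var()
    S = []
    for w in freqs:
        part = sum((A(p * w) ** 2 + B(p * w) ** 2) / 2
                   for p in range(1, harmonics + 1))         # partial variance
        S.append(part / var)
    return np.array(S)

# Linear test model on [0,1]^2: analytic first-order indices are 0.8 and 0.2.
S = fast_first_order(lambda X: 2 * X[:, 0] + X[:, 1], freqs=[11, 35])
print(S.round(2))
```

    The appeal of FAST in a design-stage study like this one is that a single sweep along the search curve yields all first-order indices at once, instead of one Monte Carlo campaign per parameter.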

  1. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants.

  2. Comparison of Applying Four Reduced Order Models to a Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Oladyshkin, S.; Liu, Y.; Pau, G. S. H.

    2014-12-01

    This study focuses on the comparison of four reduced order models (ROMs) applied to global sensitivity analysis (GSA). ROMs are one way to improve computational efficiency in many-query applications such as optimization, uncertainty quantification, sensitivity analysis and inverse modeling, where the computational demand can become large. The four ROM methods are: arbitrary Polynomial Chaos (aPC), Gaussian process regression (GPR), cut high dimensional model representation (HDMR), and random sample HDMR. The discussion is mainly based on a global sensitivity analysis performed for a hypothetical large-scale CO2 storage project. Pros and cons of each method will be discussed and suggestions on how each method should be applied, individually or in combination, will be made.
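The ROM idea can be illustrated with a hand-rolled Gaussian-process emulator (GP mean prediction with an RBF kernel) trained on a few runs of a cheap 1-D stand-in function; the toy model and kernel length-scale are assumptions, not the CO2-storage model or the specific ROM implementations from the study.

```python
import numpy as np

def expensive_model(x):
    """Cheap stand-in for a simulator that would normally take hours."""
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# A handful of "expensive" training runs
X = np.linspace(0.0, 1.0, 8)
y = expensive_model(X)

K = rbf(X, X) + 1e-8 * np.eye(len(X))   # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def emulator(xs):
    """GP mean prediction, used in place of the simulator."""
    return rbf(xs, X) @ alpha

# Many-query use: Monte Carlo through the emulator, not the model
xs = np.random.default_rng(1).uniform(0, 1, 10_000)
approx_mean = emulator(xs).mean()

grid = np.linspace(0.0, 1.0, 50)
max_err = np.abs(emulator(grid) - expensive_model(grid)).max()
```

Eight simulator runs support ten thousand emulator queries, which is the trade that makes variance-based GSA affordable.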

  3. Global sensitivity analysis in wastewater treatment plant model applications: prioritizing sources of uncertainty.

    PubMed

    Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R(2) > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors include ash content and influent inert particulate COD, which are largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of significant parameters to the output variance, e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application is highlighted. Overall, global sensitivity analysis proved a powerful tool for explaining and quantifying uncertainties and for providing insight into useful ways of reducing uncertainty in plant performance. This information can help engineers design robust WWTPs.
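The standardized regression coefficient (SRC) calculation can be sketched on a synthetic near-linear model (the model and its coefficients are illustrative, not the WWTP simulator); for such models the squared SRCs sum to approximately R² and decompose the output variance.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000

# Three uncertain inputs and a near-linear response (noise sd = 0.1)
X = rng.normal(0.0, 1.0, size=(n, 3))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.1, n)

# Ordinary least squares, then standardize: SRC_i = b_i * sd(x_i) / sd(y)
A = np.column_stack([np.ones(n), X])
b = np.linalg.lstsq(A, y, rcond=None)[0]
src = b[1:] * X.std(axis=0) / y.std()

# R^2 of the regression; sum(SRC_i^2) ~= R^2 for a near-linear model
r2 = 1 - ((y - A @ b) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

As in the study, a high R² justifies using the fitted linear model as a surrogate and |SRC| as the importance ranking.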

  4. A sensitivity analysis of key natural factors in the modeled global acetone budget

    NASA Astrophysics Data System (ADS)

    Brewer, J. F.; Bishop, M.; Kelp, M.; Keller, C. A.; Ravishankara, A. R.; Fischer, E. V.

    2017-02-01

    Acetone is one of the most abundant carbonyl compounds in the atmosphere, and it serves as an important source of HOx (OH + HO2) radicals in the upper troposphere and a precursor for peroxyacetyl nitrate. We present a global sensitivity analysis targeted at several major natural source and sink terms in the global acetone budget to find the input factor or factors to which the simulated acetone mixing ratio is most sensitive. The ranges of the input factors were taken from the literature. We calculated the influence of these factors in terms of their elementary effects on model output. Of the six factors tested here, the four factors with the highest contribution to total global annual model sensitivity are direct emissions of acetone from the terrestrial biosphere, acetone loss to photolysis, the concentration of acetone in the ocean mixed layer, and the dry deposition of acetone to ice-free land. The direct emissions of acetone from the terrestrial biosphere are globally important in determining acetone mixing ratios, but their importance varies seasonally outside the tropics. Photolysis is most influential in the upper troposphere. Additionally, the influence of the oceanic mixed layer concentrations is relatively invariant between seasons, compared to the other factors tested. Monoterpene oxidation in the troposphere, despite the significant uncertainties in the acetone yield of this process, is responsible for only a small amount of model uncertainty in the budget analysis.
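The elementary-effects (Morris) screening mentioned above can be sketched on a toy three-factor function; the function, step size and trajectory count are assumptions for illustration, not the acetone budget model.

```python
import numpy as np

def model(x):
    """Stand-in for the chemistry model: x0 strong, x1 moderate, x2 weak."""
    return 4.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

rng = np.random.default_rng(3)
k, r, delta = 3, 50, 0.25      # factors, trajectories, step size

effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, k)   # random trajectory start
    base = model(x)
    for i in rng.permutation(k):       # perturb one factor at a time
        x[i] += delta
        new = model(x)
        effects[t, i] = (new - base) / delta
        base = new

mu_star = np.abs(effects).mean(axis=0)  # Morris mu*: mean absolute EE
# mu_star ranks factor influence: here x0 dominates and x2 is negligible
```

Each trajectory costs only k + 1 model runs, which is why elementary effects are a common first screen before more expensive variance-based methods.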

  5. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling

    PubMed Central

    Sumner, T.; Shephard, E.; Bogle, I. D. L.

    2012-01-01

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes and a number of key features of the system are identified. PMID:22491976
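The PCA step for time-dependent outputs can be sketched as follows, assuming a toy two-parameter exponential model in place of the insulin-signalling pathway: each run's time course is compressed to a few principal-component scores, to which any scalar GSA method can then be applied.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500
t = np.linspace(0.0, 10.0, 100)

# Two uncertain parameters per run: amplitude and decay rate
amp = rng.uniform(0.5, 1.5, n)
rate = rng.uniform(0.1, 0.5, n)
Y = amp[:, None] * np.exp(-rate[:, None] * t[None, :])  # runs x time points

# Functional PCA via SVD of the centred output matrix
Yc = Y - Y.mean(axis=0)
U, S, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U * S                       # PC scores, one row per model run
explained = S ** 2 / (S ** 2).sum()  # variance fraction per component

# A few components capture almost all temporal variability; a scalar GSA
# (e.g. Sobol' indices) would then be run on scores[:, 0], scores[:, 1], ...
```

Computing sensitivity indices for a handful of scores instead of every time point is what makes the approach computationally efficient.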

  6. An Extended Global Sensitivity Analysis Implemented on a 1D Land Biosphere Model

    NASA Astrophysics Data System (ADS)

    Ioannou-Katidis, Pavlos; Petropoulos, George; Griffiths, Hywel; Bevan, Rhodri

    2014-05-01

    The implementation of sophisticated mathematical models is undoubtedly becoming increasingly widespread in a variety of fields in geosciences. SimSphere belongs to a special category of land biosphere models called Soil Vegetation Atmosphere Transfer (SVAT) models. These provide representations, in a vertical profile, of the physical mechanisms controlling the interactions occurring in the soil/vegetation/atmosphere continuum, at a temporal resolution that is in good agreement with the dynamic timescale of the atmospheric and surface processes. This study builds on previous works conducted by the authors and aims at extending our understanding of this model's structure and further establishing its coherence. Herein we present the results of a thorough sensitivity analysis (SA) performed on SimSphere using a cutting-edge and robust Global Sensitivity Analysis (GSA) approach based on the use of the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. In particular, the sensitivity of selected key variables characterising land surface interactions simulated by SimSphere was evaluated at different times of model output. All model inputs were assumed to be normally distributed, with their probability distribution functions (PDFs) defined using the mean and variance taken from the entire theoretical range that these inputs can take in SimSphere. The sensitivity of the following SimSphere outputs was evaluated: Daily Average Net Radiation, Daily Average Latent Heat Flux, Daily Average Sensible Heat Flux, Daily Average Air Temperature, Daily Average Radiometric Temperature, Daily Average Surface Moisture Availability, Daily Average Evaporative Fraction and Daily Average Non-Evaporative Fraction. Our results showed largely comparable trends in terms of identifying the most sensitive model inputs with respect to the model outputs examined.
In addition, a high percentage of first-order interactions between the model inputs was reported, suggesting strong

  7. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    SciTech Connect

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
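Sampling a parameter space with a Sobol' sequence, as done in this study for the 39 CICE parameters, can be sketched with SciPy's quasi-Monte Carlo module; the dimension and bounds below are illustrative, not the CICE parameter ranges.

```python
import numpy as np
from scipy.stats import qmc

d = 8                                       # illustrative dimension
sampler = qmc.Sobol(d=d, scramble=True, seed=5)
unit = sampler.random_base2(m=10)           # 2**10 = 1024 points in [0,1)^d

lo = np.zeros(d)                            # assumed parameter bounds
hi = 2.0 * np.ones(d)
params = qmc.scale(unit, lo, hi)            # map to physical ranges

# Low-discrepancy points cover the space more evenly than pseudo-random;
# each row of `params` is one model (or emulator) evaluation point
disc = qmc.discrepancy(unit)
```

Powers of two (`random_base2`) preserve the balance properties of the Sobol' sequence, which is why sample sizes like 1024 are preferred over round numbers.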

  8. A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology

    PubMed Central

    Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.

    2008-01-01

    Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system as, by default, they hold all other parameters fixed at baseline values. Using the techniques described within, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, in both deterministic and stochastic settings, and propose novel techniques to handle problems encountered during such analyses. PMID:18572196
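One widely used pairing in this line of work is Latin hypercube sampling with partial rank correlation coefficients (PRCC); the sketch below assumes a toy monotonic model and computes PRCCs via the inverse-correlation-matrix identity, so it is an illustration rather than the paper's exact procedure.

```python
import numpy as np
from scipy.stats import qmc, rankdata

n, k = 1000, 3
X = qmc.LatinHypercube(d=k, seed=6).random(n)   # LHS design in [0,1)^3
noise = np.random.default_rng(6).normal(0, 0.5, n)
y = np.exp(2 * X[:, 0]) + X[:, 1] - 0.05 * X[:, 2] + noise

# Rank-transform, then read partial correlations off the inverse of the
# correlation matrix: prcc_ij = -P_ij / sqrt(P_ii * P_jj)
data = np.column_stack([X, y])
ranks = np.column_stack([rankdata(col) for col in data.T])
P = np.linalg.inv(np.corrcoef(ranks, rowvar=False))
prcc = np.array([-P[i, k] / np.sqrt(P[i, i] * P[k, k]) for i in range(k)])
# |prcc| ranks monotone influence: strong for x0, moderate x1, ~0 for x2
```

Because PRCC works on ranks, it stays informative for nonlinear but monotone input-output relationships like the exponential term here.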

  9. Global Sensitivity Analysis with Small Sample Sizes: Ordinary Least Squares Approach

    SciTech Connect

    Davis, Michael J.; Liu, Wei; Sivaramakrishnan, Raghu

    2016-12-21

    A new version of global sensitivity analysis is developed in this paper. This new version, coupled with tools from statistics, machine learning, and optimization, can devise small sample sizes that allow for the accurate ordering of sensitivity coefficients for the first 10-30 most sensitive chemical reactions in complex chemical-kinetic mechanisms, and is particularly useful for studying the chemistry in realistic devices. A key part of the paper is the calibration of these small samples. Because these small sample sizes are developed for use in realistic combustion devices, the calibration is done over the ranges of conditions in such devices, with a test case being the operating conditions of a compression ignition engine studied earlier. Compression ignition engines operate under low-temperature combustion conditions with quite complicated chemistry, making this calibration difficult and leading to the possibility of false positives and false negatives in the ordering of the reactions. An important aspect of the paper is therefore showing how to handle the trade-off between false positives and false negatives using ideas from the multiobjective optimization literature. The combination of the new global sensitivity method and the calibration yields sample sizes approximately a factor of 10 smaller than were available with our previous algorithm.
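The idea of ordering reactions from a small sample by ordinary least squares can be sketched as follows; the surrogate "mechanism" is a synthetic linear response to log rate-constant perturbations, not the engine chemistry from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_react, n_samp = 10, 40    # a small sample, ~4x the number of factors

# Hypothetical "true" sensitivities of the output to each reaction
true_sens = np.array([5.0, 3.0, 2.0, 1.0, 0.5, 0.3, 0.2, 0.1, 0.05, 0.01])

# Joint log-uniform multiplicative perturbations of the rate constants
dlogk = rng.uniform(-0.3, 0.3, (n_samp, n_react))
y = dlogk @ true_sens + rng.normal(0, 0.05, n_samp)  # e.g. log ignition delay

A = np.column_stack([np.ones(n_samp), dlogk])
coef = np.linalg.lstsq(A, y, rcond=None)[0][1:]
order = np.argsort(-np.abs(coef))    # most sensitive reactions first
```

With 40 samples for 10 factors the fitted coefficients are noisy, which is exactly why the paper's calibration step, and its treatment of false positives and negatives in the ordering, matters for real mechanisms with hundreds of reactions.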

  10. Development and sensitivity analysis of a global drinking water quality index.

    PubMed

    Rickwood, C J; Carr, G M

    2009-09-01

    The UNEP GEMS/Water Programme is the leading international agency responsible for the development of water quality indicators and maintains the only global database of water quality for inland waters (GEMStat). The protection of source water quality for domestic use (drinking water, abstraction, etc.) was identified by GEMS/Water as a priority for assessment. A composite index was developed to assess source water quality across a range of inland water types, globally, and over time. The approach to development was three-fold: (1) select guidelines from the World Health Organisation that are appropriate for assessing global water quality for human health; (2) select variables from GEMStat that have an appropriate guideline and reasonable global coverage; and (3) determine, on an annual basis, an overall index rating for each station using the water quality index equation endorsed by the Canadian Council of Ministers of the Environment. The index allowed measurement of the frequency and extent to which variables exceeded their respective WHO guidelines at each individual monitoring station included within GEMStat, allowing both spatial and temporal assessment of global water quality. Development of the index was followed by preliminary sensitivity analysis and verification of the index against real water quality data.
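A sketch of the CCME water quality index equation referenced above (WQI 1.0, combining scope F1, frequency F2 and amplitude F3); the variables, guideline values and measurements are illustrative, and only upper-limit guidelines are handled, so treat this as an assumed simplification of the full CCME procedure.

```python
import math

def ccme_wqi(tests, guidelines):
    """tests: {variable: [measurements]}; guidelines: {variable: upper limit}."""
    variables = list(guidelines)
    failed_vars = [v for v in variables
                   if any(x > guidelines[v] for x in tests[v])]
    all_tests = [(x, guidelines[v]) for v in variables for x in tests[v]]
    failed = [(x, g) for x, g in all_tests if x > g]

    f1 = 100.0 * len(failed_vars) / len(variables)   # scope
    f2 = 100.0 * len(failed) / len(all_tests)        # frequency
    nse = sum(x / g - 1.0 for x, g in failed) / len(all_tests)
    f3 = nse / (0.01 * nse + 0.01)                   # amplitude
    return 100.0 - math.sqrt(f1 ** 2 + f2 ** 2 + f3 ** 2) / 1.732

# Illustrative two-variable station record (units: mg/L)
wqi = ccme_wqi(
    {"nitrate": [8.0, 12.0, 9.0], "arsenic": [0.005, 0.004, 0.006]},
    {"nitrate": 10.0, "arsenic": 0.010},
)
```

One nitrate exceedance out of six tests drags the index from 100 to roughly 70, showing how F1 (half the variables failed) dominates the score.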

  11. Global sensitivity analysis of the dispersion maximum position of the PCFs with circular holes

    NASA Astrophysics Data System (ADS)

    Guryev, Igor; Sukhoivanov, Igor; Andrade Lucio, Jose A.; Vargas Rodrigues, Everardo; Shulika, Oleksiy; Mata Chavez, Ruth I.; Baca Montero, Eric R.

    2015-08-01

    Microstructured fibers have recently become popular due to their numerous applications in fiber lasers, supercontinuum generation and pulse reshaping. One of the most important properties of such fibers is their dispersion. Fine tuning of the dispersion (i.e. dispersion management) is one of the crucial peculiarities of photonic crystal fibers (PCFs), which are a particular case of microstructured fibers. In recent years, various designs of PCFs possessing specially designed dispersion shapes have been presented. However, no universal technique exists that would allow tuning the PCF dispersion without resorting to optimization methods. In our work, we investigate the sensitivity of the PCF dispersion with respect to variation of its basic parameters. This knowledge allows fine-tuning the position of the local maximum of the PCF dispersion while keeping other properties unchanged. The work is organized as follows. In the first section we discuss the dispersion computation method that is suitable for the global sensitivity analysis. The second section presents the global sensitivity analysis for this specific case, where we also discuss the possible selection of the variable parameters.

  12. Global sensitivity analysis of ozone, HO2, and OH during ARCTAS campaign

    NASA Astrophysics Data System (ADS)

    Christian, K. E.; Mao, J.; Brune, W. H.

    2015-12-01

    Modeling the chemical state of the atmosphere is a complicated endeavor due to the complex, non-linear interactions between meteorology, emissions, and kinetics that govern trace gas concentrations. Given the rapid environmental changes taking place, the Arctic is one area of particular interest with regard to climate and atmospheric composition. To observe these changes to the Arctic atmosphere, NASA funded the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign (2008). As part of the mission, measurements of oxidative factors (hydroxyl (OH) and hydroperoxyl (HO2) abundances) were taken using the Airborne Tropospheric Hydrogen Oxides Sensor (ATHOS) aboard the NASA DC-8. Using GEOS-Chem, a popular global chemical transport model, we perform a global sensitivity analysis for the period of the ARCTAS campaign, allowing non-linear interactions between input factors to be accounted for and quantified in the analysis. Sensitivities are determined for around 50 model input factors, and for combinations of pairs of input factors, using the Random Sampling - High Dimensional Model Representation (RS-HDMR) method. We calculate the uncertainty in these oxidative factors, as well as in ozone, the ozone production rate, and the hydroxyl production rate, and determine the sensitivity of these quantities, and of the differences between the measured and modeled oxidative factors, to model inputs in meteorology, emissions, and chemistry. This presentation will include a solid estimate of GEOS-Chem model uncertainty for the period of the ARCTAS campaign; the emissions, meteorology, or chemistry inputs to which oxidative properties are most sensitive; and the factors to which the differences between the modeled and measured oxidative factors are most sensitive.

  13. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/. Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.

  14. A Protocol for the Global Sensitivity Analysis of Impact Assessment Models in Life Cycle Assessment.

    PubMed

    Cucurachi, S; Borgonovo, E; Heijungs, R

    2016-02-01

    The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need to integrate the LCA and risk analysis methodologies, given their many common aspects. One way to achieve such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention from practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis in LCA. The article focuses on life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for the quantification of the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of (i) the structure of the model and (ii) the importance of the uncertain model inputs and the interactions among them.

  15. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex, with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based on different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem; (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficient we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices characterizing a range of important properties of the metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol

  16. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  17. Global analysis of parametric sensitivity of precipitation in the Community Atmosphere Model (CAM5)

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Yan, H.; Zhao, C.; Hou, Z.; Wang, H.; Rasch, P. J.; Klein, S. A.; Lucas, D.; Tannahill, J.

    2013-12-01

    In this study, we investigate the sensitivity of precipitation characteristics, including the mean, extremes and diurnal cycle, to dozens of uncertain parameters mainly related to cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both Latin hypercube sampling and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations (1356 in total). The CAM5 ensemble simulates the mean precipitation reasonably well, but fails to capture the diurnal cycle of precipitation over land. The phase of diurnal precipitation associated with convection propagation over the Central US seems to be related more to model structural errors than to parametric uncertainties. Parametric calibration could possibly improve CAM5 precipitation over regions, such as the Tropical Western Pacific, that have a relatively weak diurnal cycle and high model parameter identifiability. The precipitation variance is large and the diurnal cycle is strong over South America and Central Africa, where parametric calibration can possibly improve the model prediction of mean precipitation but not the diurnal cycle. Variance-based sensitivity analysis using a generalized linear model (GLM) is conducted to examine the relative contributions of individual parameter perturbations and their interactions to the global and regional precipitation. We characterize the global spatial distribution as well as the scale (global vs. local) and seasonal dependence of the parametric sensitivity of precipitation, and identify a few parameters that dominate the behavior of the mean, extremes or diurnal cycle of precipitation, respectively. Results suggest that the model-simulated precipitation is remarkably sensitive to a few cloud-related parameters, while aerosols have a minor impact on the diurnal cycle of precipitation in the current CAM5. 
The interactions among the selected parameters contribute a relatively small portion to

  18. Decomposition method of complex optimization model based on global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu

    2014-07-01

    Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems or components. However, numerous coupling variables appear among the decomposed sub-models, making decomposed optimization inefficient and its results poor. Although some collaborative optimization methods have been proposed to handle the coupling variables, there has been no strategy for reducing the coupling degree among the decomposed sub-models at the moment a complex optimization model is first decomposed. Therefore, this paper proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed on the principle of minimizing the sum of sensitivities between the design functions and design variables belonging to different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as far as possible, to reduce the impact on other sub-models caused by changes of coupling variables in any one sub-model. Two collaborative optimization models of a gear reducer were built in the multidisciplinary design optimization software iSIGHT; the optimization results show that the decomposition method proposed in this paper requires fewer analyses and increases computational efficiency by 29.6%. The new decomposition method was also successfully applied to the complex optimization problem of hydraulic excavator working devices, showing that the proposed approach can reduce the mutual coupling degree between sub-models. In summary, this research proposes a decomposition method based on global sensitivity information that minimizes the linkages among sub-models after decomposition, provides a reference for decomposing complex optimization models, and has practical engineering significance.

  19. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamics calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty, due to the limited experimental data available and to a poor understanding of the interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which are of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and
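The Principal Component Analysis of local sensitivity values can be sketched on a synthetic sensitivity matrix (conditions × reactions); the matrix below is fabricated for illustration, not derived from the 111-species mechanism.

```python
import numpy as np

rng = np.random.default_rng(9)
n_cond, n_react = 20, 15   # operating conditions x candidate reactions

# Synthetic normalized sensitivities: reactions 0-2 matter at every
# condition, the remaining columns carry only small noise
S = rng.normal(0.0, 0.02, (n_cond, n_react))
S[:, 0] += rng.normal(1.0, 0.1, n_cond)
S[:, 1] += rng.normal(-0.8, 0.1, n_cond)
S[:, 2] += rng.normal(0.5, 0.1, n_cond)

# Eigen-decomposition of S^T S: the dominant eigenvector identifies the
# reaction set controlling the response across all conditions at once
w, V = np.linalg.eigh(S.T @ S)
leading = V[:, np.argmax(w)]
important = np.argsort(-np.abs(leading))[:3]   # candidate skeletal set
```

Analysing all conditions through one eigen-decomposition is what allows the reduction to respect ignition, propagation and extinction behaviour simultaneously rather than case by case.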

  20. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
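    The Monte Carlo Filtering idea described above can be sketched in a few lines: sample the inputs, split the runs into "behavioural" and "non-behavioural" subsets according to the outputs, and rank inputs by how differently they are distributed in the two subsets. This is a minimal illustration with a hypothetical three-input toy model, not the flight-system code used by the authors.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # hypothetical toy model: output dominated by input 0, weakly by input 1
    return 4.0 * x[:, 0] + 0.3 * x[:, 1] + 0.05 * x[:, 2]

def ks_stat(a, b):
    # two-sample Kolmogorov-Smirnov statistic: maximum gap between ECDFs
    grid = np.sort(np.concatenate([a, b]))
    ecdf = lambda s: np.searchsorted(np.sort(s), grid, side="right") / len(s)
    return np.max(np.abs(ecdf(a) - ecdf(b)))

X = rng.uniform(0.0, 1.0, size=(5000, 3))
y = model(X)
behavioural = y < np.median(y)  # filter: runs leading to the output subset of interest

# inputs whose behavioural / non-behavioural distributions differ most are most influential
scores = [ks_stat(X[behavioural, i], X[~behavioural, i]) for i in range(3)]
ranking = np.argsort(scores)[::-1]
print(ranking)  # input 0 should come out most influential
```

    In practice the behavioural filter would encode a requirement on the system output (e.g. a safety threshold) rather than the median used here for illustration.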

  1. Global Sensitivity analysis of atmospheric chemistry models using emulator-based and emulator-free methods

    NASA Astrophysics Data System (ADS)

    Ryan, Edmund; Wild, Oliver; O'Connor, Fiona; Voulgarakis, Apostolos; Lee, Lindsay

    2017-04-01

    Carrying out global sensitivity analysis (GSA) for a numerical model is critical in determining which inputs (e.g. parameters, driving data) most affect the model output. This informs us of which inputs to include: (i) for model calibration; (ii) when quantifying the uncertainty in the output given the uncertainty in the inputs. It is also used to diagnose differences in outputs between models. GSA quantifies the sensitivity index (SI) of a particular input - the percentage of the total variability in the output attributed to the changes in that input - by averaging over the other inputs, rather than fixing the other inputs at particular values as done in one-at-a-time sensitivity analysis. Traditional means of computing the SIs involve running the model thousands of times, but this becomes infeasible when the computational cost is high. GSA methods which use a surrogate of the model, called an emulator, are popular as they typically require far fewer runs of the model. Here we consider methods that would further reduce the computational burden of sensitivity analysis. When the output of a model is non-scalar, it is standard practice with an emulator-based GSA method to build a separate emulator for each dimension of the output space. An alternative is to apply principal component analysis (PCA) to reduce the output dimension and then build an emulator for each of the transformed outputs. We consider here a global map of methane lifetimes from our chemistry models. This requires 2000 emulators for the emulator-based GSA methods, but only 10-50 emulators for the PCA-emulator hybrid approach, reducing the computation of the SIs from 1 hour to 3 minutes on a desktop computer. The other benefit of PCA is that the transformed outputs are orthogonal, and thus building separate emulators is appropriate. Results show that very similar maps of SIs are produced whether the emulator-only or emulator-PCA hybrid approach is used. Another avenue to reducing the computational
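    The PCA-emulator hybrid described above can be sketched as follows: instead of one emulator per grid cell of the output map, project the outputs onto a few principal components and emulate only the component scores. This is a minimal sketch with a synthetic linear toy model and least-squares "emulators" standing in for the Gaussian process emulators the authors use; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_runs, n_inputs, n_out = 200, 4, 500   # toy: a 500-dimensional output field
X = rng.uniform(-1, 1, size=(n_runs, n_inputs))
basis = rng.normal(size=(n_inputs, n_out))
Y = X @ basis + 0.01 * rng.normal(size=(n_runs, n_out))  # toy model output

# PCA via SVD on the centred outputs
mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.99) + 1  # keep 99% of variance
scores = (Y - mean) @ Vt[:k].T          # n_runs x k transformed outputs

# one cheap emulator per retained component (instead of one per output dimension)
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)

# reconstruct predictions in the full output space from the emulated components
Y_hat = (X @ coef) @ Vt[:k] + mean
print(k, float(np.max(np.abs(Y_hat - Y))))
```

    Because the PCA components are orthogonal, emulating each one independently is justified, which is the point the abstract makes about the transformed outputs.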

  2. What do we mean by sensitivity analysis? The need for comprehensive characterization of "global" sensitivity in Earth and Environmental systems models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2015-05-01

    Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.

  3. LCA of emerging technologies: addressing high uncertainty on inputs' variability when performing global sensitivity analysis.

    PubMed

    Lacirignola, Martino; Blanc, Philippe; Girard, Robin; Pérez-López, Paula; Blanc, Isabelle

    2017-02-01

    In the life cycle assessment (LCA) context, global sensitivity analysis (GSA) has been identified by several authors as a relevant practice to enhance the understanding of the model's structure and ensure reliability and credibility of the LCA results. GSA allows establishing a ranking among the input parameters, according to their influence on the variability of the output. Such a feature is of particular interest when defining parameterized LCA models. When performing a GSA, the description of the variability of each input parameter may affect the results. This aspect is critical when studying new products or emerging technologies, where data regarding the model inputs are very uncertain and may cause misleading GSA outcomes, such as inappropriate input rankings. A systematic assessment of this sensitivity issue is now proposed. We develop a methodology to analyze the sensitivity of the GSA results (i.e. the stability of the ranking of the inputs) with respect to the description of the model inputs (i.e. the definition of their inherent variability). With this research, we aim to enrich the debate on the application of GSA to LCAs affected by high uncertainties. We illustrate its application with a case study that elaborates a simple model expressing the life cycle greenhouse gas emissions of enhanced geothermal systems (EGS) as a function of a few key parameters. Our methodology allows identifying the key inputs of the LCA model, taking into account the uncertainty related to their description.
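    The input ranking that a GSA produces can be sketched with a crude first-order variance-based index, estimated by binning each input and comparing the variance of the conditional means to the total output variance. The toy emissions model and its parameter ranges below are entirely hypothetical, not the EGS model of the paper; changing those ranges (the "description of the inputs") is exactly what can reshuffle the ranking.

```python
import numpy as np

rng = np.random.default_rng(2)

def ghg_model(x):
    # hypothetical simplified emissions-per-unit-energy model
    drilling, output_mw, lifetime = x[:, 0], x[:, 1], x[:, 2]
    return drilling / (output_mw * lifetime)

def first_order_si(xi, y, bins=20):
    # crude first-order index: Var of E[y | xi] divided by Var[y], via binning
    edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, xi) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.sum(counts * (cond_means - y.mean()) ** 2) / (len(y) * y.var())

n = 20000
X = np.column_stack([rng.uniform(500, 1500, n),   # drilling-related emissions
                     rng.uniform(10, 40, n),      # plant output
                     rng.uniform(20, 30, n)])     # plant lifetime
y = ghg_model(X)
si = [first_order_si(X[:, i], y) for i in range(3)]
print(np.argsort(si)[::-1])  # ranking of inputs by influence on output variance
```

    Widening or narrowing any of the uniform ranges above and re-running the ranking is the kind of stability check the abstract argues for.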

  4. A Global Analysis of CYP51 Diversity and Azole Sensitivity in Rhynchosporium commune.

    PubMed

    Brunner, Patrick C; Stefansson, Tryggvi S; Fountaine, James; Richina, Veronica; McDonald, Bruce A

    2016-04-01

    CYP51 encodes the target site of the azole class of fungicides widely used in plant protection. Some ascomycete pathogens carry two CYP51 paralogs called CYP51A and CYP51B. A recent analysis of CYP51 sequences in 14 European isolates of the barley scald pathogen Rhynchosporium commune revealed three CYP51 paralogs, CYP51A, CYP51B, and a pseudogene called CYP51A-p. The same analysis showed that CYP51A exhibits a presence/absence polymorphism, with lower sensitivity to azole fungicides associated with the presence of a functional CYP51A. We analyzed a global collection of nearly 400 R. commune isolates to determine if these findings could be extended beyond Europe. Our results strongly support the hypothesis that CYP51A played a key role in the emergence of azole resistance globally and provide new evidence that the CYP51A gene in R. commune has further evolved, presumably in response to azole exposure. We also present evidence for recent long-distance movement of evolved CYP51A alleles, highlighting the risk associated with movement of fungicide resistance alleles among international trading partners.

  5. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    SciTech Connect

    Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; Debusschere, B.; Najm, H. N.; Williams, M.; Thornton, Peter E.

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  6. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    DOE PAGES

    Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  7. The impact of prior parameter ranges on model behaviour using Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Almeida, Susana; Nijzink, Remko C.; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Wagener, Thorsten; Freer, Jim; Parajka, Juraj; Hrachowitz, Markus; Arheimer, Berit; Savenije, Hubert; Han, Dawei

    2017-04-01

    Hydrological models are typically calibrated on available streamflow data or, more rarely, on other hydrologic variables (e.g. soil moisture, groundwater dynamics). Whilst the literature is increasingly extensive on the value of different hydrologic variables in constraining model predictions, less attention has been given to how to define plausible parameter prior distributions, or to how much such priors impact the range of model behaviour before further conditioning. This matters both for the uncertainty bounds of any model prediction and for the sensitivity of the model parameters to the chosen model outputs. In this study, we combine four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) with Global Sensitivity Analysis techniques to explore which parameters are most influential and how the parameter priors impact model predictions. Our analysis focuses on 27 catchments across Europe, capturing a wide range of climates, vegetation and landscape typologies in order to explore the effects of these physical and climatic properties on parameter prior distributions. Our findings provide new insights into the value of different sources of information for constraining hydrological model inputs, and for predicting water resource conditions in catchments worldwide.

  8. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  9. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  10. Spatial heterogeneity and sensitivity analysis of crop virtual water content at a global scale

    NASA Astrophysics Data System (ADS)

    Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2015-04-01

    In this study, the green and blue virtual water content (VWC) of four staple crops (i.e., wheat, rice, maize, and soybean) is quantified at a high resolution scale, for the period 1996-2005, and a sensitivity analysis is performed for model parameters. In each grid cell, the crop VWC is obtained by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield. The evapotranspiration is determined with a daily soil water balance that takes into account crop and soil properties, production conditions, and climate. The actual yield is estimated using country-based values provided by the FAOSTAT database multiplied by a coefficient adjusting for the spatial variability within countries. The model improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The overall water use (blue+green) for the global production of the four grains investigated is 2673 km³/yr. Food production almost entirely depends on green water (>90%), but, when applied, irrigation makes production more water efficient, thus requiring lower VWC. The spatial variability of the virtual water content is partly driven by the yield pattern, with an average correlation coefficient of 0.83, and partly by reference evapotranspiration, with a correlation coefficient of 0.27. Wheat shows the highest spatial variability since it is grown under a wide range of climatic conditions, soil properties, and agricultural practices. The sensitivity analysis is performed to understand how uncertainties in input data propagate and impact the virtual water content accounting. In each cell, fixed changes are introduced to one input parameter at a time, and a sensitivity index, SI, is determined as the ratio between the variation of VWC relative to its baseline value and the variation of the input parameter relative to its reference value. VWC is found to be most sensitive to planting date (PD), followed by the length of
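    The sensitivity index defined above (relative change in VWC divided by relative change in one input, others held at reference values) is a simple elasticity and can be sketched directly. The toy VWC model and its reference values below are illustrative placeholders, not the paper's soil water balance.

```python
def sensitivity_index(model, params, name, rel_change=0.1):
    """SI = (relative change in output) / (relative change in one input),
    with all other inputs held at their reference values."""
    base = model(**params)
    perturbed = dict(params, **{name: params[name] * (1 + rel_change)})
    return ((model(**perturbed) - base) / base) / rel_change

# hypothetical toy VWC model: seasonal evapotranspiration divided by yield
def vwc(et_daily, season_days, crop_yield):
    return et_daily * season_days / crop_yield

ref = dict(et_daily=4.5, season_days=130, crop_yield=3.0)
for p in ref:
    print(p, round(sensitivity_index(vwc, ref, p), 3))
```

    For this multiplicative form the index is exactly +1 for the numerator inputs and negative for yield, which mirrors the abstract's finding that VWC decreases when production becomes more water efficient.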

  11. Global sensitivity analysis of bandpass and antireflection coating manufacturing by numerical space filling designs.

    PubMed

    Vasseur, Olivier; Cathelinaud, Michel; Claeys-Bruno, Magalie; Sergent, Michelle

    2011-03-20

    We demonstrate the effectiveness of global sensitivity analyses of optical coating manufacturing for assessing the robustness of filters through computer experiments. The most critical interactions of layers are determined for a 29 quarter-wave layer bandpass filter and for an antireflection coating with eight non-quarter-wave layers. Two monitoring techniques with the associated production performances are considered, and their influence on the classification of interactions is discussed. Global sensitivity analyses by numerical space filling designs provide guidance for improving filter manufacturing robustness against error effects and for assessing the potential robustness of the coatings.

  12. Maximising the value of computer experiments using multi-method global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, F.; Iwema, J.; Rosolem, R.; Wagener, T.

    2015-12-01

    Global Sensitivity Analysis (GSA) is increasingly recognised as an essential technique for a structured and quantitative approach to the calibration and diagnostic evaluation of environmental models. However, the implementation and interpretation of GSA is complicated by a number of choices that users need to make and for which multiple, equally sensible, options are often available. These choices include, first of all, the choice of GSA method, as well as many implementation details like the definition of the sampling space and strategy. The issue is exacerbated by computational complexity, in terms of both computing time and storage space needed to run the model, which might strongly constrain the number of experiments that can be afforded. While several algorithmic improvements can be adopted to reduce the computing burden of specific GSA methods, in this talk we discuss how a multi-method approach can be established to maximise the information gathered from an individual sample of model evaluations. Using as an example the GSA of a land surface model, we show how different analytical and approximation techniques can be applied sequentially to the same sample of model inputs and outputs, providing complementary information about the model behaviour from different angles and allowing the impact of the sampling choices to be tested. We further expand our analysis to show how GSA is interconnected with model calibration and uncertainty analysis, so that a careful design of the simulation experiment can be used to address different questions simultaneously.

  13. A global sensitivity analysis of the PlumeRise model of volcanic plumes

    NASA Astrophysics Data System (ADS)

    Woodhouse, Mark J.; Hogg, Andrew J.; Phillips, Jeremy C.

    2016-10-01

    Integral models of volcanic plumes allow predictions of plume dynamics to be made and the rapid estimation of volcanic source conditions from observations of the plume height by model inversion. Here we introduce PlumeRise, an integral model of volcanic plumes that incorporates a description of the state of the atmosphere, includes the effects of wind and the phase change of water, and has been developed as a freely available web-based tool. The model can be used to estimate the height of a volcanic plume when the source conditions are specified, or to infer the strength of the source from an observed plume height through a model inversion. The predictions of the volcanic plume dynamics produced by the model are analysed in four case studies in which the atmospheric conditions and the strength of the source are varied. A global sensitivity analysis of the model to a selection of model inputs is performed and the results are analysed using parallel coordinate plots for visualisation and variance-based sensitivity indices to quantify the sensitivity of model outputs. We find that if the atmospheric conditions do not vary widely then there is a small set of model inputs that strongly influence the model predictions. When estimating the height of the plume, the source mass flux has a controlling influence on the model prediction, while variations in the plume height strongly affect the inferred value of the source mass flux when performing inversion studies. The values taken for the entrainment coefficients have a particularly important effect on the quantitative predictions. The dependence of the model outputs on variations in the inputs is discussed and compared to simple algebraic expressions that relate source conditions to the height of the plume.
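    The variance-based sensitivity indices mentioned above are commonly estimated with a Saltelli-style sampling scheme; a compact version using the Jansen estimator for first-order indices can be sketched as follows. The additive test function is a deliberately simple stand-in whose analytic indices are known, not the PlumeRise model.

```python
import numpy as np

rng = np.random.default_rng(5)

def sobol_first_order(model, n_inputs, n=100000):
    """Saltelli-style estimator of first-order variance-based indices.

    Uses two independent input matrices A, B and, for each input i, a
    hybrid matrix AB_i equal to A with column i taken from B; the Jansen
    estimator then gives S_i = 1 - E[(y_B - y_ABi)^2] / (2 Var[y])."""
    A = rng.uniform(0, 1, size=(n, n_inputs))
    B = rng.uniform(0, 1, size=(n, n_inputs))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]))
    si = []
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        si.append(1 - np.mean((yB - model(ABi)) ** 2) / (2 * var))
    return np.array(si)

# additive toy function with known analytic indices S1 = 0.8, S2 = 0.2
si = sobol_first_order(lambda X: X[:, 0] + 0.5 * X[:, 1], 2)
print(si)
```

    The same sample matrices also yield total-effect indices (via a second Jansen formula), which is why this scheme is a common default for variance-based GSA.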

  14. Global sensitivity analysis and Bayesian parameter inference for solute transport in porous media colonized by biofilms

    NASA Astrophysics Data System (ADS)

    Younes, A.; Delay, F.; Fajraoui, N.; Fahs, M.; Mara, T. A.

    2016-08-01

    The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters.

  15. Global sensitivity analysis and Bayesian parameter inference for solute transport in porous media colonized by biofilms.

    PubMed

    Younes, A; Delay, F; Fajraoui, N; Fahs, M; Mara, T A

    2016-08-01

    The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters.

  16. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    PubMed Central

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W.; Loizou, George D.

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis
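    The Morris screening method mentioned above ranks parameters by the mean absolute "elementary effect" observed along random one-at-a-time trajectories; a minimal sketch follows. The three-parameter test function is a hypothetical stand-in for the thyroid model, chosen so that the expected ranking is obvious.

```python
import numpy as np

rng = np.random.default_rng(3)

def morris_screen(model, n_params, n_traj=50, delta=0.2):
    """Morris screening: mean |elementary effect| (mu*) per parameter,
    accumulated over random one-at-a-time trajectories in the unit hypercube."""
    ee = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = rng.uniform(0, 1 - delta, size=n_params)
        y0 = model(x)
        for i in rng.permutation(n_params):   # perturb each parameter once, in random order
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            ee[i].append((y1 - y0) / delta)
            x, y0 = x_new, y1
    return np.array([np.mean(np.abs(e)) for e in ee])

# hypothetical model with a strong, a moderate, and a negligible parameter
f = lambda x: 5 * x[0] + x[1] ** 2 + 0.01 * x[2]
mu_star = morris_screen(f, 3)
print(np.argsort(mu_star)[::-1])
```

    Parameters with small mu* can then be fixed, and the surviving set passed to a more expensive method such as the Gaussian emulation the abstract describes.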

  17. Global sensitivity analysis of a local water balance model predicting evaporation, water yield and drought

    NASA Astrophysics Data System (ADS)

    Speich, Matthias; Zappa, Massimiliano; Lischke, Heike

    2017-04-01

    Evaporation and transpiration affect both catchment water yield and the growing conditions for vegetation. They are driven by climate, but also depend on vegetation, soil and land surface properties. In hydrological and land surface models, these properties may be included as constant parameters, or as state variables. Often, little is known about the effect of these variables on model outputs. In the present study, the effect of surface properties on evaporation was assessed in a global sensitivity analysis. To this end, we developed a simple local water balance model combining state-of-the-art process formulations for evaporation, transpiration and soil water balance. The model is vertically one-dimensional, and the relative simplicity of its process formulations makes it suitable for integration in a spatially distributed model at regional scale. The main model outputs are annual total evaporation (TE, i.e. the sum of transpiration, soil evaporation and interception), and a drought index (DI), which is based on the ratio of actual and potential transpiration. This index represents the growing conditions for forest trees. The sensitivity analysis was conducted in two steps. First, a screening analysis was applied to identify unimportant parameters out of an initial set of 19 parameters. In a second step, a statistical meta-model was applied to a sample of 800 model runs, in which the values of the important parameters were varied. Parameter effects and interactions were analyzed with effects plots. The model was driven with forcing data from ten meteorological stations in Switzerland, representing a wide range of precipitation regimes across a strong temperature gradient. Of the 19 original parameters, eight were identified as important in the screening analysis. Both steps highlighted the importance of Plant Available Water Capacity (AWC) and Leaf Area Index (LAI). However, their effect varies greatly across stations. For example, while a transition from a

  18. Sensitivity analysis via kinetic global modeling of rotating spokes in HiPIMS

    NASA Astrophysics Data System (ADS)

    Gallian, Sara; Trieschmann, Jan; Mussenbrock, Thomas; Hitchon, William N. G.; Brinkmann, Ralf Peter

    2013-09-01

    High Power Impulse Magnetron Sputtering discharges are characterized by high density plasma (peak electron density 10^18 - 10^20 m^-3) in a strong magnetic field (100 mT), with highly energetic secondary electrons (500 - 1000 eV). The combination of these factors results in a discharge showing a vast range of instabilities; in particular, a single rotating high emissivity region is often observed. This highly ionized region, or spoke, shows a stationary behavior in the current plateau region and rotates with Ω ~ kHz. We apply a global model that evolves the electron energy distribution function self-consistently with the rate equations for Ar and Al species. The volume average is performed only in the structure region and a net neutral flux term is imposed to model the spoke rotation. Outside the spoke region, the neutral densities are evolved according to a phenomenological fluid model. The model is solved using a relaxation method. We present a sensitivity analysis of the resulting steady state on the different physical mechanisms and comment on the anomalous electron transport observed. The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft within the frame of SFB-TR 87.

  19. A comparison of five forest interception models using global sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Linhoss, Anna C.; Siegert, Courtney M.

    2016-07-01

    Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that, on average, storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose the parameters that are most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.

  20. Using Global Sensitivity Analysis to Understand the Implications of Epistemic Uncertainty in Earth Systems Modelling

    NASA Astrophysics Data System (ADS)

    Wagener, T.; Pianosi, F.; Almeida, S.; Holcombe, E.

    2016-12-01

    We can define epistemic uncertainty as those uncertainties that are not well determined by historical observations. This lack of determination can be because the future is not like the past, because the historical data are unreliable (imperfectly recorded from proxies or missing), or because they are scarce (either because measurements are not available at the right scale or there is simply no observation network available). This kind of uncertainty is typical of earth system modelling, but our approaches to address it are poorly developed. Because epistemic uncertainties cannot easily be characterised by probability distributions, traditional uncertainty analysis techniques based on Monte Carlo simulation and forward propagation of uncertainty are not adequate. Global Sensitivity Analysis (GSA) can provide an alternative approach where, rather than quantifying the impact of poorly defined or even unknown uncertainties on model predictions, one can investigate at what level such uncertainties would start to matter and whether this level is likely to be reached within the relevant time period analysed. The underlying objective of GSA in this case lies in mapping the uncertain input factors onto critical regions of the model output, e.g. where the output exceeds a certain threshold. Methods to implement this mapping step have so far received less attention and significant improvement is needed. We will present an example from landslide modelling - a field where observations are scarce, sub-surface characteristics are poorly constrained, and potential future rainfall triggers can be highly uncertain due to climate change. We demonstrate an approach that combines GSA and advanced Classification and Regression Trees (CART) to understand the risk of slope failure for an application in the Caribbean. We close with a discussion of opportunities for further methodological advancement.
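
    The mapping step described in this record, flagging runs whose output falls in a critical region and comparing input distributions between the two groups, can be sketched as simple Monte Carlo filtering. The two-input "slope" model, its coefficients, and the failure threshold below are illustrative assumptions, not the Caribbean case study.

```python
# Monte Carlo filtering sketch: sample uncertain inputs, flag runs whose output
# exceeds a threshold, and compare input distributions between the two groups.
# The toy failure model and threshold are assumptions for demonstration.
import random

random.seed(1)

def toy_model(rain_trigger, cohesion):
    # hypothetical failure proxy: positive means the slope fails
    return rain_trigger - 2.0 * cohesion

n = 500
samples = [(random.uniform(0, 1), random.uniform(0, 1)) for _ in range(n)]
critical = [s for s in samples if toy_model(*s) > 0.0]
safe = [s for s in samples if toy_model(*s) <= 0.0]

def ks_distance(a, b):
    """Max vertical distance between two empirical CDFs (simple O(n^2) version)."""
    grid = sorted(a + b)
    cdf = lambda data, x: sum(v <= x for v in data) / len(data)
    return max(abs(cdf(a, x) - cdf(b, x)) for x in grid)

dist = {}
for i, name in enumerate(["rain_trigger", "cohesion"]):
    dist[name] = ks_distance([s[i] for s in critical], [s[i] for s in safe])
    print(name, round(dist[name], 2))
```

    An input whose distribution differs most between the critical and non-critical groups is the one whose uncertainty "starts to matter" first.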

  1. Global sensitivity analysis for the geostatistical characterization of a regional-scale sedimentary aquifer

    NASA Astrophysics Data System (ADS)

    Bianchi Janetti, Emanuela; Riva, Monica; Guadagnini, Alberto

    2017-04-01

    We perform a variance-based global sensitivity analysis to assess the impact of the uncertainty associated with (a) the spatial distribution of hydraulic parameters, e.g., hydraulic conductivity, and (b) the conceptual model adopted to describe the system on the characterization of a regional-scale aquifer. We do so in the context of inverse modeling of the groundwater flow system. The study aquifer lies within the provinces of Bergamo and Cremona (Italy) and covers a planar extent of approximately 785 km^2. Analysis of available sedimentological information allows the identification of a set of main geo-materials (facies/phases) which constitute the geological makeup of the subsurface system. We parameterize the conductivity field following two different conceptual schemes. The first one is based on the representation of the aquifer as a Composite Medium. In this conceptualization the system is composed of distinct lithological units (five, in our case). Hydraulic properties (such as conductivity) in each unit are assumed to be uniform. The second approach assumes that the system can be modeled as a collection of media coexisting in space to form an Overlapping Continuum. A key point in this model is that each point in the domain represents a finite volume within which each of the (five) identified lithofacies can be found with a certain volumetric percentage. Groundwater flow is simulated with the numerical code MODFLOW-2005 for each of the adopted conceptual models. We then quantify the relative contribution of the considered uncertain parameters, including boundary conditions, to the total variability of the piezometric level recorded in a set of 40 monitoring wells by relying on the variance-based Sobol indices. The latter are derived numerically for the investigated settings through the use of a model-order reduction technique based on the polynomial chaos expansion approach.
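
    The first-order Sobol index used here is S_i = Var(E[Y|X_i]) / Var(Y). A brute-force estimate by binning one input and averaging the output within bins can be sketched as follows; the linear two-input toy model is an assumption standing in for the MODFLOW setup.

```python
# Brute-force first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y), estimated
# by binning X_i and averaging Y within bins. Toy model, not the aquifer model.
import random, statistics

random.seed(0)

def model(x1, x2):
    return 4.0 * x1 + 1.0 * x2          # assumed linear toy model

n, bins = 20000, 40
xs = [(random.random(), random.random()) for _ in range(n)]
ys = [model(*x) for x in xs]
var_y = statistics.pvariance(ys)

def first_order(index):
    buckets = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        buckets[min(bins - 1, int(x[index] * bins))].append(y)
    means = [statistics.fmean(b) for b in buckets if b]
    return statistics.pvariance(means) / var_y      # Var of conditional means

s1, s2 = first_order(0), first_order(1)
print(round(s1, 2), round(s2, 2))
```

    For this toy model the analytical values are S1 = 16/17 and S2 = 1/17; in practice a polynomial chaos surrogate, as in the record above, gives the same indices at far lower cost.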

  2. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    PubMed

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

    As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models that estimate nitrogen dynamics under irrigation are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, for modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters, and which of their interactions, contribute most to the variance of the model output. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics is reported. The objective was to compare different global sensitivity analysis (GSA) methods in identifying the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one of the soil physical parameters, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verify that more effort should be focused on quantifying soil parameters for more accurate nitrogen- and crop-related predictions, and stress the need to better calibrate the model in a global sense.
This study demonstrates the advantages of the GSA on a
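
    The screen-then-quantify workflow begins with Morris-style elementary effects. A minimal one-at-a-time (radial) variant can be sketched as below, where mu* (the mean absolute elementary effect) ranks parameters before a costlier variance-based step such as EFAST; the three-parameter toy model is an assumption, not the nitrogen-dynamics model of this record.

```python
# Morris-style screening sketch (simplified radial one-at-a-time design):
# mu*, the mean absolute elementary effect, ranks parameter importance.
import random

random.seed(2)

def model(p):
    # hypothetical response: strong effect of p[0], weak p[1], interaction p[0]*p[2]
    return 5.0 * p[0] + 0.5 * p[1] + 2.0 * p[0] * p[2]

k, r, delta = 3, 50, 0.25
mu_star = [0.0] * k
for _ in range(r):
    base = [random.uniform(0.0, 1.0 - delta) for _ in range(k)]
    y0 = model(base)
    for i in range(k):                    # perturb one factor at a time
        stepped = list(base)
        stepped[i] += delta
        mu_star[i] += abs(model(stepped) - y0) / delta
mu_star = [m / r for m in mu_star]
print([round(m, 2) for m in mu_star])
```

    Only the factors with large mu* would then be carried into the EFAST stage.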

  3. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides an opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For the MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agents' pumping behavior given the uncertainties of future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ describes the agents' attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. We therefore estimate the influence of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based cloud computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days needed to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
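
    The scenario runs for a GSA are independent of one another, which is what makes the cloud speed-up possible. A minimal sketch of that embarrassingly parallel pattern is below, with a local thread pool standing in for the Hadoop cluster and a placeholder profit function standing in for the coupled MAS/RRCA model (both names are assumptions).

```python
# Embarrassingly parallel scenario evaluation sketch. A local thread pool
# stands in for the Hadoop cluster; the placeholder profit function stands in
# for one coupled MAS + groundwater run.
from concurrent.futures import ThreadPoolExecutor

def run_scenario(params):
    kappa_pr, lam = params
    # placeholder for one coupled MAS/RRCA model run
    return 100.0 * kappa_pr - 20.0 * lam

scenarios = [(k / 10, l / 10) for k in range(10) for l in range(10)]
with ThreadPoolExecutor(max_workers=8) as pool:
    profits = list(pool.map(run_scenario, scenarios))
print(len(profits))
```

    At cluster scale the same map step is distributed across nodes, and the collected outputs feed the PCE-based variance decomposition.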

  4. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses, while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.

  5. Global Sensitivity Analysis for the determination of parameter importance in bio-manufacturing processes.

    PubMed

    Chhatre, Sunil; Francis, Richard; Newcombe, Anthony R; Zhou, Yuhong; Titchener-Hooker, Nigel; King, Josh; Keshavarz-Moore, Eli

    2008-10-01

    The present paper describes the application of GSA (Global Sensitivity Analysis) techniques to mathematical models of bioprocesses in order to rank inputs such as feed titres, flow rates and matrix capacities for the relative influence that each exerts upon outputs such as yield or throughput. GSA enables quantification of both the impact of individual variables on process outputs, as well as their interactions. These data highlight those attributes of a bioprocess which offer the greatest potential for achieving manufacturing improvements. Whereas previous GSA studies have been limited to individual unit operations, this paper extends the treatment to an entire downstream process and illustrates its utility by application to the production of a Fab-based rattlesnake antivenom called CroFab [(Crotalidae Polyvalent Immune Fab (Ovine); Protherics U.K. Limited]. Initially, hyperimmunized ovine serum containing rattlesnake antivenom IgG (product), other antibodies and albumin is applied to a synthetic affinity ligand adsorbent column to separate the antibodies from the albumin. The antibodies are papain-digested into Fab and Fc fragments, before concentration by ultrafiltration. Fc, residual IgG and albumin are eliminated by an ion-exchanger and then CroFab-specific affinity chromatography is used to produce purified antivenom. Application of GSA to the model of this process showed that product yield was controlled by IgG feed concentration and the synthetic-material affinity column's capacity and flow rate, whereas product throughput was predominantly influenced by the synthetic material's capacity, the ultrafiltration concentration factor and the CroFab affinity flow rate. Such information provides a rational basis for identifying the most promising strategies for delivering improvements to commercial-scale biomanufacturing processes.

  6. Global sensitivity analysis approach for input selection and system identification purposes--a new framework for feedforward neural networks.

    PubMed

    Fock, Eric

    2014-08-01

    A new algorithm for the selection of input variables for neural networks is proposed. This new method, applied after the training stage, ranks the inputs according to their importance in the variance of the model output. The use of a global sensitivity analysis technique, the extended Fourier amplitude sensitivity test, gives the total sensitivity index for each variable, which allows for the ranking and the removal of the less relevant inputs. Applied to some benchmark problems in the field of feature selection, the proposed approach performs well in retaining the relevant variables. This new method is a useful tool for removing superfluous inputs and for system identification.

  7. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through the uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value of and science behind the models will be undermined. These two issues, i.e. the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of model development, so as to quantify not only the uncertainty introduced by the addition of new environmental components, but also the effect that these new components have on existing components (interactions, non-linear responses).
Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping

  8. High-Throughput Analysis of Global DNA Methylation Using Methyl-Sensitive Digestion

    PubMed Central

    Feinweber, Carmen; Knothe, Claudia; Lötsch, Jörn; Thomas, Dominique; Geisslinger, Gerd; Parnham, Michael J.; Resch, Eduard

    2016-01-01

    DNA methylation is a major regulatory process of gene transcription, and aberrant DNA methylation is associated with various diseases including cancer. Many compounds have been reported to modify DNA methylation states. Despite increasing interest in the clinical application of drugs with epigenetic effects, and the use of diagnostic markers for genome-wide hypomethylation in cancer, large-scale screening systems to measure the effects of drugs on DNA methylation are limited. In this study, we improved the previously established fluorescence polarization-based global DNA methylation assay so that it is more suitable for application to human genomic DNA. Our methyl-sensitive fluorescence polarization (MSFP) assay was highly repeatable (inter-assay coefficient of variation = 1.5%) and accurate (r^2 = 0.99). According to signal linearity, only 50–80 ng human genomic DNA per reaction was necessary for the 384-well format. MSFP is a simple, rapid approach as all biochemical reactions and final detection can be performed in one well in a 384-well plate without purification steps in less than 3.5 hours. Furthermore, we demonstrated a significant correlation between MSFP and the LINE-1 pyrosequencing assay, a widely used global DNA methylation assay. MSFP can be applied for the pre-screening of compounds that influence global DNA methylation states and also for the diagnosis of certain types of cancer. PMID:27749902

  9. High-Throughput Analysis of Global DNA Methylation Using Methyl-Sensitive Digestion.

    PubMed

    Shiratori, Hiromi; Feinweber, Carmen; Knothe, Claudia; Lötsch, Jörn; Thomas, Dominique; Geisslinger, Gerd; Parnham, Michael J; Resch, Eduard

    2016-01-01

    DNA methylation is a major regulatory process of gene transcription, and aberrant DNA methylation is associated with various diseases including cancer. Many compounds have been reported to modify DNA methylation states. Despite increasing interest in the clinical application of drugs with epigenetic effects, and the use of diagnostic markers for genome-wide hypomethylation in cancer, large-scale screening systems to measure the effects of drugs on DNA methylation are limited. In this study, we improved the previously established fluorescence polarization-based global DNA methylation assay so that it is more suitable for application to human genomic DNA. Our methyl-sensitive fluorescence polarization (MSFP) assay was highly repeatable (inter-assay coefficient of variation = 1.5%) and accurate (r^2 = 0.99). According to signal linearity, only 50-80 ng human genomic DNA per reaction was necessary for the 384-well format. MSFP is a simple, rapid approach as all biochemical reactions and final detection can be performed in one well in a 384-well plate without purification steps in less than 3.5 hours. Furthermore, we demonstrated a significant correlation between MSFP and the LINE-1 pyrosequencing assay, a widely used global DNA methylation assay. MSFP can be applied for the pre-screening of compounds that influence global DNA methylation states and also for the diagnosis of certain types of cancer.

  10. Global Sensitivity Analysis of GEOS-Chem Modeled OH, HO2, and Ozone During the INTEX Campaigns (2004, 2006)

    NASA Astrophysics Data System (ADS)

    Christian, K. E.; Brune, W. H.; Mao, J.

    2016-12-01

    Developing predictive capability for the future atmospheric oxidation capacity requires a detailed analysis of model uncertainties and of the sensitivity of the modeled oxidation capacity to model input variables. We present results from a global sensitivity analysis of GEOS-Chem, a popular chemical transport model, using the Random Sampling-High Dimensional Model Representation (RS-HDMR) method. Analyzing the period of NASA's Intercontinental Chemical Transport Experiment (INTEX) A & B campaigns (2004, 2006) over North America and the Northern Pacific Ocean, we present the uncertainties and sensitivities of modeled OH, HO2, and ozone concentrations to model inputs perturbed simultaneously within their respective uncertainties. Sensitivities of the modeled oxidants were determined for around 50 model input factors using the RS-HDMR method. We have determined a solid estimate of GEOS-Chem model uncertainty for the period of the INTEX campaigns and of the emissions, chemical, and meteorological factors to which OH, HO2, and ozone are most sensitive. This analysis indicates that OH, HO2, and ozone are most sensitive to a combination of emissions factors, specifically CO, isoprene, and lightning and anthropogenic NOx, and to chemical factors, such as the NO2 + OH reaction rate, the photolysis of NO2 and ozone, and aerosol uptake of HO2. Comparing these sensitivities to measurements taken from the Airborne Tropospheric Hydrogen Oxides Sensor (ATHOS), we show where the model disagrees with measurements and which emissions, chemical, and meteorological input factors contribute the most to this disagreement.

  11. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
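
    The central contrast in this record, forcing bias versus zero-mean random error, can be sketched with a toy degree-day snow model: perturb the precipitation forcing both ways and compare the resulting deviation in peak SWE. The snow model, forcing series, and error magnitudes below are illustrative assumptions, not the Utah Energy Balance setup.

```python
# Bias-versus-random-error sketch: perturb precipitation forcing of a toy
# degree-day snow model and compare deviations in peak SWE. The model and
# error magnitudes are illustrative assumptions.
import random

random.seed(3)

def peak_swe(precip, temp, ddf=3.0):
    """Peak snow water equivalent (mm) from daily precip (mm) and temp (C)."""
    swe, peak = 0.0, 0.0
    for p, t in zip(precip, temp):
        if t <= 0.0:
            swe += p                         # accumulate precipitation as snow
        else:
            swe = max(0.0, swe - ddf * t)    # degree-day melt
        peak = max(peak, swe)
    return peak

days = 180
temp = [-5.0 + 10.0 * d / days for d in range(days)]   # gradual seasonal warming
precip = [2.0] * days
base = peak_swe(precip, temp)

def max_deviation(perturb, n=200):
    runs = [peak_swe([perturb(p) for p in precip], temp) for _ in range(n)]
    return max(abs(r - base) for r in runs)

bias_effect = max_deviation(lambda p: 1.2 * p)                         # +20% systematic bias
noise_effect = max_deviation(lambda p: max(0.0, p + random.gauss(0.0, 0.4)))  # zero-mean error
print(bias_effect > noise_effect)
```

    Because zero-mean daily errors largely cancel over an accumulation season while a bias compounds, the sketch reproduces the record's qualitative finding that outputs are more sensitive to biases than to random errors.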

  12. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can

  13. Global sensitivity analysis of the joint kinematics during gait to the parameters of a lower limb multi-body model.

    PubMed

    El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël

    2015-07-01

    Sensitivity analysis is a typical part of biomechanical model evaluation. For lower limb multi-body models, sensitivity analyses have mainly been performed on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis performed on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axes, 2.5 mm on the origins and insertions of ligaments) using statistical distributions and to propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This allows us to identify the parameters most influential on the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and on the joint angles and displacements. To quantify this influence, a Fourier-based algorithm of global sensitivity analysis coupled with Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints (that is, the distances between measured and model-determined skin marker positions) and of the kinematic constraints highly influence the joint kinematics obtained from the lower limb multi-body model, for example, the positions of the skin markers embedded in the shank and pelvis, the parameters of the patellofemoral hinge axis, and the parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of lower limb multi-body models may be considered essential.
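
    Latin hypercube sampling, as used to drive the Fourier-based GSA in this record, can be sketched in a few lines: each dimension is split into n strata, one point is drawn per stratum, and the columns are shuffled independently. The parameter ranges below follow the uncertainty bounds quoted in the abstract (2.5 cm marker position, 5° hinge-axis orientation); the pairing of values across dimensions is random.

```python
# Latin hypercube sampling sketch: one stratum per sample in each dimension,
# independently shuffled across dimensions. Ranges follow the abstract's
# quoted uncertainty bounds.
import random

random.seed(4)

def latin_hypercube(n, bounds):
    cols = []
    for lo, hi in bounds:
        strata = [(i + random.random()) / n for i in range(n)]  # one point per stratum
        random.shuffle(strata)                                  # decorrelate dimensions
        cols.append([lo + u * (hi - lo) for u in strata])
    return list(zip(*cols))                                     # n samples, one per row

# marker-position offset (cm) and hinge-axis orientation (deg)
samples = latin_hypercube(8, [(-2.5, 2.5), (-5.0, 5.0)])
for s in samples:
    print(tuple(round(v, 2) for v in s))
```

    The stratification guarantees that every marginal range is covered even with few model runs, which matters when each run is an expensive multi-body optimisation.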

  14. Economics in "Global Health 2035": a sensitivity analysis of the value of a life year estimates.

    PubMed

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-06-01

    In "Global health 2035: a world converging within a generation," The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs) assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  15. Global sensitivity analysis and uncertainties in SEA models of vibroacoustic systems

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2017-06-01

    The effect of parametric uncertainties on the dispersion of Statistical Energy Analysis (SEA) models of structural-acoustic coupled systems is studied with the Fourier analysis sensitivity test (FAST) method. The method is first applied to an academic example representing a transmission suite, then to a more complex industrial structure from the space industry. Two sets of parameters are considered, namely errors on the SEA model's coefficients, or directly the engineering parameters. The first case is an intrusive approach, but makes it possible to identify the dominant phenomena taking place in a given configuration. The second is non-intrusive and appeals more to engineering considerations, by studying the effect of input parameters such as geometry or material characteristics on the SEA outputs. A study of the distribution of results in each frequency band with the same sampling shows some interesting features, such as bimodal distributions in some ranges.
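
    The core of FAST is to drive each input along a space-filling search curve at its own integer frequency and read the first-order index off the output's Fourier spectrum at that frequency and its harmonics. A minimal sketch with a toy linear model (an assumption, not an SEA model) follows; frequencies are chosen so harmonics up to order 4 do not overlap.

```python
# Minimal FAST sketch: each input follows a search curve at its own integer
# frequency; the first-order index is the share of output variance found at
# that frequency and its harmonics. Toy linear model assumed.
import math

def fast_first_order(model, freqs, n=1001, harmonics=4):
    s = [2 * math.pi * k / n for k in range(n)]
    # classic transform: x_i(s) = 0.5 + arcsin(sin(w_i * s)) / pi, uniform on (0, 1)
    xs = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in freqs] for sk in s]
    y = [model(x) for x in xs]

    def power(j):                     # squared Fourier amplitude at frequency j
        a = sum(y[k] * math.cos(j * s[k]) for k in range(n)) / n
        b = sum(y[k] * math.sin(j * s[k]) for k in range(n)) / n
        return a * a + b * b

    total = sum(power(j) for j in range(1, (n - 1) // 2 + 1))
    return [sum(power(p * w) for p in range(1, harmonics + 1)) / total
            for w in freqs]

# frequencies 11 and 35: no common harmonics up to order 4
indices = fast_first_order(lambda x: x[0] + 2.0 * x[1], [11, 35])
print([round(v, 2) for v in indices])
```

    For this linear model the indices should approach 1/5 and 4/5, since the variance contributions scale with the squared coefficients.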

  16. The global burden of disease in 1990: summary results, sensitivity analysis and future directions.

    PubMed Central

    Murray, C. J.; Lopez, A. D.; Jamison, D. T.

    1994-01-01

    A basic requirement for evaluating the cost-effectiveness of health interventions is a comprehensive assessment of the amount of ill health (premature death and disability) attributable to specific diseases and injuries. A new indicator, the number of disability-adjusted life years (DALYs), was developed to assess the burden of disease and injury in 1990 for over 100 causes by age, sex and region. The DALY concept provides an integrative, comprehensive methodology to capture the entire amount of ill health which will, on average, be incurred during one's lifetime because of new cases of disease and injury in 1990. It differs in many respects from previous attempts at global and regional health situation assessment which have typically been much less comprehensive in scope, less detailed, and limited to a handful of causes. This paper summarizes the DALY estimates for 1990 by cause, age, sex and region. For the first time, those responsible for deciding priorities in the health sector have access to a disaggregated set of estimates which, in addition to facilitating cost-effectiveness analysis, can be used to monitor global and regional health progress for over a hundred conditions. The paper also shows how the estimates depend on particular values of the parameters involved in the calculation. PMID:8062404
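    The DALY bookkeeping described above can be sketched in a few lines. This is an intentionally simplified illustration: it applies continuous discounting to years of life lost (YLL) and years lived with disability (YLD) but omits the age weighting the 1990 study also used, and all numbers are hypothetical:

```python
import math

def dalys(deaths, years_lost_per_death, cases, disability_weight,
          duration_years, discount_rate=0.03):
    # DALY = YLL + YLD. With continuous discounting at rate r, a stream of
    # L life years is valued at (1 - exp(-r * L)) / r years.
    def discounted_years(n, years, r):
        if r == 0.0:
            return n * years
        return n * (1.0 - math.exp(-r * years)) / r

    yll = discounted_years(deaths, years_lost_per_death, discount_rate)
    yld = disability_weight * discounted_years(cases, duration_years, discount_rate)
    return yll + yld

# Undiscounted: 10 deaths * 30 years + 100 cases * 0.2 * 5 years = 400 DALYs.
undiscounted = dalys(10, 30, 100, 0.2, 5, discount_rate=0.0)
discounted = dalys(10, 30, 100, 0.2, 5)
```

    Discounting shrinks the total, which is why the choice of discount rate is itself a sensitivity-analysis question in burden-of-disease work.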

  17. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource available to spend on modelling flood inundations that are 'fit for purpose' for the modelling objectives. A balance therefore needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance - for example, how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine-scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' sensitivity analysis to estimate which input factor - among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM - has the most influence on a range of model outputs. These outputs include whole domain maximum

  18. Global sensitivity analysis of an in-sewer process model for the study of sulfide-induced corrosion of concrete.

    PubMed

    Donckels, B M R; Kroll, S; Van Dorpe, M; Weemaes, M

    2014-01-01

    The presence of high concentrations of hydrogen sulfide in the sewer system can result in corrosion of the concrete sewer pipes. The formation and fate of hydrogen sulfide in the sewer system is governed by a complex system of biological, chemical and physical processes. Therefore, mechanistic models have been developed to describe the underlying processes. In this work, global sensitivity analysis was applied to an in-sewer process model (aqua3S) to determine the most important model input factors with regard to sulfide formation in rising mains and the concrete corrosion rate downstream of a rising main. The results of the sensitivity analysis revealed the most influential model parameters, but also the importance of the characteristics of the organic matter, the alkalinity of the concrete and the movement of the sewer gas phase.

  19. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  20. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e. parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter is exactly zero. However, since GSA methods use numerical approximations rather than analytical solutions to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. To establish the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but has a non-zero sensitivity index representing the error due to the numerical approximation. Hence, parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas parameters whose indices fall below it are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are analyzed and screened using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from the sampled data, without changing the model equations. 
Moreover, the calculation does not even require additional model evaluations for the Sobol
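    A minimal sketch of the dummy-parameter idea, using a plain Monte Carlo Sobol' estimator on a hypothetical toy model (not the SWAT application): the dummy's first-order index is computed from the two base sample matrices alone, so it costs no extra model runs, and its magnitude serves as the screening threshold.

```python
import numpy as np

rng = np.random.default_rng(42)

def sobol_first_order_with_dummy(model, d, n=20000):
    # Homma-Saltelli estimator with two independent sample matrices A and B.
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    f0_sq = fA.mean() * fB.mean()
    var = np.var(np.concatenate([fA, fB]))
    S1 = []
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # share only column i with A
        S1.append((np.mean(fA * model(ABi)) - f0_sq) / var)
    # The dummy parameter needs no extra column or model run: conditioning on
    # it shares nothing between the matrices, so f(AB_dummy) is simply fB.
    # Its index is pure estimator noise -- the numerical-error floor.
    S_dummy = (np.mean(fA * fB) - f0_sq) / var
    return np.array(S1), S_dummy

# Toy model Y = x1 + 2*x2: analytic first-order indices are 0.2 and 0.8.
S1, S_dummy = sobol_first_order_with_dummy(lambda X: X[:, 0] + 2.0 * X[:, 1], d=2)
```

    Parameters whose estimated indices do not exceed S_dummy cannot be distinguished from numerical noise and would be screened out as non-influential.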

  1. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used to study natural phenomena in many fields. Such studies usually involve unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of these unknown factors on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of a model's output variables to the input factors is referred to as sensitivity analysis. In this paper, we focus on global sensitivity analysis of ordinary differential equation (ODE) models over a time period, using the multivariate adaptive regression spline (MARS) as a meta-model based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application to sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
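    The VCE quantity at the heart of this method is easy to approximate directly, which helps make the idea concrete. The sketch below substitutes a crude quantile-binning estimator of Var(E[Y|X_i])/Var(Y) for the paper's analytical MARS-based evaluation; the toy model and sample size are my own choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def first_order_index_binned(x, y, n_bins=25):
    # Estimate S_i = Var(E[Y | X_i]) / Var(Y): split x into quantile bins,
    # take the conditional mean of y per bin, then the weighted variance of
    # those conditional means around the grand mean.
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.array([(idx == b).sum() for b in range(n_bins)])
    vce = np.average((means - y.mean()) ** 2, weights=counts)
    return vce / y.var()

# Stand-in for a model output: Y = x1 + 2*x2; analytic indices are 0.2 and 0.8.
X = rng.random((50000, 2))
Y = X[:, 0] + 2.0 * X[:, 1]
S = [first_order_index_binned(X[:, i], Y) for i in range(2)]
```

    A smooth meta-model such as MARS plays the same role as the bins here, but gives the conditional expectation in closed form and with far fewer model evaluations.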

  2. Global sensitivity analysis of the GEOS-Chem chemical transport model: ozone and hydrogen oxides during ARCTAS (2008)

    NASA Astrophysics Data System (ADS)

    Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu

    2017-03-01

    Developing predictive capability for future atmospheric oxidation capacity requires a detailed analysis of model uncertainties and of the sensitivity of the modeled oxidation capacity to model input variables. Using oxidant mixing ratios modeled by the GEOS-Chem chemical transport model and measured on the NASA DC-8 aircraft, uncertainty and global sensitivity analyses were performed on the GEOS-Chem chemical transport model for the modeled oxidants hydroxyl (OH), hydroperoxyl (HO2), and ozone (O3). The sensitivity of modeled OH, HO2, and ozone to model inputs perturbed simultaneously within their respective uncertainties was found for the flight tracks of NASA's Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) A and B campaigns (2008) in the North American Arctic. For the spring deployment (ARCTAS-A), ozone was most sensitive to the photolysis rate of NO2, the NO2 + OH reaction rate, and various emissions, including methyl bromoform (CHBr3). OH and HO2 were overwhelmingly sensitive to aerosol particle uptake of HO2, with this one factor contributing upwards of 75% of the uncertainty in HO2. For the summer deployment (ARCTAS-B), ozone was most sensitive to emission factors, such as soil NOx and isoprene. OH and HO2 were most sensitive to biomass emissions and aerosol particle uptake of HO2. With modeled HO2 showing a factor of 2 underestimation compared to measurements in the lowest 2 km of the troposphere, lower uptake rates (γHO2 < 0.055), regardless of whether the product of the uptake is H2O or H2O2, produced better agreement between modeled and measured HO2.

  3. Global sensitivity analysis for model-based prediction of oxidative micropollutant transformation during drinking water treatment.

    PubMed

    Neumann, Marc B; Gujer, Willi; von Gunten, Urs

    2009-03-01

    This study quantifies the uncertainty involved in predicting micropollutant oxidation during drinking water ozonation in a pilot-plant reactor. The analysis is conducted for geosmin, methyl tert-butyl ether (MTBE), isopropylmethoxypyrazine (IPMP), bezafibrate, beta-cyclocitral and ciprofloxacin. These compounds are representative of a wide range of substances, with second-order rate constants between 0.1 and 1.9x10^4 M^-1 s^-1 for the reaction with ozone and between 2x10^9 and 8x10^9 M^-1 s^-1 for the reaction with OH radicals. Uncertainty ranges are derived for the second-order rate constants, hydraulic parameters, flow and ozone concentration data, and water characteristic parameters. The uncertain model factors are propagated via Monte Carlo simulation and the resulting probability distributions of the relative residual micropollutant concentrations are assessed. The importance of each factor in determining model output variance is quantified using Extended Fourier Amplitude Sensitivity Testing (Extended-FAST). For substances that react slowly with ozone (MTBE, IPMP, geosmin), the water-characteristic Rct value (the ratio of OH-radical to ozone concentration) is the most influential factor, explaining 80% of the output variance. In the case of bezafibrate, the Rct value and the second-order rate constant for the reaction with ozone each contribute about 30% to the output variance. For beta-cyclocitral and ciprofloxacin (fast-reacting with ozone), the second-order rate constant for the reaction with ozone and the hydraulic model structure become the dominant sources of uncertainty.
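    The prediction equation underlying this kind of analysis lends itself to a compact Monte Carlo propagation sketch. In the Rct framework, the residual fraction of a micropollutant follows ln(c/c0) = -(k_O3 + k_OH * Rct) * ∫[O3]dt. All distributions and parameter values below are illustrative placeholders, not the study's calibrated inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100000

# Hypothetical lognormal uncertainties (illustrative values, not from the paper).
k_o3 = rng.lognormal(mean=np.log(10.0), sigma=0.3, size=n)   # M^-1 s^-1, ozone
k_oh = rng.lognormal(mean=np.log(3.0e9), sigma=0.2, size=n)  # M^-1 s^-1, OH radical
rct = rng.lognormal(mean=np.log(1.0e-8), sigma=0.4, size=n)  # Rct = [OH]/[O3]
o3_exposure = 1.2e-2                                         # integral of [O3] dt, M s

# Residual micropollutant fraction c/c0 after ozonation, one value per draw;
# the spread of this array is the propagated prediction uncertainty.
residual = np.exp(-(k_o3 + k_oh * rct) * o3_exposure)
```

    Feeding such an ensemble into a variance-decomposition method like Extended-FAST then apportions the spread of `residual` among the uncertain inputs.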

  4. Switch of sensitivity dynamics revealed with DyGloSA toolbox for dynamical global sensitivity analysis as an early warning for system's critical transition.

    PubMed

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt, irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as climate, the economy and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in the dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (version R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA runs on Windows and Linux, both 32- and 64-bit.

  6. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rainfall intensity and frequency as indexes, which could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that even under similar cumulative rainfall and temperature conditions, yields can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk, owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications, based on the results of a global sensitivity analysis.

  7. Position-independent geometric error identification and global sensitivity analysis for the rotary axes of five-axis machine tools

    NASA Astrophysics Data System (ADS)

    Guo, Shijie; Jiang, Gedong; Zhang, Dongsheng; Mei, Xuesong

    2017-04-01

    Position-independent geometric errors (PIGEs) are fundamental errors of a five-axis machine tool. In this paper, to identify ten PIGEs peculiar to the rotary axes of five-axis machine tools with a tilting head, a mathematical model of the ten PIGEs is deduced and four measuring patterns are proposed. The measuring patterns and the identification method are validated on a five-axis machine tool with a tilting head, and its ten PIGEs are obtained. The sensitivities of the machine tool's four adjustable PIGEs in the different measuring patterns are analyzed with the Morris global sensitivity analysis method, and a procedure for modifying these four PIGEs is given accordingly. Experimental results show that, comparing measurements before and after modifying the four adjustable PIGEs, the average compensation rate reached 52.7%. This demonstrates that the proposed measuring, identification, analysis and modification methods are effective for error measurement and precision improvement of five-axis machine tools.

  8. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Savage, James Thomas Steven; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.

  9. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    SciTech Connect

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) using the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: (1) truncated dimensionality for the ANOVA component functions; (2) an active-dimension technique, especially for second- and higher-order parameter interactions; and (3) a stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regression, the surrogate model representation contains few terms, so the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the final sparse PDD representation is much smaller than that of the full expansion, since only significant terms are retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
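    The step that makes polynomial surrogates attractive for GSA - reading Sobol' indices directly off the expansion coefficients - can be shown on a toy problem. The sketch below fits a full (not sparse or adaptive) tensor-product Legendre basis by least squares; the model, degrees and sample size are my own illustrative choices:

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(0)

def leg(x, degree):
    # Evaluate the Legendre polynomial P_degree(x); E[P_d^2] = 1/(2d+1) on U(-1,1).
    c = np.zeros(degree + 1)
    c[degree] = 1.0
    return legendre.legval(x, c)

# Toy model on independent U(-1,1) inputs; exactly representable in the basis.
n = 2000
X = rng.uniform(-1.0, 1.0, size=(n, 2))
Y = X[:, 0] + X[:, 1] ** 2

# Full tensor-product basis up to degree 2 per input, fit by least squares.
terms = [(d1, d2) for d1 in range(3) for d2 in range(3)]
Phi = np.column_stack([leg(X[:, 0], d1) * leg(X[:, 1], d2) for d1, d2 in terms])
coef, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# Each non-constant term contributes c^2 / ((2*d1+1)*(2*d2+1)) to the output
# variance; grouping terms by the inputs they involve yields the Sobol' indices.
def tvar(c, d1, d2):
    return c * c / ((2 * d1 + 1) * (2 * d2 + 1))

parts = [tvar(c, d1, d2) for c, (d1, d2) in zip(coef, terms)]
total = sum(p for p, (d1, d2) in zip(parts, terms) if (d1, d2) != (0, 0))
S1 = sum(p for p, (d1, d2) in zip(parts, terms) if d1 > 0 and d2 == 0) / total
S2 = sum(p for p, (d1, d2) in zip(parts, terms) if d1 == 0 and d2 > 0) / total
# Analytic values for this model: S1 = 15/19, S2 = 4/19.
```

    The adaptive sparse PDD of the paper differs in keeping only the significant terms of a much larger candidate basis, but the coefficient-to-index bookkeeping is the same.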

  10. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty, which undermines model reliability and restricts model application. A large number of parameters, in addition to a lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provide quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. the Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely used variance-based method, and their differences are discussed. Moreover, the effect of the model output definition on parameter ranking is investigated using the Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. To assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, than the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
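    A moment-independent index of the PAWN type can be sketched in a few lines (a toy reimplementation, not the SWAT analysis): compare the unconditional output distribution with distributions obtained by pinning one input at fixed values, using the Kolmogorov-Smirnov distance.

```python
import numpy as np

rng = np.random.default_rng(3)

def ks_statistic(a, b):
    # Two-sample Kolmogorov-Smirnov distance between empirical CDFs.
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

def pawn_index(model, d, i, n=5000, n_cond=11):
    # Median KS distance between the unconditional output distribution and
    # the distributions obtained with input i pinned at several fixed values.
    y_unconditional = model(rng.random((n, d)))
    ks = []
    for x_fixed in np.linspace(0.05, 0.95, n_cond):
        Xc = rng.random((n, d))
        Xc[:, i] = x_fixed
        ks.append(ks_statistic(y_unconditional, model(Xc)))
    return np.median(ks)

# Toy model Y = x1 + 2*x2 with a non-influential third input x3.
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
p = [pawn_index(model, d=3, i=i) for i in range(3)]
```

    Because the index looks at the whole output CDF rather than the variance alone, it remains informative for skewed or multimodal outputs, which is the motivation for the density-based comparison in the paper.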

  11. Using global sensitivity analysis to evaluate the uncertainties of future shoreline changes under the Bruun rule assumption

    NASA Astrophysics Data System (ADS)

    Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy

    2016-04-01

    Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F., 2004: How important is global warming for coastal erosion? An editorial comment. Climatic Change, vol. 64, no. 1-2, doi:10.1023/B:CLIM.0000024785.91858, ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic sea-level rise projections from the IPCC. For scenarios RCP 6.0 and 8.5, and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. By contrast, the model predictions are unlikely to differ from the current situation under scenario RCP 2.6. To gain insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that, by the end of the 21st century, a large part of the shoreline change uncertainty is due to the climate change scenario if all anthropogenic greenhouse-gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.

  12. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented here focus primarily on sensitivity analysis of structural response. However, the first session, entitled "General and Multidisciplinary Sensitivity," covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements: Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  13. Global sensitivity analysis of the climate-vegetation system to astronomical forcing: an emulator-based approach

    NASA Astrophysics Data System (ADS)

    Bounceur, N.; Crucifix, M.; Wilkinson, R. D.

    2015-05-01

    A global sensitivity analysis is performed to describe the effects of astronomical forcing on the climate-vegetation system simulated by the model of intermediate complexity LOVECLIM in interglacial conditions. The methodology relies on the estimation of sensitivity measures, using a Gaussian process emulator as a fast surrogate of the climate model, calibrated on a set of well-chosen experiments. The outputs considered are the annual mean temperature and precipitation and the growing degree days (GDD). The experiments were run on two distinct land surface schemes to estimate the importance of vegetation feedbacks on climate variance. This analysis provides a spatial description of the variance due to the factors and their combinations, in the form of "fingerprints" obtained from the covariance indices. The results are broadly consistent with the current understanding of Earth's climate response to astronomical forcing. In particular, precession and obliquity are found to contribute equally to GDD in the Northern Hemisphere in LOVECLIM, and the effect of obliquity on the response of Southern Hemisphere temperature dominates precession effects. Precession dominates precipitation changes in subtropical areas. Compared to standard approaches based on a small number of simulations, the methodology presented here allows us to identify more systematically the regions susceptible to experiencing rapid climate change in response to the smooth astronomical forcing change. In particular, we find that using interactive vegetation significantly enhances the expected rates of climate change, specifically in the Sahel (up to 50% precipitation change in 1000 years) and in the Canadian Arctic region (up to 3° in 1000 years). None of the tested astronomical configurations were found to induce multiple steady states, but, at low obliquity, we observed the development of an oscillatory pattern that has already been reported in LOVECLIM. 
Although the mathematics of the analysis are

  14. Mapping global sensitivity of cellular network dynamics: sensitivity heat maps and a global summation law.

    PubMed

    Rand, D A

    2008-08-06

    The dynamical systems arising from gene regulatory, signalling and metabolic networks are strongly nonlinear, have high-dimensional state spaces and depend on large numbers of parameters. Understanding the relation between the structure and the function for such systems is a considerable challenge. We need tools to identify key points of regulation, illuminate such issues as robustness and control and aid in the design of experiments. Here, I tackle this by developing new techniques for sensitivity analysis. In particular, I show how to globally analyse the sensitivity of a complex system by means of two new graphical objects: the sensitivity heat map and the parameter sensitivity spectrum. The approach to sensitivity analysis is global in the sense that it studies the variation in the whole of the model's solution rather than focusing on output variables one at a time, as in classical sensitivity analysis. This viewpoint relies on the discovery of local geometric rigidity for such systems, the mathematical insight that makes a practicable approach to such problems feasible for highly complex systems. In addition, we demonstrate a new summation theorem that substantially generalizes previous results for oscillatory and other dynamical phenomena. This theorem can be interpreted as a mathematical law stating the need for a balance between fragility and robustness in such systems.

  15. Quasi-Monte Carlo based global uncertainty and sensitivity analysis in modeling free product migration and recovery from petroleum-contaminated aquifers.

    PubMed

    He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi

    2012-06-15

    This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Another advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; and (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m after 20 years) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then the skimmer. The recovery rates of the three schemes would rapidly decrease after 2 years (to less than 0.05 m³/day), thus short-term remediation is not suggested. Copyright © 2012 Elsevier B.V. All rights reserved.
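The quasi-Monte Carlo sampling step can be sketched with a Halton sequence built from van der Corput radical-inverse sequences. The choice of the Halton construction is an assumption for illustration; the paper does not state which low-discrepancy sequence its QMC scheme uses.

```python
def van_der_corput(n, base):
    """n-th element of the van der Corput sequence: reflect the base-b
    digits of n about the radix point."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        q += (n % base) * bk
        n //= base
        bk /= base
    return q

def halton(n_points, bases=(2, 3)):
    """First n_points of the Halton sequence, one prime base per dimension.
    Points fill the unit square far more evenly than pseudo-random draws."""
    return [[van_der_corput(i, b) for b in bases] for i in range(1, n_points + 1)]

pts = halton(8)
```

Each point would then be mapped through the inverse CDF of the relevant parameter distribution before being fed to the simulation model.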

  16. Analysis of Time step sensitivity in the Community Atmospheric Model using Single-Column and Global Simulations

    NASA Astrophysics Data System (ADS)

    Habtezion, B. L.; Caldwell, P.

    2013-12-01

    Global simulations of the Community Atmospheric Model (CAM) are shown to be very sensitive to the physics time step. When the time step is decreased from its default value of 30 min to 7.5 min, cloud fraction and liquid and ice water path increase greatly. To better understand the time-convergence properties of the model and to identify sources of time-step sensitivity, we have conducted single-column model simulations for a variety of cloud regimes. The sites used for this study include summertime mid-latitude continental convection (ARM95, ARM97), convection over the tropical ocean (GATEIII, TOGAII, and TWP-ICE), mid-latitude cirrus (SPARTICUS), shallow convection (RICO), subtropical stratocumulus (DYCOMS2 RF01), and multi-level Arctic clouds (MPACE-A). Simulations at a variety of time steps were analyzed to quantify the magnitude of time-truncation error in the default model and the time step required to obtain a numerically accurate solution. The relation between single-column and global model results is explored by extracting nearest-neighbor grid columns from climatological GCM experiments run at a variety of physics time steps; single-column and global results are found to be generally similar. Time-step sensitivity is found to result from numerical implementation issues, and improved methods for model integration are discussed. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  17. Representing nighttime and minimum conductance in CLM4.5: global hydrology and carbon sensitivity analysis using observational constraints

    NASA Astrophysics Data System (ADS)

    Lombardozzi, Danica L.; Zeppel, Melanie J. B.; Fisher, Rosie A.; Tawfik, Ahmed

    2017-01-01

    The terrestrial biosphere regulates climate through carbon, water, and energy exchanges with the atmosphere. Land-surface models estimate plant transpiration, which is actively regulated by stomatal pores, and provide projections essential for understanding Earth's carbon and water resources. Empirical evidence from 204 species suggests that significant amounts of water are lost through leaves at night, though land-surface models typically reduce stomatal conductance to nearly zero at night. Here, we test the sensitivity of carbon and water budgets in a global land-surface model, the Community Land Model (CLM) version 4.5, to three different methods of incorporating observed nighttime stomatal conductance values. We find that our modifications increase transpiration by up to 5 % globally, reduce modeled available soil moisture by up to 50 % in semi-arid regions, and increase the importance of the land surface in modulating energy fluxes. Carbon gain declines by up to ˜ 4 % globally and > 25 % in semi-arid regions. We advocate for realistic constraints of minimum stomatal conductance in future climate simulations, and widespread field observations to improve parameterizations.

  18. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGES

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extreme, and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both the Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, of the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter change. The influence of individual parameters does not depend on the sampling approach or the concomitant parameters selected. Generally, the generalized linear model (GLM) is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in mid-latitude continental regions, but is very small in tropical continental regions.
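A Latin hypercube design of the kind used for the cloud ensemble can be built in a few lines: each axis is cut into n equal strata and each stratum receives exactly one sample. This is the generic construction in the unit hypercube, not the CAM5 parameter ranges.

```python
import random

def latin_hypercube(n, d, seed=0):
    """n samples in [0,1)^d with exactly one sample per axis stratum:
    shuffle the stratum indices per dimension, then jitter within strata."""
    rng = random.Random(seed)
    cols = []
    for _ in range(d):
        perm = list(range(n))
        rng.shuffle(perm)
        cols.append([(s + rng.random()) / n for s in perm])
    return [[cols[j][i] for j in range(d)] for i in range(n)]

design = latin_hypercube(10, 3)
```

Each column of the design, binned into tenths, covers every stratum exactly once, which is what gives LHS its one-dimensional space-filling guarantee.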

  19. A Sensitivity Analysis of the Impact of Rain on Regional and Global Sea-Air Fluxes of CO2

    PubMed Central

    Shutler, J. D.; Land, P. E.; Woolf, D. K.; Quartly, G. D.

    2016-01-01

    The global oceans are considered a major sink of atmospheric carbon dioxide (CO2). Rain is known to alter the physical and chemical conditions at the sea surface, and thus influence the transfer of CO2 between the ocean and atmosphere. It can influence gas exchange through enhanced gas transfer velocity, the direct export of carbon from the atmosphere to the ocean, by altering the sea skin temperature, and through surface layer dilution. However, to date, very few studies quantifying these effects on global net sea-air fluxes exist. Here, we include terms for the enhanced gas transfer velocity and the direct export of carbon in calculations of the global net sea-air fluxes, using a 7-year time series of monthly global climate quality satellite remote sensing observations, model and in-situ data. The use of a non-linear relationship between the effects of rain and wind significantly reduces the estimated impact of rain-induced surface turbulence on the rate of sea-air gas transfer, when compared to a linear relationship. Nevertheless, globally, the rain enhanced gas transfer and rain induced direct export increase the estimated annual oceanic integrated net sink of CO2 by up to 6%. Regionally, the variations can be larger, with rain increasing the estimated annual net sink in the Pacific Ocean by up to 15% and altering monthly net flux by > ± 50%. Based on these analyses, the impacts of rain should be included in the uncertainty analysis of studies that estimate net sea-air fluxes of CO2 as the rain can have a considerable impact, dependent upon the region and timescale. PMID:27673683
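The linear-versus-nonlinear combination point can be made concrete with a heavily hedged sketch: all coefficients and functional forms below are illustrative placeholders, not the parameterisations used in the study. Combining the wind and rain transfer velocities in quadrature damps the rain contribution at high wind speeds relative to a simple sum, which is the qualitative behaviour the abstract describes.

```python
def k_wind(u10):
    """Wind-driven gas transfer velocity; quadratic in 10 m wind speed.
    The 0.25 coefficient is purely illustrative."""
    return 0.25 * u10 ** 2

def k_rain(rain_rate):
    """Rain-driven gas transfer velocity; linear in rain rate.
    The 0.6 coefficient is purely illustrative."""
    return 0.6 * rain_rate

def k_total(u10, rain_rate, mode="nonlinear"):
    """Combine wind and rain contributions: 'linear' simply adds them,
    'nonlinear' combines them in quadrature, so the rain term matters
    less when wind-driven transfer already dominates."""
    kw, kr = k_wind(u10), k_rain(rain_rate)
    if mode == "linear":
        return kw + kr
    return (kw ** 2 + kr ** 2) ** 0.5
```

With any positive inputs, the nonlinear combination lies between the wind-only value and the linear sum, so switching from a linear to a nonlinear relationship reduces the estimated rain enhancement without removing it.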

  1. Economics in “Global Health 2035”: a sensitivity analysis of the value of a life year estimates

    PubMed Central

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-01-01

    Background In “Global health 2035: a world converging within a generation,” The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs), assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. Methods The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. Findings We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Conclusion Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach. PMID:28400950

  2. Sensitivity analysis of a sediment dynamics model applied in a Mediterranean river basin: global change and management implications.

    PubMed

    Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J

    2015-01-01

    Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. Copyright © 2014 Elsevier B.V. All rights reserved.
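The USLE at the core of the model is a simple product of factors, which means every factor has unit elasticity: a 1% change in any factor changes soil loss by 1%. A minimal sketch, with purely illustrative factor values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.
    R: rainfall erosivity, K: soil erodibility, LS: slope length-steepness,
    C: cover management, P: support practice (USLE unit conventions)."""
    return R * K * LS * C * P

base = dict(R=120.0, K=0.3, LS=1.5, C=0.2, P=1.0)  # illustrative values only
A0 = usle_soil_loss(**base)

# One-at-a-time elasticities: for a pure product model they all equal 1,
# so relative influence in a sensitivity analysis is driven by how much
# each factor can plausibly vary (e.g. rainfall erosivity R under climate
# change, or the cover management factor C under land management).
elasticity = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.01
    elasticity[name] = (usle_soil_loss(**bumped) / A0 - 1) / 0.01
```

This is why the sensitivity ranking in such a model reflects the ranges assigned to the inputs rather than the equation's structure.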

  3. Model-based decision analysis of remedial alternatives using info-gap theory and Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS)

    NASA Astrophysics Data System (ADS)

    Harp, D.; Vesselinov, V. V.

    2011-12-01

    A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions with observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repulsing agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where the information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. posterior parameter distributions are not produced); however, this
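The info-gap robustness measure mentioned above can be sketched for a scalar uncertain quantity: robustness is the largest horizon of uncertainty alpha such that the worst case over the uncertainty set still meets the performance requirement. This is a generic one-dimensional illustration, not the ABAGUS implementation.

```python
def robustness(perf, u0, threshold, alpha_max=10.0, tol=1e-4):
    """Info-gap robustness: the largest alpha such that the worst-case
    performance over [u0 - alpha, u0 + alpha] stays at or above threshold.
    The worst case is probed on a dense grid (illustrative only)."""
    def worst(alpha):
        n = 200
        return min(perf(u0 - alpha + 2 * alpha * k / n) for k in range(n + 1))

    lo, hi = 0.0, alpha_max
    if worst(hi) >= threshold:
        return hi
    while hi - lo > tol:          # bisect on the uncertainty horizon
        mid = 0.5 * (lo + hi)
        if worst(mid) >= threshold:
            lo = mid
        else:
            hi = mid
    return lo

# Performance 10 - u^2 around a best estimate u0 = 0, requirement >= 6:
# worst case at horizon alpha is 10 - alpha^2, so robustness is 2.
r = robustness(lambda u: 10.0 - u * u, 0.0, 6.0)
```

A decision with larger robustness tolerates more deviation of reality from the best-estimate model before its performance becomes unacceptable.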

  4. Cosmopolitan Sensitivities, Vulnerability, and Global Englishes

    ERIC Educational Resources Information Center

    Jacobsen, Ushma Chauhan

    2015-01-01

    This paper is the outcome of an afterthought that assembles connections between three elements: the ambitions of cultivating cosmopolitan sensitivities that circulate vibrantly in connection with the internationalization of higher education, a course on Global Englishes at a Danish university and the sensation of vulnerability. It discusses the…

  6. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis

    PubMed Central

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
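The Morris screening step can be sketched in pure Python: build random trajectories on a grid, record one elementary effect per input per trajectory, and summarise each input by the mean absolute effect (mu*). The test function below is an illustrative polynomial, not the APSIM-Wheat model.

```python
import random

def morris_mu_star(f, d, r=20, levels=4, seed=0):
    """Morris screening: mean absolute elementary effect (mu*) per input.
    r trajectories on a d-dimensional grid in [0,1]^d, with the standard
    step delta = levels / (2 * (levels - 1))."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))                    # 2/3 for 4 levels
    starts = [i / (levels - 1) for i in range(levels // 2)]  # points with x+delta <= 1
    ee = [[] for _ in range(d)]
    for _ in range(r):
        x = [rng.choice(starts) for _ in range(d)]
        y = f(x)
        order = list(range(d))
        rng.shuffle(order)
        for i in order:                 # perturb each input once per trajectory
            x2 = list(x)
            x2[i] = x[i] + delta
            y2 = f(x2)
            ee[i].append((y2 - y) / delta)
            x, y = x2, y2
    return [sum(abs(e) for e in es) / len(es) for es in ee]

# Linear in x0, mildly nonlinear in x1, and x2 inert:
mu_star = morris_mu_star(lambda x: 2 * x[0] + 0.5 * x[1] ** 2, 3)
```

Inputs with mu* near zero can be fixed at nominal values, which is exactly how a 90-trait screening keeps the simulation count manageable.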

  8. Global sensitivity analysis of a model related to memory formation in synapses: Model reduction based on epistemic parameter uncertainties and related issues.

    PubMed

    Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya

    2017-02-09

    We investigate the epistemic uncertainties of parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses, using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca²⁺ patterns of different frequencies as inputs. We explore the effects of the parameters on the key outputs of the model to discover the most sensitive ones, using GSA and the partial rank correlation coefficient (PRCC), and to understand, based on the biology of the problem, why some are sensitive and others are not. We also extend the model to add presynaptic neurotransmitter vesicle release, so as to have action potentials of different frequencies as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by the PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs, and GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA and hence for possible model reduction.
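The PRCC computation can be sketched for the two-parameter case: rank-transform everything, regress out the other parameter from both the parameter of interest and the output, and correlate the residuals. A strong monotone (even nonlinear) dependence then shows up as a PRCC near ±1. The data below are illustrative, not the synapse model.

```python
def ranks(v):
    """0-based ranks; inputs are assumed distinct (no tie handling)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def residuals(y, x):
    """Residuals of a simple least-squares regression of y on x."""
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return [yi - (my + b * (xi - mx)) for xi, yi in zip(x, y)]

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den

def prcc(x1, x2, y):
    """PRCC of x1 with y, controlling for x2 (two-parameter case)."""
    r1, r2, ry = ranks(x1), ranks(x2), ranks(y)
    return pearson(residuals(r1, r2), residuals(ry, r2))

x1 = [3.0, 1.0, 4.0, 1.5, 9.0, 2.6, 5.0, 3.5, 7.0, 0.2]
x2 = [5.0, 3.0, 8.0, 1.0, 9.0, 2.0, 7.0, 0.0, 6.0, 4.0]
y_up = [v ** 3 for v in x1]   # monotone increasing in x1, independent of x2
y_dn = [-v for v in x1]       # monotone decreasing in x1
```

With more than two parameters, the single regression is replaced by a multiple regression on all remaining rank-transformed parameters.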

  9. Global sensitivity analysis for repeated measures studies with informative drop-out: A semi-parametric approach.

    PubMed

    Scharfstein, Daniel; McDermott, Aidan; Díaz, Iván; Carone, Marco; Lunardon, Nicola; Turkoz, Ibrahim

    2017-05-23

    In practice, both testable and untestable assumptions are generally required to draw inference about the mean outcome measured at the final scheduled visit in a repeated measures study with drop-out. Scharfstein et al. (2014) proposed a sensitivity analysis methodology to determine the robustness of conclusions within a class of untestable assumptions. In their approach, the untestable and testable assumptions were guaranteed to be compatible; their testable assumptions were based on a fully parametric model for the distribution of the observable data. While convenient, these parametric assumptions have proven especially restrictive in empirical research. Here, we relax their distributional assumptions and provide a more flexible, semi-parametric approach. We illustrate our proposal in the context of a randomized trial for evaluating a treatment of schizoaffective disorder. © 2017, The International Biometric Society.

  10. Global Sensitivity Measures from Given Data

    SciTech Connect

    Elmar Plischke; Emanuele Borgonovo; Curtis L. Smith

    2013-05-01

    Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures to datasets produced outside a conventional input–output functional framework.
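The "given data" idea can be illustrated with the simplest such estimator: bin an existing input-output sample along one input and compare the variance of the within-bin output means to the total variance. This conditional-mean binning scheme is a generic stand-in; the paper's own L1-norm-based design is different.

```python
import random

def first_order_from_data(x, y, bins=20):
    """Given-data estimate of a first-order sensitivity index for input x:
    Var over bins of E[y | x in bin], divided by Var(y), with
    equal-count bins along x. No fresh model runs are needed."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    ybar = sum(y) / n
    var_y = sum((v - ybar) ** 2 for v in y) / n
    size = n // bins
    num = 0.0
    for b in range(bins):
        idx = order[b * size:(b + 1) * size] if b < bins - 1 else order[(bins - 1) * size:]
        m = sum(y[i] for i in idx) / len(idx)
        num += len(idx) * (m - ybar) ** 2
    return (num / n) / var_y

rng = random.Random(1)
x1 = [rng.random() for _ in range(2000)]
x2 = [rng.random() for _ in range(2000)]
y = [a + 0.1 * b for a, b in zip(x1, x2)]   # x1 dominates the output

S1 = first_order_from_data(x1, y)
S2 = first_order_from_data(x2, y)
```

Because the estimator reuses whatever sample is available, it fits exactly the post-hoc setting the abstract describes: ranking drivers of uncertainty from an existing set of code runs.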

  11. Integrated Sensitivity Analysis Workflow

    SciTech Connect

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.; Clay, Robert L.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  12. Arbitrary-resolution global sensitivity kernels

    NASA Astrophysics Data System (ADS)

    Nissen-Meyer, T.; Fournier, A.; Dahlen, F.

    2007-12-01

    Extracting observables out of any part of a seismogram (e.g. including diffracted phases such as Pdiff) necessitates knowledge of the 3-D time-space wavefields for the Green functions that form the backbone of Fréchet sensitivity kernels. While known for some time, this idea is still computationally intractable in 3-D, facing major simulation and storage issues when high-frequency wavefields are considered at the global scale. We recently developed a new "collapsed-dimension" spectral-element method that solves the 3-D system of elastodynamic equations in a 2-D space, based on exploiting symmetry considerations of the seismic-wave radiation patterns. We will present the technical background on the computation of waveform kernels, various examples of time- and frequency-dependent sensitivity kernels, and subsequently extracted time-window kernels (e.g. banana-doughnuts). Given the computationally lightweight 2-D nature, we will explore some crucial parameters such as excitation type, source time functions, frequency, azimuth, discontinuity locations, and phase type, i.e. an a priori view into how, when, and where seismograms carry 3-D Earth signature. A once-and-for-all database of 2-D waveforms for various source depths shall then serve as a complete set of global time-space sensitivities for a given spherically symmetric background model, thereby allowing for tomographic inversions with arbitrary frequencies, observables, and phases.

  13. Sensitivity of Global Mortality to Sulfate Geoengineering

    NASA Astrophysics Data System (ADS)

    Eastham, S. D.; Weisenstein, D.; Keith, D.; Barrett, S. R. H.

    2016-12-01

    Geoengineering with stratospheric aerosol injection (SAI) may be an effective measure to manage climate risks associated with anthropogenic greenhouse gases. However, although many studies of SAI have examined the consequences for the climate and ozone layer, to date none have quantified the associated impacts on human health. We combine prior estimates of the climate response to SAI, a microphysical aerosol model, and a global chemistry-transport model to estimate changes in global mortality resulting from SAI due to its effects on air quality and surface UV exposure. We find that SAI sufficient to produce 1 K of global cooling in 2040 would result in between -30,000 and +79,000 premature air quality and UV-related mortalities per year. This is equivalent to approximately 1% of global premature mortality attributable to surface air quality degradation in 2014. Reduced temperatures result in increased formation of inorganic aerosol, and therefore an average of 26,000 additional premature mortalities per year. Reductions in precipitation also increase aerosol burdens, resulting in a further 13,000 premature mortalities per year. Injected aerosol descending to the surface increases annual mortality by 7,400, while the net impact of photochemical changes reduces annual mortality by 20,000. Uncertainty in the response functions relating exposure to mortality is responsible for 95% of the variance in the result, with only 5% attributable to climate sensitivity. However, this total does not include other benefits and effects of SAI, such as reducing near-term mortality associated with increased mean and extreme temperatures, which may be an order of magnitude larger than the impacts of SAI we compute here.

  14. Seismic waveform sensitivity to global boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2012-09-01

    We investigate the implications of lateral variations in the topography of global seismic discontinuities, in the framework of high-resolution forward modelling and seismic imaging. We run 3-D wave-propagation simulations accurate at periods of 10 s and longer, with Earth models including core-mantle boundary topography anomalies of ˜1000 km spatial wavelength and up to 10 km height. We obtain very different waveform signatures for PcP (reflected) and Pdiff (diffracted) phases, supporting the theoretical expectation that the latter are sensitive primarily to large-scale structure, whereas the former only to small-scale structure, where large and small are relative to the frequency. PcP at 10 s seems to be well suited to map such a small-scale perturbation, whereas Pdiff at the same frequency carries faint signatures that do not allow any tomographic reconstruction. Only at higher frequency does the signature become stronger. We present a new algorithm to compute sensitivity kernels relating seismic traveltimes (measured by cross-correlation of observed and theoretical seismograms) to the topography of seismic discontinuities at any depth in the Earth using full 3-D wave propagation. Calculation of accurate finite-frequency sensitivity kernels is notoriously expensive, but we reduce computational costs drastically by limiting ourselves to spherically symmetric reference models, and exploiting the axial symmetry of the resulting propagating wavefield that collapses to a 2-D numerical domain. We compute and analyse a suite of kernels for upper and lower mantle discontinuities that can be used for finite-frequency waveform inversion. The PcP and Pdiff sensitivity footprints are in good agreement with the results obtained by cross-correlating perturbed and unperturbed seismograms, validating our approach against full 3-D modelling to invert for such structures.

  15. The application of global sensitivity analysis in the development of a physiologically based pharmacokinetic model for m-xylene and ethanol co-exposure in humans

    PubMed Central

    Loizou, George D.; McNally, Kevin; Jones, Kate; Cocker, John

    2015-01-01

    Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688
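The screening step described above can be illustrated with a deliberately tiny stand-in: a one-compartment pharmacokinetic model (not the paper's binary PBPK model; dose, volume, and clearance values are invented) ranked by standardized regression coefficients, one common global-SA screen for roughly linear responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical one-compartment model (NOT the paper's PBPK model):
# venous concentration C(t) = (D/V) * exp(-(CL/V) * t)
D, t = 100.0, 1.0                        # dose (mg), sampling time (h)
V = rng.normal(40.0, 4.0, n)             # volume of distribution (L)
CL = rng.normal(10.0, 1.5, n)            # clearance (L/h)
C = (D / V) * np.exp(-(CL / V) * t)

# Standardized regression coefficients: regress standardized output on
# standardized inputs; |SRC| ranks each parameter's share of output variance.
X = np.column_stack([(V - V.mean()) / V.std(), (CL - CL.mean()) / CL.std()])
y = (C - C.mean()) / C.std()
src, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["V", "CL"], np.round(src, 3))))
```

Parameters with small |SRC| would be fixed at nominal values; the dominant ones would receive prior distributions for the MCMC calibration stage.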

  16. RESRAD parameter sensitivity analysis

    SciTech Connect

    Cheng, J.J.; Yu, C.; Zielen, A.J.

    1991-08-01

    Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.

  17. Determination of DNA methylation associated with Acer rubrum (red maple) adaptation to metals: analysis of global DNA modifications and methylation-sensitive amplified polymorphism.

    PubMed

    Kim, Nam-Soo; Im, Min-Ji; Nkongolo, Kabwe

    2016-08-01

Red maple (Acer rubrum), a common deciduous tree species in Northern Ontario, has shown resistance to soil metal contamination. Previous reports have indicated that this plant does not accumulate metals in its tissue. However, low levels of nickel and copper, corresponding to the bioavailable levels in contaminated soils in Northern Ontario, cause severe physiological damage. No differentiation between metal-contaminated and uncontaminated populations has been reported based on genetic analyses. The main objective of this study was to assess whether DNA methylation is involved in A. rubrum adaptation to soil metal contamination. Global cytosine and methylation-sensitive amplified polymorphism (MSAP) analyses were carried out in A. rubrum populations from metal-contaminated and uncontaminated sites. The global modified cytosine ratios in genomic DNA revealed a significant decrease in cytosine methylation in genotypes from a metal-contaminated site compared to uncontaminated populations. Other genotypes from a different metal-contaminated site within the same region appear to be recalcitrant to metal-induced DNA alterations even after ≥30 years of exposure to nickel and copper. MSAP analysis showed a high level of polymorphism in both uncontaminated (77%) and metal-contaminated (72%) populations. Overall, 205 CCGG loci were identified, of which 127 were methylated in either the outer or inner cytosine. No differentiation among populations was established based on several genetic parameters tested. The variations for nonmethylated and methylated loci were compared by analysis of molecular variance (AMOVA). For methylated loci, molecular variance among and within populations was 1.5% and 13.2%, respectively. These values were low (0.6% among populations and 5.8% within populations) for unmethylated loci. Metal contamination is thus seen to affect methylation of cytosine residues in CCGG motifs in the A. rubrum populations analyzed.

  18. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
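The sensitivity and elasticity quantities contrasted in this record follow directly from the eigenstructure of the projection matrix. A minimal numpy sketch (the 2-stage matrix below is invented for illustration, not the killer-whale data): sensitivities are ∂λ/∂a_ij = v_i w_j / ⟨v, w⟩, and elasticities rescale these proportionally, e_ij = (a_ij/λ)·∂λ/∂a_ij, so they always sum to 1.

```python
import numpy as np

# Hypothetical 2-stage matrix (juvenile, adult); values are illustrative only.
A = np.array([[0.0, 1.5],
              [0.4, 0.8]])

vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]                      # finite rate of increase, lambda
w = W[:, k].real                        # right eigenvector: stable stage structure

valsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(valsT.real)].real    # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)            # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                       # elasticities: proportional sensitivities
print(round(lam, 4), np.round(E, 3))
```

The scale issue the authors raise is visible here: S and E can rank the same matrix entries differently, because E weights each sensitivity by the entry's own magnitude.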

  19. Interference and Sensitivity Analysis

    PubMed Central

    VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.; Halloran, M. Elizabeth

    2014-01-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the “no-interference” assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine of one person on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted. PMID:25620841

  20. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analysis from the National Center for Environmental Prediction (NCEP); observations made by the ERS-1 radar scatterometer.

  1. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, {mu}, and the standard deviation, {sigma}, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both {mu} and {sigma} than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for {mu}, {sigma}, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods are explained and compared to the presently used methods. 19 refs., 12 figs.
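The likelihood machinery underlying such go/no-go analyses can be sketched with a plain probit MLE (this is the generic latent-threshold model, not Neyer's adaptive test design; all data below are simulated with invented parameters): an item fires iff its threshold ~ N({mu}, {sigma}) is below the applied stimulus.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated go/no-go data (illustrative, not an adaptive test sequence):
mu_true, sigma_true = 10.0, 2.0
x = rng.uniform(4.0, 16.0, 400)                 # stimulus levels
fired = rng.random(400) < norm.cdf((x - mu_true) / sigma_true)

def nll(theta):
    """Negative Bernoulli log-likelihood; sigma parameterized as exp(log_sigma)."""
    mu, log_sigma = theta
    p = norm.cdf((x - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)            # guard log(0)
    return -np.sum(np.where(fired, np.log(p), np.log1p(-p)))

fit = minimize(nll, x0=[np.median(x), 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(round(mu_hat, 2), round(sigma_hat, 2))
```

A likelihood-ratio confidence region, as in the record, would then keep every ({mu}, {sigma}) pair whose nll lies within half a chi-square quantile of the minimum.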

  2. Global Uncertainty Propagation and Sensitivity Analysis in the CH3OCH2 + O2 System: Combining Experiment and Theory To Constrain Key Rate Coefficients in DME Combustion.

    PubMed

    Shannon, R J; Tomlin, A S; Robertson, S H; Blitz, M A; Pilling, M J; Seakins, P W

    2015-07-16

    Statistical rate theory calculations, in particular formulations of the chemical master equation, are widely used to calculate rate coefficients of interest in combustion environments as a function of temperature and pressure. However, despite the increasing accuracy of electronic structure calculations, small uncertainties in the input parameters for these master equation models can lead to relatively large uncertainties in the calculated rate coefficients. Master equation input parameters may be constrained further by using experimental data and the relationship between experiment and theory warrants further investigation. In this work, the CH3OCH2 + O2 system, of relevance to the combustion of dimethyl ether (DME), is used as an example and the input parameters for master equation calculations on this system are refined through fitting to experimental data. Complementing these fitting calculations, global sensitivity analysis is used to explore which input parameters are constrained by which experimental conditions, and which parameters need to be further constrained to accurately predict key elementary rate coefficients. Finally, uncertainties in the calculated rate coefficients are obtained using both correlated and uncorrelated distributions of input parameters.
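The record's closing point, that correlated versus uncorrelated input distributions give different rate-coefficient uncertainties, can be checked with a toy Arrhenius-form rate k = A·exp(-Ea/RT) (nominal values and uncertainties below are invented, and this is not the CH3OCH2 + O2 master-equation model):

```python
import numpy as np

rng = np.random.default_rng(2)
R, T = 8.314, 1000.0                     # J/(mol K), temperature (K)

# Illustrative Arrhenius rate: k = A * exp(-Ea / (R T)).
mu = np.array([np.log(1e13), 150e3])     # mean of [ln A, Ea (J/mol)]
sd = np.array([0.5, 4e3])                # invented 1-sigma uncertainties

def sample_k(rho, n=100_000):
    """Sample k with correlation rho between ln A and Ea."""
    cov = np.array([[sd[0]**2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1]**2]])
    lnA, Ea = rng.multivariate_normal(mu, cov, n).T
    return np.exp(lnA - Ea / (R * T))

# A positive lnA-Ea correlation (typical of jointly fitted kinetic parameters)
# lets the two uncertainties partially cancel, narrowing the spread of k.
spread_uncorr = np.std(np.log(sample_k(0.0)))
spread_corr = np.std(np.log(sample_k(0.9)))
print(round(spread_uncorr, 3), round(spread_corr, 3))
```

Ignoring the correlation here would overstate the uncertainty in k by roughly a factor of three, which is the kind of effect the authors quantify for their fitted master-equation inputs.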

  3. Assessment of the contamination of drinking water supply wells by pesticides from surface water resources using a finite element reactive transport model and global sensitivity analysis techniques

    NASA Astrophysics Data System (ADS)

    Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John

    2013-01-01

A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide MCPP (Mecoprop) which is mobile and relatively persistent, glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide, and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: MCPP concentration at the drinking water well can be up to 7% of surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark where pesticides are found at higher concentrations at shallow depths and close to streams.
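The Morris method used here is a one-at-a-time screening scheme: walk through parameter space in unit-hypercube steps, record the "elementary effect" of each step, and rank factors by the mean absolute effect μ*. A minimal sketch on a toy function (not the reactive transport model; the function and step size are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(x):
    # Toy stand-in for the transport model: x2 matters most, x3 not at all.
    return x[0] + 3.0 * x[1] ** 2 + 0.0 * x[2]

def morris(f, dim, trajectories=50, delta=0.25):
    """One-at-a-time elementary effects; returns mu* (mean |effect|) per input."""
    effects = np.zeros((trajectories, dim))
    for t in range(trajectories):
        x = rng.uniform(0.0, 1.0 - delta, dim)     # random base point in [0,1)^d
        for i in rng.permutation(dim):             # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            effects[t, i] = (f(x_new) - f(x)) / delta
            x = x_new                              # walk on from the new point
    return np.abs(effects).mean(axis=0)

mu_star = morris(model, 3)
print(np.round(mu_star, 2))                        # expect x2 >> x1 > x3 = 0
```

Each trajectory costs only dim+1 model runs, which is why Morris screening suits expensive simulators like the finite element transport model in this record.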

  4. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss - the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot and watershed scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot and watershed scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for the higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks the highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. 
The three most important parameters were: Mass density of live and dead roots found in the upper inch
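One reason the C factor can dominate a RUSLE sensitivity analysis is structural: the model is a pure product, A = R·K·LS·C·P, so a factor's influence tracks the width of its plausible range, and cover management spans orders of magnitude. A tiny sketch with invented placeholder values (not the study's calibrated data):

```python
# RUSLE is multiplicative: A = R * K * LS * C * P (long-term average soil loss).
R, K, LS, P = 6000.0, 0.02, 1.2, 0.8   # invented placeholder factor values
# C (cover management) spans orders of magnitude: dense cover vs. bare soil.
losses = [R * K * LS * C * P for C in (0.001, 0.05, 0.5)]
print([round(a, 4) for a in losses])
```

Holding the other factors fixed, moving C across its range rescales predicted loss by a factor of 500 here, far more than plausible variation in the topographic LS factor would.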

  5. New sensitivity analysis attack

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Moulin, Pierre

    2005-03-01

    The sensitivity analysis attacks by Kalker et al. constitute a known family of watermark removal attacks exploiting a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. In this paper, a new attack on spread spectrum schemes is designed. We first examine one of Kalker's algorithms and prove its convergence using the law of large numbers, which gives more insight into the problem. Next, a new algorithm is presented and compared to existing ones. Various detection algorithms are considered including correlation detectors and normalized correlation detectors, as well as other, more complicated algorithms. Our algorithm is noniterative and requires at most n+1 operations, where n is the dimension of the signal. Moreover, the new approach directly estimates the watermark by exploiting the simple geometry of the detection boundary and the information leaked by the detector.
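The "n+1 operations" geometry can be sketched for the simplest case, a plain correlation detector (this is the generic idea, not the paper's exact algorithm; the sketch assumes a point exactly on the detection boundary is already available, which in practice costs one bisection search):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 64
w = rng.choice([-1.0, 1.0], n)               # secret spread-spectrum watermark
T = 0.1                                      # detector threshold

def detector(y):
    # Correlation detector: the only oracle the attacker may query.
    return y @ w / n >= T

# Given a point x_b ON the boundary (x_b @ w / n == T), one probe per
# dimension reveals the sign of each watermark coefficient: n+1 queries total.
x_b = T * w                                  # on the boundary, since w_i**2 == 1
eps = 1e-3
w_hat = np.array([1.0 if detector(x_b + eps * np.eye(n)[i]) else -1.0
                  for i in range(n)])
print((w_hat == w).mean())                   # fraction of coefficients recovered
```

Probing x_b + ε·e_i shifts the correlation by ε·w_i/n, so the binary detector answer leaks the sign of w_i directly; this is the "information leaked by the detector" the abstract refers to.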

  6. Saltelli Global Sensitivity Analysis and Simulation Modelling to Identify Intervention Strategies to Reduce the Prevalence of Escherichia coli O157 Contaminated Beef Carcasses

    PubMed Central

    Brookes, Victoria J.; Jordan, David; Davis, Stephen; Ward, Michael P.; Heller, Jane

    2015-01-01

Introduction: Strains of Shiga-toxin producing Escherichia coli O157 (STEC O157) are important foodborne pathogens in humans, and outbreaks of illness have been associated with consumption of undercooked beef. Here, we determine the most effective intervention strategies to reduce the prevalence of STEC O157 contaminated beef carcasses using a modelling approach. Method: A computational model simulated events and processes in the beef harvest chain. Information from empirical studies was used to parameterise the model. Variance-based global sensitivity analysis (GSA) using the Saltelli method identified variables with the greatest influence on the prevalence of STEC O157 contaminated carcasses. Following a baseline scenario (no interventions), a series of simulations systematically introduced and tested interventions based on influential variables identified by repeated Saltelli GSA, to determine the most effective intervention strategy. Results: Transfer of STEC O157 from hide or gastro-intestinal tract to carcass (improved abattoir hygiene) had the greatest influence on the prevalence of contaminated carcasses. Due to interactions between inputs (identified by Saltelli GSA), combinations of interventions based on improved abattoir hygiene achieved a greater reduction in maximum prevalence than would be expected from an additive effect of single interventions. The most effective combination was improved abattoir hygiene with vaccination, which achieved a greater than ten-fold decrease in maximum prevalence compared to the baseline scenario. Conclusion: Study results suggest that effective interventions to reduce the prevalence of STEC O157 contaminated carcasses should initially be based on improved abattoir hygiene. However, the effect of improved abattoir hygiene on the distribution of STEC O157 concentration on carcasses is an important information gap; further empirical research is required to determine whether reduced prevalence of contaminated carcasses is
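The variance-based indices behind the Saltelli method can be shown on a toy function with known answers (this is the generic Saltelli/Jansen estimator pair, not the beef harvest chain model): first-order S_i measures a factor's own variance share, total-effect ST_i adds its interactions.

```python
import numpy as np

rng = np.random.default_rng(5)

def f(X):
    # Toy model with known variance shares: S1 = 0.2, S2 = 0.8.
    return X[:, 0] + 2.0 * X[:, 1]

d, N = 2, 20_000
A = rng.uniform(0.0, 1.0, (N, d))            # two independent sample matrices
B = rng.uniform(0.0, 1.0, (N, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S1, ST = [], []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # A with column i taken from B
    fABi = f(ABi)
    S1.append(np.mean(fB * (fABi - fA)) / var)        # first-order (Saltelli 2010)
    ST.append(0.5 * np.mean((fA - fABi) ** 2) / var)  # total effect (Jansen 1999)

print(np.round(S1, 2), np.round(ST, 2))
```

For this additive toy model S_i ≈ ST_i; in the carcass model the gap between them is what flagged the input interactions that made combined interventions super-additive.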

  7. Global sensitivity analysis of a mathematical model of acute inflammation identifies nonlinear dependence of cumulative tissue damage on host interleukin-6 responses.

    PubMed

    Mathew, Shibin; Bartels, John; Banerjee, Ipsita; Vodovotz, Yoram

    2014-10-07

    The precise inflammatory role of the cytokine interleukin (IL)-6 and its utility as a biomarker or therapeutic target have been the source of much debate, presumably due to the complex pro- and anti-inflammatory effects of this cytokine. We previously developed a nonlinear ordinary differential equation (ODE) model to explain the dynamics of endotoxin (lipopolysaccharide; LPS)-induced acute inflammation and associated whole-animal damage/dysfunction (a proxy for the health of the organism), along with the inflammatory mediators tumor necrosis factor (TNF)-α, IL-6, IL-10, and nitric oxide (NO). The model was partially calibrated using data from endotoxemic C57Bl/6 mice. Herein, we investigated the sensitivity of the area under the damage curve (AUCD) to the 51 rate parameters of the ODE model for different levels of simulated LPS challenges using a global sensitivity approach called Random Sampling High Dimensional Model Representation (RS-HDMR). We explored sufficient parametric Monte Carlo samples to generate the variance-based Sobol' global sensitivity indices, and found that inflammatory damage was highly sensitive to the parameters affecting the activity of IL-6 during the different stages of acute inflammation. The AUCIL6 showed a bimodal distribution, with the lower peak representing healthy response and the higher peak representing sustained inflammation. Damage was minimal at low AUCIL6, giving rise to a healthy response. In contrast, intermediate levels of AUCIL6 resulted in high damage, and this was due to the insufficiency of damage recovery driven by anti-inflammatory responses from IL-10 and the activation of positive feedback sustained by IL-6. At high AUCIL6, damage recovery was interestingly restored in some population of simulated animals due to the NO-mediated anti-inflammatory responses. 
These observations suggest that the host's health status during acute inflammation depends in a nonlinear fashion on the magnitude of the inflammatory stimulus

  8. Sensitivity of global terrestrial ecosystems to climate variability

    NASA Astrophysics Data System (ADS)

    Seddon, Alistair W. R.; Macias-Fauria, Marc; Long, Peter R.; Benz, David; Willis, Kathy J.

    2016-03-01

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems—be they natural or with a strong anthropogenic signature—to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.
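The autoregressive core of the vegetation sensitivity index can be sketched on synthetic monthly data (the series, coefficients, and single climate driver below are invented; the study uses MODIS EVI against three climate variables): memory is the AR(1) coefficient, sensitivity the exogenous-driver coefficient.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 168                                       # 14 years of monthly observations

# Synthetic stand-in for a vegetation-index anomaly series driven by one
# climate variable: y_t = alpha * y_{t-1} + beta * x_t + noise.
alpha_true, beta_true = 0.6, 0.8              # invented "memory" and sensitivity
x = rng.normal(0.0, 1.0, n)                   # e.g. temperature anomaly
y = np.zeros(n)
for t in range(1, n):
    y[t] = alpha_true * y[t - 1] + beta_true * x[t] + rng.normal(0.0, 0.3)

# Least-squares fit of the AR(1)-with-exogenous-driver model.
X = np.column_stack([y[:-1], x[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(np.round(coef, 2))                      # [alpha_hat, beta_hat]
```

Mapping both fitted coefficients per pixel is what lets the authors separate regions with amplified instantaneous responses from regions with long memory and slow recovery.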

  9. Sensitivity of global terrestrial ecosystems to climate variability.

    PubMed

    Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J

    2016-03-10

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.

  10. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  11. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis has focused more on the validation part, or has not differentiated verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification.
By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed
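Forward sensitivity analysis in its simplest form augments the ODE with an equation for the parameter derivative and integrates both together. A minimal sketch on a problem with a known answer (a toy scalar decay, not the reactor-system code discussed above): for dy/dt = -θy, y(0) = 1, the sensitivity s = ∂y/∂θ obeys ds/dt = -θs - y, s(0) = 0, with analytic solution s(t) = -t·e^(-θt).

```python
import numpy as np

# Forward sensitivity for dy/dt = -theta * y, y(0) = 1: augment the state with
# s = dy/dtheta, which obeys ds/dt = -theta * s - y, s(0) = 0.
theta, T, h = 0.5, 2.0, 1e-4
y, s = 1.0, 0.0
for _ in range(int(T / h)):                   # simple explicit Euler march
    y, s = y + h * (-theta * y), s + h * (-theta * s - y)

exact_s = -T * np.exp(-theta * T)             # analytic dy/dtheta at t = T
print(round(s, 5), round(exact_s, 5))
```

The paper's extension treats h itself as one more sensitivity parameter, so the same augmented march also tracks how the solution responds to the discretization.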

  12. Model-based global sensitivity analysis as applied to identification of anti-cancer drug targets and biomarkers of drug resistance in the ErbB2/3 network

    PubMed Central

    Lebedeva, Galina; Sorokin, Anatoly; Faratian, Dana; Mullen, Peter; Goltsov, Alexey; Langdon, Simon P.; Harrison, David J.; Goryanin, Igor

    2012-01-01

    High levels of variability in cancer-related cellular signalling networks and a lack of parameter identifiability in large-scale network models hamper translation of the results of modelling studies into the process of anti-cancer drug development. Recently global sensitivity analysis (GSA) has been recognised as a useful technique, capable of addressing the uncertainty of the model parameters and generating valid predictions on parametric sensitivities. Here we propose a novel implementation of model-based GSA specially designed to explore how multi-parametric network perturbations affect signal propagation through cancer-related networks. We use area-under-the-curve for time course of changes in phosphorylation of proteins as a characteristic for sensitivity analysis and rank network parameters with regard to their impact on the level of key cancer-related outputs, separating strong inhibitory from stimulatory effects. This allows interpretation of the results in terms which can incorporate the effects of potential anti-cancer drugs on targets and the associated biological markers of cancer. To illustrate the method we applied it to an ErbB signalling network model and explored the sensitivity profile of its key model readout, phosphorylated Akt, in the absence and presence of the ErbB2 inhibitor pertuzumab. The method successfully identified the parameters associated with elevation or suppression of Akt phosphorylation in the ErbB2/3 network. From analysis and comparison of the sensitivity profiles of pAkt in the absence and presence of targeted drugs we derived predictions of drug targets, cancer-related biomarkers and generated hypotheses for combinatorial therapy. Several key predictions have been confirmed in experiments using human ovarian carcinoma cell lines. We also compared GSA-derived predictions with the results of local sensitivity analysis and discuss the applicability of both methods. We propose that the developed GSA procedure can serve as a

  13. D2PC sensitivity analysis

    SciTech Connect

    Lombardi, D.P.

    1992-08-01

    The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most likely and worst case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.

  14. Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide

    SciTech Connect

    Wainwright, Haruko Murakami; Finsterle, Stefan

    2016-07-15

    This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.

  15. Sensitivity of direct global warming potentials to key uncertainties

    SciTech Connect

    Wuebbles, D.J.; Patten, K.O.; Grant, K.E. ); Jain, A.K. )

    1992-07-01

A series of sensitivity studies examines the effect of several uncertainties in Global Warming Potentials (GWPs). For example, the original evaluation of GWPs for the Intergovernmental Panel on Climate Change (IPCC, 1990) did not attempt to account for the possible sinks of carbon dioxide (CO{sub 2}) that could balance the carbon cycle and produce atmospheric concentrations of CO{sub 2} that match observations. In this study, a balanced carbon cycle model is applied in calculation of the radiative forcing from CO{sub 2}. Use of the balanced model produces up to 20 percent enhancement of the GWPs for most trace gases compared with the IPCC (1990) values for time horizons up to 100 years, but a decreasing enhancement with longer time horizons. Uncertainty limits of the fertilization feedback parameter contribute a 10 percent range in GWP values. Another systematic uncertainty in GWPs is the assumption of an equilibrium atmosphere (one in which the concentration of trace gases remains constant) versus a disequilibrium atmosphere. The latter gives GWPs that are 15 to 30 percent greater than the former, depending upon the carbon dioxide emission scenario chosen. Seven scenarios are employed: constant emission past 1990 and the six IPCC (1992) emission scenarios. For the analysis of uncertainties in atmospheric lifetime ({tau}), the GWP changes in direct proportion to {tau} for short-lived gases, but to a lesser extent for gases with {tau} greater than the time horizon for the GWP calculation.
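The lifetime result quoted above follows from the GWP integrand: with radiative efficiency held fixed, the absolute GWP of a gas with exponential decay is proportional to ∫₀^H e^(-t/τ) dt = τ(1 - e^(-H/τ)), which scales linearly with τ when τ ≪ H but saturates when τ exceeds the horizon. A quick numerical check (illustrative lifetimes; the CO{sub 2} reference integral needed for actual GWPs is omitted):

```python
import numpy as np

def agwp(tau, horizon, a=1.0, steps=100_000):
    """Absolute GWP of a gas decaying as exp(-t/tau): a * integral over [0, H]."""
    t = np.linspace(0.0, horizon, steps)
    f = np.exp(-t / tau)
    return a * np.sum((f[:-1] + f[1:]) / 2.0) * (t[1] - t[0])   # trapezoid rule

H = 100.0
short = agwp(24.0, H) / agwp(12.0, H)     # doubling tau for a short-lived gas
long_ = agwp(1000.0, H) / agwp(500.0, H)  # doubling tau for a long-lived gas
print(round(short, 2), round(long_, 2))
```

Doubling τ nearly doubles the 100-year GWP of the short-lived gas but changes the long-lived gas by only a few percent, matching the abstract's proportionality claim.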

  16. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
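As a hedged illustration of the kind of parameter and initial-condition sensitivities discussed above (not the paper's models or values), consider a Zajac-style linear activation equation da/dt = (u - a)/tau, whose closed-form solution lets a finite-difference sensitivity be checked against the analytic derivative:

```python
import numpy as np

def activation(t, u=1.0, a0=0.0, tau=0.04):
    # closed-form solution of da/dt = (u - a) / tau with a(0) = a0
    return u + (a0 - u) * np.exp(-t / tau)

def sensitivity(t, param, h=1e-6):
    """Central finite-difference d(activation)/d(param) at the defaults."""
    defaults = dict(u=1.0, a0=0.0, tau=0.04)
    lo, hi = dict(defaults), dict(defaults)
    lo[param] -= h
    hi[param] += h
    return (activation(t, **hi) - activation(t, **lo)) / (2.0 * h)

t = 0.05
s_tau = sensitivity(t, "tau")   # sensitivity to the time constant
s_a0 = sensitivity(t, "a0")     # initial condition treated as a parameter
exact = (0.0 - 1.0) * np.exp(-t / 0.04) * (t / 0.04**2)  # analytic da/dtau
```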

  17. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  18. Global surveillance of antibiotic sensitivity of Vibrio cholerae*

    PubMed Central

    O'Grady, F.; Lewis, M. J.; Pearson, N. J.

    1976-01-01

Strains of Vibrio cholerae—1156 from various parts of the world—were examined by standardized antibiotic sensitivity tests in one centre, to determine the global incidence of antibiotic resistance in this organism and to assess the extent to which differences in methods of sensitivity testing might be responsible for discrepancies in the reported incidence of resistant strains. Of the strains examined, 1127 were fully sensitive to ampicillin, chloramphenicol, tetracycline, furazolidone, and three different sulphonamides, 27 showed stable and reproducible resistance to one or more of these agents, and 2 proved to contain a minority of cells with unstable, presumably plasmid-borne, resistance to chloramphenicol. Unstable resistance to antibiotics may be common in V. cholerae but rarely recognized, and may account for some of the discrepancies in the reported incidence of resistant strains. PMID:1088100

  19. Quality assessment and forecast sensitivity of global remote sensing observations

    NASA Astrophysics Data System (ADS)

    Mallick, Swapan; Dutta, Devajyoti; Min, Ki-Hong

    2017-03-01

The satellite-derived wind from cloud and moisture features of geostationary satellites is an important data source for numerical weather prediction (NWP) models. These datasets and global positioning system radio occultation (GPSRO) satellite radiances are assimilated in the four-dimensional variational atmospheric data assimilation system of the UKMO Unified Model in India. This study focuses on the importance of these data in the NWP system and their impact on short-term 24-h forecasts. The quality of the wind observations is compared to the short-range forecast from the model background. The observation increments (observation minus background) are computed as the satellite-derived wind minus the model forecast with a 6-h lead time. The results show the model background has a large easterly wind component compared to satellite observations. The importance of each observation in the analysis is studied using an adjoint-based forecast sensitivity to observation method. The results show that around 50% of all types of satellite observations are beneficial. In terms of individual contribution, METEOSAT-7 shows a higher percentage of impact (nearly 50%) compared to GOES, MTSAT-2 and METEOSAT-10, all of which have a less than 25% impact. In addition, the impact of GPSRO, infrared atmospheric sounding interferometer (IASI) and atmospheric infrared sounder (AIRS) data is calculated. The GPSRO observations have beneficial impacts up to 50 km. Over the Southern Hemisphere, the high spectral radiances from IASI and AIRS show a greater impact than over the Northern Hemisphere. The results in this study can be used for further improvements in the use of new and existing satellite observations.

  20. Global average net radiation sensitivity to cloud amount variations

    SciTech Connect

    Karner, O.

    1993-12-01

Time series analysis performed using an autoregressive model is carried out to study monthly oscillations in the earth radiation budget (ERB) at the top of the atmosphere (TOA) and cloud amount estimates on a global basis. Two independent cloud amount datasets, produced elsewhere by different authors, and the ERB record based on the Nimbus-7 wide field-of-view 8-year (1978-86) observations are used. Autoregressive models are used to eliminate the effects of the earth's orbit eccentricity on the radiation budget and cloud amount series. Nonzero cross correlation between the residual series provides a way of estimating the contribution of the cloudiness variations to the variance in the net radiation. As a result, a new parameter to estimate the net radiation sensitivity at the TOA to changes in cloud amount is introduced. This parameter has a more general character than other estimates because it contains time-lag terms of different length responsible for different cloud-radiation feedback mechanisms in the earth climate system. Time lags of 0, 1, 12, and 13 months are involved. Inclusion of the zero-lag term only shows that the albedo effect of clouds dominates, as is known from other research. Inclusion of all four terms leads to an average quasi-annual insensitivity. Approximately 96% of the ERB variance at the TOA can be explained by the eccentricity factor and 1% by cloudiness variations, provided that the data used are without error. Although the latter assumption is not fully correct, the results presented allow one to estimate the contribution of current cloudiness changes to the net radiation variability. Two independent cloud amount datasets have very similar temporal variability and also approximately equal impact on the net radiation at the TOA.
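The AR-plus-lagged-cross-correlation machinery described above can be sketched on synthetic monthly series (invented data, not the Nimbus-7 record): AR(1) fits strip the serial structure driven by the annual cycle, and the residuals are then cross-correlated at the study's lags of 0, 1, 12, and 13 months.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 96                                   # 8 years of monthly values
annual = np.sin(2.0 * np.pi * np.arange(n) / 12.0)
cloud = annual + 0.3 * rng.standard_normal(n)        # synthetic cloud amount
net_rad = annual - 0.5 * cloud + 0.3 * rng.standard_normal(n)

def ar1_residuals(x):
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])  # least-squares AR(1)
    return x[1:] - phi * x[:-1]

def lagged_corr(a, b, lag):
    if lag:
        a, b = a[lag:], b[:-lag]         # correlate a(t) with b(t - lag)
    return float(np.corrcoef(a, b)[0, 1])

res_r = ar1_residuals(net_rad)
res_c = ar1_residuals(cloud)
corrs = {lag: lagged_corr(res_r, res_c, lag) for lag in (0, 1, 12, 13)}
```

In the study itself, the relative sizes of these lagged terms are what separate the instantaneous albedo effect from the delayed feedback terms.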

  1. The Sensitivity of Regional Precipitation to Global Temperature Change and Forcings

    NASA Astrophysics Data System (ADS)

    Tebaldi, C.; O'Neill, B. C.; Lamarque, J. F.

    2016-12-01

Global policies are most commonly formulated in terms of climate targets, like the much talked about 1.5°C and 2°C warming thresholds identified as critical by the recent Paris agreements. But what does a target defined in terms of a globally averaged quantity mean in terms of expected regional changes? And, in particular, what should we expect in terms of significant changes in precipitation over specific regional domains for these and other incrementally different global goals? In this talk I will summarize the results of an analysis aimed at characterizing the sensitivity of regional temperatures and precipitation amounts to changes in global average temperature. The analysis uses results from a multi-model ensemble (CMIP5), which allows us to address structural uncertainty in future projections, a type of uncertainty particularly relevant when considering precipitation changes. I will show what type of changes in global temperature and forcing levels bring about significant and pervasive changes in regional precipitation, contrasting its sensitivity to that of regional temperature changes. Because of the large internal variability of regional precipitation, I will show that significant changes in average regional precipitation can be detected only for fairly large separations (on the order of 2.5°C or 3°C) in global average temperature levels, in contrast to the much higher sensitivity shown by regional temperatures.

  2. Sensitivity of Global Warming Potentials to the assumed background atmosphere

    SciTech Connect

    Wuebbles, D.J.; Patten, K.O.

    1992-03-05

    This is the first in a series of papers in which we will examine various aspects of the Global Warming Potential (GWP) concept and the sensitivity and uncertainties associated with the GWP values derived for the 1992 updated scientific assessment report of the Intergovernmental Panel on Climate Change (IPCC). One of the authors of this report (DJW) helped formulate the GWP concept for the first IPCC report in 1990. The Global Warming Potential concept was developed for that report as an attempt to fulfill the request from policymakers for a way of relating the potential effects on climate from various greenhouse gases, in much the same way as the Ozone Depletion Potential (ODP) concept (Wuebbles, 1981) is used in policy analyses related to concerns about the relative effects of CFCs and other compounds on stratospheric ozone destruction. We are also coauthors of the section on radiative forcing and Global Warming Potentials for the 1992 IPCC update; however, there was too little time to prepare much in the way of new research material for that report. Nonetheless, we have recognized for some time that there are a number of uncertainties and limitations associated with the definition of GWPs used in both the original and new IPCC reports. In this paper, we examine one of those uncertainties, namely, the effect of the assumed background atmospheric concentrations on the derived GWPs. Later papers will examine the sensitivity of GWPs to other uncertainties and limitations in the current concept.

  3. Sensitivity of regional climate to global temperature and forcing

    NASA Astrophysics Data System (ADS)

    Tebaldi, Claudia; O'Neill, Brian; Lamarque, Jean-François

    2015-07-01

The sensitivity of regional climate to global average radiative forcing and temperature change is important for setting global climate policy targets and designing scenarios. Setting effective policy targets requires an understanding of the consequences of exceeding them, even by small amounts, and the effective design of sets of scenarios requires knowledge of how different emissions, concentrations, or forcing need to be in order to produce substantial differences in climate outcomes. Using an extensive database of climate model simulations, we quantify how differences in global average quantities relate to differences in both the spatial extent and magnitude of climate outcomes at regional (250-1250 km) scales. We show that differences of about 0.3 °C in global average temperature are required to generate statistically significant changes in regional annual average temperature over more than half of the Earth’s land surface. A global difference of 0.8 °C is necessary to produce regional warming over half the land surface that is not only significant but reaches at least 1 °C. As much as 2.5 to 3 °C is required for a statistically significant change in regional annual average precipitation that is equally pervasive. Global average temperature change provides a better metric than radiative forcing for indicating differences in regional climate outcomes due to the path dependency of the effects of radiative forcing. For example, a difference in radiative forcing of 0.5 W m-2 can produce statistically significant differences in regional temperature over an area that ranges between 30% and 85% of the land surface, depending on the forcing pathway.

  4. Global thermohaline circulation. Part 1: Sensitivity to atmospheric moisture transport

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A global ocean general circulation model of idealized geometry, combined with an atmospheric model based on observed transports of heat, momentum, and moisture, is used to explore the sensitivity of the global conveyor belt circulation to the surface freshwater fluxes, in particular the effects of meridional atmospheric moisture transports. The numerical results indicate that the equilibrium strength of the North Atlantic Deep Water (NADW) formation increases as the global freshwater transports increase. However, the global deep water formation--that is, the sum of the NADW and the Southern Ocean Deep Water formation rates--is relatively insensitive to changes of the freshwater flux. Perturbations to the meridional moisture transports of each hemisphere identify equatorially asymmetric effects of the freshwater fluxes. The results are consistent with box model results that the equilibrium NADW formation is primarily controlled by the magnitude of the Southern Hemisphere freshwater flux. However, the results show that the Northern Hemisphere freshwater flux has a strong impact on the transient behavior of the North Atlantic overturning. Increasing this flux leads to a collapse of the conveyor belt circulation, but the collapse is delayed if the Southern Hemisphere flux also increases. The perturbation experiments also illustrate that the rapidity of collapse is affected by random fluctuations in the wind stress field.

  5. Putting order into the development of sensitivity to global motion.

    PubMed

    Ellemberg, D; Lewis, T L; Dirks, M; Maurer, D; Ledgeway, T; Guillemot, J-P; Lepore, F

    2004-01-01

We studied differences in the development of sensitivity to first- versus second-order global motion by comparing the motion coherence thresholds of 5-year-olds and adults tested at three speeds (1.5, 6, and 9 degrees s(-1)). We used Random Gabor Kinematograms (RGKs) formed with luminance-modulated (first-order) or contrast-modulated (second-order) concentric Gabor patterns with a sinusoidal spatial frequency of 3 c deg(-1). To achieve equal visibility, modulation depth was set at 30% for first-order Gabors and at 100% for second-order Gabors. Subjects were 24 adults and 24 5-year-olds. For both first- and second-order global motion, the motion coherence threshold of 5-year-olds was less mature for the slowest speed (1.5 degrees s(-1)) than for the two faster speeds (6 and 9 degrees s(-1)). In addition, at the slowest speed, the immaturity was greater for second-order than for first-order global motion. The findings suggest that the extrastriate mechanisms underlying the perception of global motion are different, at least in part, for first- versus second-order signals and for slower versus faster speeds. They also suggest that those separate mechanisms mature at different rates during middle childhood.

  6. Use of global sensitivity analysis in quantitative microbial risk assessment: application to the evaluation of a biological time temperature integrator as a quality and safety indicator for cold smoked salmon.

    PubMed

    Ellouze, M; Gauchi, J-P; Augustin, J-C

    2011-06-01

The aim of this study was to apply a global sensitivity analysis (SA) method in model simplification and to evaluate (eO)®, a biological Time Temperature Integrator (TTI), as a quality and safety indicator for cold smoked salmon (CSS). Models were thus developed to predict the evolution of Listeria monocytogenes and the indigenous food flora in CSS and to predict the TTI endpoint. A global SA was then applied to the three models to identify the least important factors and simplify the models accordingly. Results showed that the subset of the most important factors of the three models was mainly composed of the durations and temperatures of two chill chain links that are out of the control of the manufacturers: the domestic refrigerator and the retail/cabinet links. Then, the simplified versions of the three models were run with 10(4) time temperature profiles representing the variability associated with the microbial behavior, the TTI evolution and the French chill chain characteristics. The results were used to assess the distributions of the microbial contaminations obtained at the TTI endpoint and at the end of the simulated profiles and proved that, in the case of poor storage conditions, use of the TTI could reduce the number of unacceptable foods by 50%.
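The Monte Carlo step, running simplified models over many sampled chill-chain conditions, can be sketched as follows. All distributions and growth parameters here are invented placeholders, not the paper's calibrated values; a square-root-type growth model stands in for the full predictive microbiology.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000                                # matches the study's 10^4 profiles

t_retail = rng.uniform(24.0, 120.0, n)    # hours in the retail/cabinet link
T_retail = rng.normal(4.0, 1.0, n)        # deg C
t_home = rng.uniform(24.0, 240.0, n)      # hours in the domestic refrigerator
T_home = rng.normal(6.0, 2.0, n)          # deg C, the more variable link

def log10_growth(T, t_hours, b=0.02, T_min=-2.0):
    mu = (b * np.maximum(T - T_min, 0.0)) ** 2   # sqrt-model growth rate, 1/h
    return mu * t_hours / np.log(10.0)           # convert ln to log10 units

total = log10_growth(T_retail, t_retail) + log10_growth(T_home, t_home)
frac_over = float(np.mean(total > 2.0))   # share of profiles gaining >2 log10
```

Ranking how much each sampled input drives `total` is exactly where a global SA (e.g. Sobol' indices) would be applied.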

  7. Economic modeling and sensitivity analysis.

    PubMed

    Hay, J W

    1998-09-01

    The field of pharmacoeconomics (PE) faces serious concerns of research credibility and bias. The failure of researchers to reproduce similar results in similar settings, the inappropriate use of clinical data in economic models, the lack of transparency, and the inability of readers to make meaningful comparisons across published studies have greatly contributed to skepticism about the validity, reliability, and relevance of these studies to healthcare decision-makers. Using a case study in the field of lipid PE, two suggestions are presented for generally applicable reporting standards that will improve the credibility of PE. Health economists and researchers should be expected to provide either the software used to create their PE model or a multivariate sensitivity analysis of their PE model. Software distribution would allow other users to validate the assumptions and calculations of a particular model and apply it to their own circumstances. Multivariate sensitivity analysis can also be used to present results in a consistent and meaningful way that will facilitate comparisons across the PE literature. Using these methods, broader acceptance and application of PE results by policy-makers would become possible. To reduce the uncertainty about what is being accomplished with PE studies, it is recommended that these guidelines become requirements of both scientific journals and healthcare plan decision-makers. The standardization of economic modeling in this manner will increase the acceptability of pharmacoeconomics as a practical, real-world science.

  8. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  9. Identification of the significant factors in food safety using global sensitivity analysis and the accept-and-reject algorithm: application to the cold chain of ham.

    PubMed

    Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee

    2014-06-16

Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying due to season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model generates a realistic product time-temperature history by predicting the product temperature evolution from the thermostat setting and the ambient temperature. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables, such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. Then, a refined analysis was applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure. Copyright © 2014. Published by Elsevier B.V.
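The "thermostat setting plus ambient temperature" idea can be sketched with a lumped-capacitance (Newton cooling) step model. The constants and distributions below are hypothetical, not the paper's fitted heat-transfer parameters:

```python
import numpy as np

def product_temperature(T0, T_air, hours, k=0.8):
    # closed-form Newton cooling T' = k (T_air - T) over a constant-air step
    return T_air + (T0 - T_air) * np.exp(-k * hours)

rng = np.random.default_rng(3)
n = 5_000
thermostat = rng.normal(4.0, 1.5, n)      # display-cabinet setting, deg C
ambient = rng.uniform(10.0, 30.0, n)      # ambient during consumer transport
transport_h = rng.uniform(0.2, 2.0, n)    # hours from store to home

T_cabinet = product_temperature(10.0, thermostat, 24.0)   # long storage step
T_home_arrival = product_temperature(T_cabinet, ambient, transport_h)
# Each simulated temperature history would then drive a growth model for
# Listeria monocytogenes, as in the stochastic part of the study.
```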

  10. Stiff DAE integrator with sensitivity analysis capabilities

    SciTech Connect

    Serban, R.

    2007-11-26

IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
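Forward sensitivity analysis of the kind IDAS provides augments the state equations with sensitivity equations s' = (df/dy)s + df/dp, integrated alongside the state. A minimal toy sketch (explicit Euler on a scalar ODE, not IDAS/SUNDIALS itself):

```python
import math

# State: y' = -p * y, y(0) = 1.  The sensitivity s = dy/dp obeys
# s' = -p * s - y, s(0) = 0; both equations are stepped together.
p, y, s = 1.5, 1.0, 0.0
dt, t_end = 1e-4, 2.0
for _ in range(int(round(t_end / dt))):
    y, s = y + dt * (-p * y), s + dt * (-p * s - y)

exact_s = -t_end * math.exp(-p * t_end)   # analytic dy/dp at t_end
```

The adjoint option solves a single backward problem instead, which is cheaper when there are many parameters and few output functionals.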

  11. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.

  12. Life cycle impact assessment of terrestrial acidification: modeling spatially explicit soil sensitivity at the global scale.

    PubMed

    Roy, Pierre-Olivier; Deschênes, Louise; Margni, Manuele

    2012-08-07

This paper presents a novel life cycle impact assessment (LCIA) approach to derive spatially explicit soil sensitivity indicators for terrestrial acidification. This global approach is compatible with a subsequent damage assessment, making it possible to consistently link the developed midpoint indicators with a later endpoint assessment along the cause-effect chain, a prerequisite in LCIA. Four different soil chemical indicators were preselected to evaluate sensitivity factors (SFs) for regional receiving environments at the global scale, namely the base cations to aluminum ratio, aluminum to calcium ratio, pH, and aluminum concentration. These chemical indicators were assessed using the PROFILE geochemical steady-state soil model and a global data set of regional soil parameters developed specifically for this study. Results showed that the most sensitive regions (i.e., where SF is maximized) are in Canada, northern Europe, the Amazon, central Africa, and East and Southeast Asia. However, the approach is not bereft of uncertainty. Indeed, a Monte Carlo analysis showed that input parameter variability may induce SF variations of over 6 orders of magnitude for certain chemical indicators. These findings improve current practices and enable the development of regional characterization models to assess regional life cycle inventories in a global economy.

  13. Sensitivity Analysis for Multidisciplinary Systems (SAMS)

    DTIC Science & Technology

    2016-12-01

AFRL-RQ-WP-TM-2017-0017. Sensitivity Analysis for Multidisciplinary Systems (SAMS). Richard D. Snyder, Design & Analysis Branch, Aerospace Vehicles ..., February 2017. In-house effort (no grant) comprising an interim briefing for this work effort. PA Case Number 88ABW-2016-6159; Clearance Date: 30 Nov 2016.

  14. Sensitivity of flood events to global climate change

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Dimou, George

    1997-04-01

The sensitivity of Acheloos river flood events at the outfall of the mountainous Mesochora catchment in Central Greece was analysed under various scenarios of global climate change. The climate change pattern was simulated through a set of hypothetical and monthly GISS (Goddard Institute for Space Studies) scenarios of temperature increase coupled with precipitation changes. The daily outflow of the catchment, which is dominated by spring snowmelt runoff, was simulated by the coupling of snowmelt and soil moisture accounting models of the US National Weather Service River Forecast System. Two threshold levels were used to define a flood day—the double and triple long-term mean daily streamflow—and the flood parameters (occurrences, duration, magnitude, etc.) for these cases were determined. Despite the complicated response of flood events to temperature increase and threshold, both hypothetical and monthly GISS representations of climate change resulted in more and longer flood events for climates with increased precipitation. All climates yielded larger flood volumes and greater mean values of flood peaks with respect to precipitation increase. The lower threshold resulted in more and longer flood occurrences, as well as smaller flood volumes and peaks than those of the upper one. The combination of higher and more frequent flood events could lead to greater risks of inundation and possible damage to structures. Furthermore, the winter swelling of the streamflow could increase erosion of the river bed and banks and hence modify the river profile.

  15. Electric dipole moments: A global analysis

    NASA Astrophysics Data System (ADS)

    Chupp, Timothy; Ramsey-Musolf, Michael

    2015-03-01

    We perform a global analysis of searches for the permanent electric dipole moments (EDMs) of the neutron, neutral atoms, and molecules in terms of six leptonic, semileptonic, and nonleptonic interactions involving photons, electrons, pions, and nucleons. By translating the results into fundamental charge-conjugation-parity symmetry (CP) violating effective interactions through dimension six involving standard model particles, we obtain rough lower bounds on the scale of beyond the standard model CP-violating interactions ranging from 1.5 TeV for the electron EDM to 1300 TeV for the nuclear spin-independent electron-quark interaction. We show that planned future measurements involving systems or combinations of systems with complementary sensitivities to the low-energy parameters may extend the mass reach by an order of magnitude or more.

  16. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. The complex-variable implementation of sensitivity analysis for DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.

  17. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
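Two of the reviewed techniques, one-at-a-time partial derivatives and regression-based measures (standardized regression coefficients, SRCs), can be contrasted on a toy model. The linear model and sample size below are invented for illustration:

```python
import numpy as np

def model(x1, x2, x3):
    return 3.0 * x1 + 1.0 * x2 + 0.1 * x3

# One-at-a-time: forward-difference partial derivatives at a base point.
base = np.array([1.0, 1.0, 1.0])
oat = np.array([
    (model(*(base + 0.01 * np.eye(3)[i])) - model(*base)) / 0.01
    for i in range(3)
])                                  # recovers the coefficients [3, 1, 0.1]

# Regression-based: fit standardized inputs/output over a sampled design.
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, size=(500, 3))
y = model(*X.T)
Xs = (X - X.mean(0)) / X.std(0)     # standardize inputs
ys = (y - y.mean()) / y.std()       # standardize output
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
# The SRCs rank the factors identically for this additive linear model;
# the two techniques diverge for nonlinear or interacting models.
```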

  18. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eigenvalue and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
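The scheme can be sketched on a toy two-degree-of-freedom system (hypothetical stiffness values, not the paper's structures): the baseline operator is factored once, the perturbed system is solved by a fixed-point reanalysis that reuses the baseline inverse, and a forward difference yields the displacement sensitivity.

```python
import numpy as np

K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])   # baseline stiffness
f = np.array([1.0, 2.0])                    # load vector
dx = 1e-3                                   # design perturbation step
dK = dx * np.array([[1.0, 0.0], [0.0, 2.0]])  # dK/dx scaled by the step

K0_inv = np.linalg.inv(K0)      # stands in for the one-time factorization
u0 = K0_inv @ f                 # baseline displacements
u = u0.copy()
for _ in range(50):             # converges since ||K0^-1 dK|| << 1
    u = K0_inv @ (f - dK @ u)   # reanalysis reusing the baseline factors

du_dx = (u - u0) / dx           # forward-difference design sensitivity
exact = np.linalg.solve(K0 + dK, f)   # direct solve, for comparison only
```

The fixed point of the iteration satisfies (K0 + dK)u = f exactly, so no new decomposition is needed for each perturbed design.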

  19. New Approaches to Derive Aerosol-Cloud Sensitivity from Global Observations

    NASA Astrophysics Data System (ADS)

    Andersen, Hendrik; Cermak, Jan; Fuchs, Julia

    2017-04-01

    This contribution presents novel satellite-based approaches to analyze interactions between aerosols and marine liquid water clouds (ACI) on a global scale. Clouds play a central role in the Earth's radiative budget by increasing the albedo but also by interacting with outgoing thermal radiation, leading to a net cooling effect. Cloud properties are determined by environmental conditions, as cloud formation requires sufficiently saturated conditions as well as condensation nuclei on which the water vapor can condense. The ways in which aerosols, acting as condensation nuclei, influence the optical, micro- and macrophysical properties of clouds are among the largest remaining uncertainties in climate research. In particular, cloud droplet size is believed to be impacted, and subsequently cloud reflectivity, lifetime, and precipitation susceptibility may be modified. Advances in the understanding of the processes that govern liquid-water cloud properties are of great importance in order to increase the accuracy of climate model predictions of a changing climate. Two methods that illustrate how global satellite retrievals may be combined with reanalysis data sets to enhance knowledge of global patterns of ACI are presented: 1. A novel change-point analysis is presented to detect aerosol loadings at which cloud droplet size shows the greatest sensitivity to changes in aerosol loading. The method is applied to Terra MODIS L3 data sets; patterns of the maximum aerosol-cloud sensitivity are analyzed. Results point towards the importance of water-vapor availability as the framework in which ACI take place. 2. In a multivariate approach to analyzing ACI on a system scale, global monthly aerosol, cloud and meteorology data sets are applied in artificial neural networks (ANN). The ability of ANNs to predict global cloud patterns is demonstrated and sensitivities are subsequently derived. On this basis, the magnitude of aerosol indirect effects is compared to other determinants, pointing

  20. Sensitivity analysis applied to stalled airfoil wake and steady control

    NASA Astrophysics Data System (ADS)

    Patino, Gustavo; Gioria, Rafael; Meneghini, Julio

    2014-11-01

    The sensitivity of an eigenvalue to base flow modifications induced by an external force is applied to the global unstable modes associated with the onset of vortex shedding in the wake of a stalled airfoil. In this work, the flow regime is close to the first instability of the system, and its associated eigenvalue/eigenmode is determined. The sensitivity analysis to a general pointwise external force establishes the regions where control devices must be placed in order to stabilize the global modes. Different types of steady control devices, passive and active, are used in the regions predicted by the sensitivity analysis to check the suppression of vortex shedding, i.e., that the primary instability bifurcation is delayed. The new eigenvalue, modified by the action of the device, is also calculated. Finally, the spectral finite element method is employed to determine flow characteristics before and after the bifurcation in order to cross-check the results.

  1. Individual differences in children's global motion sensitivity correlate with TBSS-based measures of the superior longitudinal fasciculus.

    PubMed

    Braddick, Oliver; Atkinson, Janette; Akshoomoff, Natacha; Newman, Erik; Curley, Lauren B; Gonzalez, Marybel Robledo; Brown, Timothy; Dale, Anders; Jernigan, Terry

    2016-12-16

    Reduced global motion sensitivity, relative to global static form sensitivity, has been found in children with many neurodevelopmental disorders, leading to the "dorsal stream vulnerability" hypothesis (Braddick et al., 2003). Individual differences in typically developing children's global motion thresholds have been shown to be associated with variations in specific parietal cortical areas (Braddick et al., 2016). Here, in 125 children aged 5-12 years, we relate individual differences in global motion and form coherence thresholds to fractional anisotropy (FA) in the superior longitudinal fasciculus (SLF), a major fibre tract communicating between the parietal lobe and anterior cortical areas. We find a positive correlation between FA of the right SLF and individual children's sensitivity to global motion coherence, while FA of the left SLF shows a negative correlation. Further analysis of parietal cortical area data shows that this association is also asymmetrical, with a stronger link to global motion sensitivity in the left hemisphere. None of these associations hold for an analogous measure of global form sensitivity. We conclude that a complex pattern of structural asymmetry, including the parietal lobe and the superior longitudinal fasciculus, is specifically linked to the development of sensitivity to global visual motion. This pattern suggests that individual differences in motion sensitivity are primarily linked to parietal brain areas interacting with frontal systems in making decisions on integrated motion signals, rather than in the extra-striate visual areas that perform the initial integration. The basis of motion processing deficits in neurodevelopmental disorders may depend on these same structures. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
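    The variance-based (Sobol) indices that VARS subsumes are commonly estimated by "pick-and-freeze" sampling over two independent sample matrices. The sketch below uses pseudo-random sampling and a hypothetical additive test model; a production implementation would use low-discrepancy sequences and far richer estimators:

```python
import random

def sobol_first_order(model, k, n=20000, seed=1):
    """Saltelli-style pick-and-freeze estimator of first-order Sobol indices
    for k uniform inputs on [0, 1] (a sketch, not a production GSA tool)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [model(r) for r in A]
    yB = [model(r) for r in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(k):
        # evaluate on A with column i swapped in from B
        yABi = [model(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        # partial variance V_i ~ E[ y_B * (y_ABi - y_A) ]
        S.append(sum(yB[j] * (yABi[j] - yA[j]) for j in range(n)) / (n * var))
    return S

# additive test model y = 2*x1 + x2: analytic indices are S = (0.8, 0.2)
S = sobol_first_order(lambda x: 2 * x[0] + x[1], k=2)
```

The cost of n*(k + 2) model runs is exactly the efficiency bottleneck that frameworks such as STAR-VARS aim to reduce.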

  3. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  4. A global sensitivity tool for cardiac cell modeling: Application to ionic current balance and hypertrophic signaling.

    PubMed

    Sher, Anna A; Cooling, Michael T; Bethwaite, Blair; Tan, Jefferson; Peachey, Tom; Enticott, Colin; Garic, Slavisa; Gavaghan, David J; Noble, Denis; Abramson, David; Crampin, Edmund J

    2010-01-01

    Cardiovascular diseases are the major cause of death in the developed countries. Identifying key cellular processes involved in generation of the electrical signal and in regulation of signal transduction pathways is essential for unraveling the underlying mechanisms of heart rhythm behavior. Computational cardiac models provide important insights into cardiovascular function and disease. Sensitivity analysis presents a key tool for exploring the large parameter space of such models, in order to determine the key factors determining and controlling the underlying physiological processes. We developed a new global sensitivity analysis tool which implements the Morris method, a global sensitivity screening algorithm, on the Nimrod platform, a distributed-resources software toolkit. The newly developed tool has been validated using a 30-parameter model of the IP3-calcineurin signal transduction pathway. The key driving factors of the IP3 transient behaviour have been calculated and confirmed to agree with previously published data. We next demonstrated the use of this method as an assessment tool for characterizing the structure of cardiac ionic models. In three recent human ventricular myocyte models, we examined the contribution of transmembrane currents to the shape of the electrical signal (i.e., to action potential duration). The resulting profiles of the ionic current balance demonstrated the highly nonlinear nature of cardiac ionic models and identified key players in different models. Such profiling suggests new avenues for development of methodologies to predict drug action effects in cardiac cells.
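    The Morris screening the tool implements can be sketched generically: build random trajectories through a discretized parameter space, step one factor at a time, and average the absolute "elementary effects". The test model below is hypothetical; the real tool distributes these runs across Nimrod resources:

```python
import random

def morris_screening(model, k, r=30, levels=4, seed=2):
    """Morris elementary-effects screening (sketch). Returns mu* per factor,
    the mean absolute elementary effect, a global importance measure."""
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))            # standard Morris step
    grid = [i / (levels - 1) for i in range(levels)]
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random start on the grid, keeping room for one step of size delta
        x = [rng.choice([g for g in grid if g + delta <= 1.0]) for _ in range(k)]
        y = model(x)
        for i in rng.sample(range(k), k):            # step factors in random order
            x2 = list(x)
            x2[i] += delta
            y2 = model(x2)
            effects[i].append(abs((y2 - y) / delta))
            x, y = x2, y2
    return [sum(e) / len(e) for e in effects]

# hypothetical model: strong factor, weak factor, inert factor
mu_star = morris_screening(lambda x: 10 * x[0] + x[1] + 0.0 * x[2], k=3)
```

With r trajectories the cost is r*(k + 1) model runs, which is why Morris is the usual choice for screening large parameter spaces before a full variance-based analysis.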

  5. Structural sensitivity analysis: Methods, applications and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.

  6. Structural sensitivity analysis: Methods, applications, and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.

  7. Parameter Sensitivity for Discriminant Analysis.

    ERIC Educational Resources Information Center

    Tate, Richard L.; Bryant, John L.

    1986-01-01

    The shape of the response surface associated with a discriminant analysis provides insight into the value of the derived optimal discriminant variates. A procedure for the determination of "indifference regions," presented in this article, allows the assessment of the degree of flatness of the response surface for any analysis.…

  8. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and weighted averages of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
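    Of the listed methods, the partial rank correlation coefficient (PRCC) is easy to illustrate: rank-transform inputs and output, then correlate each input with the output while controlling for the others. A minimal three-variable sketch (a full tool would control for arbitrarily many parameters via regression residuals); the test model is hypothetical:

```python
import math
import random

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((a[i] - ma) * (b[i] - mb) for i in range(n))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((x - mb) ** 2 for x in b)
    return cov / math.sqrt(va * vb)

def prcc(x, z, y):
    """Partial rank correlation of x with y, controlling for z."""
    rxy = pearson(ranks(x), ranks(y))
    rxz = pearson(ranks(x), ranks(z))
    rzy = pearson(ranks(z), ranks(y))
    return (rxy - rxz * rzy) / math.sqrt((1 - rxz ** 2) * (1 - rzy ** 2))

rng = random.Random(0)
x1 = [rng.random() for _ in range(200)]
x2 = [rng.random() for _ in range(200)]
# hypothetical model: monotone but strongly nonlinear in x1, weak in x2
y = [math.exp(3 * a) + 0.1 * b for a, b in zip(x1, x2)]
```

Because PRCC works on ranks, it captures the monotone-but-nonlinear dependence on x1 that a plain linear correlation would understate.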

  9. Sensitivity of global river discharges under Holocene and future climate conditions

    NASA Astrophysics Data System (ADS)

    Aerts, J. C. J. H.; Renssen, H.; Ward, P. J.; de Moel, H.; Odada, E.; Bouwer, L. M.; Goosse, H.

    2006-10-01

    A comparative analysis of global river basins shows that some river discharges are more sensitive to future climate change for the coming century than to natural climate variability over the last 9000 years. In these basins (Ganges, Mekong, Volta, Congo, Amazon, Murray-Darling, Rhine, Oder, Yukon) future discharges increase by 6-61%. These changes are of similar magnitude to changes over the last 9000 years. Some rivers (Nile, Syr Darya) experienced strong reductions in discharge over the last 9000 years (17-56%), but show much smaller responses to future warming. The simulation results for the last 9000 years are validated with independent proxy data.

  10. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variables with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors, and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such sensitivity analysis, demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration; these equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  11. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  12. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization.

    PubMed

    Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M

    2013-11-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. © 2013 John Wiley & Sons Ltd and International Behavioural and Neural Genetics Society.

  13. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
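    A global Chebyshev fit of the kind described can be sketched generically: sample the field at Chebyshev nodes on [-1, 1] (an assumed normalized interval, not the report's actual pressure-field code), recover coefficients from the discrete orthogonality relation, and evaluate with the Clenshaw recurrence:

```python
import math

def cheb_coeffs(f, n):
    """Chebyshev coefficients of the degree-(n-1) interpolant of f on [-1, 1],
    sampled at Chebyshev nodes and inverted via discrete orthogonality."""
    fv = [f(math.cos(math.pi * (j + 0.5) / n)) for j in range(n)]
    c = [2.0 / n * sum(fv[j] * math.cos(math.pi * k * (j + 0.5) / n)
                       for j in range(n)) for k in range(n)]
    c[0] /= 2.0
    return c

def cheb_eval(c, x):
    # Clenshaw recurrence for sum_k c[k] * T_k(x)
    b1 = b2 = 0.0
    for ck in reversed(c[1:]):
        b1, b2 = ck + 2.0 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

coeffs = cheb_coeffs(math.cos, 10)   # smooth stand-in for a pressure profile
approx = cheb_eval(coeffs, 0.3)
```

For smooth fields this converges spectrally, but a sharp local feature (such as a shock in the pressure field) degrades the whole global fit, which is why the report falls back on panel-based local interpolation.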

  14. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  16. SILAC for global phosphoproteomic analysis.

    PubMed

    Pimienta, Genaro; Chaerkady, Raghothama; Pandey, Akhilesh

    2009-01-01

    Establishing the phosphorylation pattern of proteins in a comprehensive fashion is an important goal of a majority of cell signaling projects. Phosphoproteomic strategies should be designed in such a manner as to identify sites of phosphorylation as well as to provide quantitative information about the extent of phosphorylation at the sites. In this chapter, we describe an experimental strategy that outlines such an approach using stable isotope labeling with amino acids in cell culture (SILAC) coupled to LC-MS/MS. We highlight the importance of quantitative strategies in signal transduction as a platform for a systematic and global elucidation of biological processes.

  17. Impact of diabetes mellitus on the clinical management of global cardiovascular risk: analysis of the results of the Evaluation of Final Feasible Effect of Control Training and Ultra Sensitization (EFFECTUS) educational program.

    PubMed

    Tocci, Giuliano; Ferrucci, Andrea; Guida, Pietro; Avogaro, Angelo; Comaschi, Marco; Corsini, Alberto; Cortese, Claudio; Giorda, Carlo Bruno; Manzato, Enzo; Medea, Gerardo; Mureddu, Gian Francesco; Riccardi, Gabriele; Titta, Giulio; Ventriglia, Giuseppe; Zito, Giovanni Battista; Volpe, Massimo

    2011-09-01

    The Evaluation of Final Feasible Effect of Ultra Control Training and Sensitization (EFFECTUS) study is aimed at implementing global cardiovascular (CV) risk management in Italy. To evaluate the impact of diabetes mellitus (DM) on attitudes and preferences for clinical management of global CV risk among physicians treating diabetic or nondiabetic patients. Involved physicians were asked to submit data into a study-designed case-report form, covering the first 10 adult outpatients consecutively seen in May 2006. All available clinical data were centrally analyzed for global CV risk assessment and CV risk profile characterization. Patients were stratified according to the presence or absence of DM. Overall, 1078 physicians (27% female, ages 50 ± 7 y) collected data of 9904 outpatients (46.5% female, ages 67 ± 9 y), among whom 3681 (37%) had a diagnosis of DM at baseline. Diabetic patients were older and had higher prevalence of obesity, hypertension, dyslipidemia, and associated CV diseases than nondiabetic individuals (P<0.001). They had higher systolic blood pressure, total cholesterol, triglycerides, and creatinine levels, but lower high-density lipoprotein cholesterol levels than nondiabetic patients (P<0.001). Higher numbers of blood pressure and lipid-lowering drugs and antiplatelet agents were used in diabetic than in nondiabetic patients (P<0.001). The EFFECTUS study confirmed higher CV risk and more CV drug prescriptions in diabetic than in nondiabetic patients. Presence of DM at baseline significantly improved clinical data collection. Such an approach, however, was not paralleled by a better control of global CV risk profile, which was significantly worse in the former than in the latter group. © 2011 Wiley Periodicals, Inc.

  18. Dynamic analysis of global copper flows. Global stocks, postconsumer material flows, recycling indicators, and uncertainty evaluation.

    PubMed

    Glöser, Simon; Soulier, Marcel; Tercero Espinoza, Luis A

    2013-06-18

    We present a dynamic model of global copper stocks and flows which allows a detailed analysis of recycling efficiencies, copper stocks in use, and dissipated and landfilled copper. The model is based on historical mining and refined copper production data (1910-2010) enhanced by a unique data set of recent global semifinished goods production and copper end-use sectors provided by the copper industry. To enable the consistency of the simulated copper life cycle in terms of a closed mass balance, particularly the matching of recycled metal flows to reported historical annual production data, a method was developed to estimate the yearly global collection rates of end-of-life (postconsumer) scrap. Based on this method, we provide estimates of 8 different recycling indicators over time. The main indicator for the efficiency of global copper recycling from end-of-life (EoL) scrap, the EoL recycling rate, was estimated to be 45% on average, ±5% (one standard deviation) due to uncertainty and variability over time in the period 2000-2010. As uncertainties of specific input data (mainly concerning assumptions on end-use lifetimes and their distribution) are high, a sensitivity analysis with regard to the effect of uncertainties in the input data on the calculated recycling indicators was performed. The sensitivity analysis included a stochastic (Monte Carlo) uncertainty evaluation with 10^5 simulation runs.
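    A Monte Carlo uncertainty evaluation of this kind can be sketched as follows. The input distributions and flow values here are purely illustrative placeholders, not the paper's data; only the structure (sample uncertain inputs, recompute the indicator 10^5 times, summarize) mirrors the described procedure:

```python
import random
import statistics

def eol_recycling_rate(rng):
    # hypothetical uncertain inputs (illustrative values, not the paper's data)
    collected = rng.gauss(9.0, 0.6)    # EoL scrap collected, Mt/yr
    yield_eff = rng.gauss(0.85, 0.03)  # recovery yield of scrap processing
    generated = rng.gauss(17.0, 1.2)   # EoL scrap generated, Mt/yr
    return collected * yield_eff / generated

rng = random.Random(42)
samples = [eol_recycling_rate(rng) for _ in range(10 ** 5)]
mean = sum(samples) / len(samples)
sd = statistics.stdev(samples)         # one-standard-deviation uncertainty
```

Reporting the indicator as mean ± sd mirrors the "45% ± 5% (one standard deviation)" style of the abstract, with the spread driven by the input uncertainties rather than sampling noise.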

  19. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  20. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
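    Of the four smoothers listed, locally weighted regression is the simplest to sketch: fit a weighted straight line in a sliding window with tricube weights. This is a generic minimal LOESS, not the authors' stepwise implementation; the quadratic test data are hypothetical:

```python
def loess(xs, ys, x0, span=0.5):
    """Locally weighted linear regression (LOESS) at x0 with tricube weights:
    a minimal single-predictor sketch of the first smoother in the paper."""
    n = len(xs)
    m = max(2, int(span * n))                        # points in the local window
    d = sorted(abs(x - x0) for x in xs)[m - 1] or 1e-12   # local bandwidth
    w = [(1.0 - min(abs(x - x0) / d, 1.0) ** 3) ** 3 for x in xs]
    sw = sum(w)
    swx = sum(wi * x for wi, x in zip(w, xs))
    swy = sum(wi * y for wi, y in zip(w, ys))
    swxx = sum(wi * x * x for wi, x in zip(w, xs))
    swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    b = (sw * swxy - swx * swy) / (sw * swxx - swx ** 2)  # local slope
    a = (swy - b * swx) / sw                              # local intercept
    return a + b * x0

# a nonlinear input-output relationship that a global straight line would misfit
xs = [i / 50 for i in range(51)]
ys = [x * x for x in xs]
yhat = loess(xs, ys, 0.5)
```

Because the fit is local, the smoother tracks the curvature that a global linear regression (and hence linear-regression-based sensitivity measures) would miss, which is the central point of the paper.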

  1. Sensitivity of global greenhouse gas budgets to tropospheric ozone pollution mediated by the biosphere

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Shugart, Herman H.; Lerdau, Manuel T.

    2017-08-01

    Tropospheric ozone (O3), a harmful secondary air pollutant, can affect the climate via direct radiative forcing and by modifying the radiative forcing of aerosols through its role as an atmospheric oxidant. Moreover, O3 exerts a strong oxidative pressure on the biosphere and indirectly influences the climate by altering the materials and energy exchange between terrestrial ecosystems and the atmosphere. However, the magnitude by which O3 affects the global budgets of greenhouse gases (GHGs: CO2, CH4, and N2O) through altering the land-atmosphere exchange is largely unknown. Here we assess the sensitivity of these budgets to tropospheric O3 pollution based on a meta-analysis of experimental studies on the effects of elevated O3 on GHG exchange between terrestrial ecosystems and the atmosphere. We show that across ecosystems, elevated O3 suppresses N2O emissions and both CH4 emissions and uptake, and stimulates soil CO2 emissions only at relatively high concentrations. Therefore, the soil system would be transformed from a sink into a source of GHGs as O3 levels increase. The global atmospheric budget of GHGs is sensitive to O3 pollution largely because of the carbon dioxide accumulation resulting from suppressed vegetation carbon uptake; the negative contributions from suppressed CH4 and N2O emissions can offset only ˜10% of CO2 emissions from the soil-vegetation system. Based on empirical data, this work, though with uncertainties, provides the first assessment of the sensitivity of global GHG budgets to O3 pollution, representing a necessary step towards fully understanding and evaluating O3-climate feedbacks mediated by the biosphere.

  2. A pathway analysis of global aerosol processes

    NASA Astrophysics Data System (ADS)

    Schutgens, N. A. J.; Stier, P.

    2014-06-01

    We present a detailed budget of the changes in atmospheric aerosol mass and numbers due to various processes: emission, nucleation, coagulation, H2SO4 condensation and in-cloud production, ageing and deposition. The budget is created from monthly-averaged tracer tendencies calculated by the global aerosol model ECHAM5.5-HAM2 and allows us to investigate process contributions at various length- and time-scales. As a result, we show in unprecedented detail what processes drive the evolution of aerosol. In particular, we show that the processes that affect aerosol masses are quite different from those affecting aerosol numbers. Condensation of H2SO4 gas onto pre-existing particles is an important process, dominating the growth of small particles in the nucleation mode to the Aitken mode and the ageing of hydrophobic matter. Together with in-cloud production of H2SO4, it significantly contributes to (and often dominates) the mass burden (and hence composition) of the hydrophilic Aitken and accumulation mode particles. Particle growth itself is the leading source of number densities in the hydrophilic Aitken and accumulation modes, with their hydrophobic counterparts contributing (even locally) relatively little. As expected, the coarse mode is dominated by primary emissions and mostly decoupled from the smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that, relative to other processes, it is a rather unimportant contributor to composition changes of aerosol. The analysis is extended with sensitivity studies where the impact of a lower model resolution or pre-industrial emissions is shown to be small. We discuss the use of the current budget for model simplification, prioritisation of model improvements, identification of potential structural model errors and model evaluation against observations.

  3. A pathway analysis of global aerosol processes

    NASA Astrophysics Data System (ADS)

    Schutgens, N. A. J.; Stier, P.

    2014-11-01

    We present a detailed budget of the changes in atmospheric aerosol mass and numbers due to various processes: emission (including instant condensation of soluble biogenic emissions), nucleation, coagulation, H2SO4 condensation and in-cloud production, aging and deposition. The budget is created from monthly averaged tracer tendencies calculated by the global aerosol model ECHAM5.5-HAM2 and allows us to investigate process contributions at various length-scales and timescales. As a result, we show in unprecedented detail what processes drive the evolution of aerosol. In particular, we show that the processes that affect aerosol masses are quite different from those that affect aerosol numbers. Condensation of H2SO4 gas onto pre-existing particles is an important process, dominating the growth of small particles in the nucleation mode to the Aitken mode and the aging of hydrophobic matter. Together with in-cloud production of H2SO4, it significantly contributes to (and often dominates) the mass burden (and hence composition) of the hydrophilic Aitken and accumulation mode particles. Particle growth itself is the leading source of number densities in the hydrophilic Aitken and accumulation modes, with their hydrophobic counterparts contributing (even locally) relatively little. As expected, the coarse mode is dominated by primary emissions and mostly decoupled from the smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that, relative to other processes, it is a rather unimportant contributor to composition changes of aerosol. The analysis is extended with sensitivity studies where the impact of a lower model resolution or pre-industrial emissions is shown to be small. We discuss the use of the current budget for model simplification, prioritization of model improvements, identification of potential structural model errors and model evaluation against observations.

  4. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  5. Sensitivity Analysis in the Model Web

    NASA Astrophysics Data System (ADS)

    Jones, R.; Cornford, D.; Boukouvalas, A.

    2012-04-01

    The Model Web, and in particular the Uncertainty-enabled Model Web being developed in the UncertWeb project, aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular, model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web, users may not have prior experience of the model behaviour in detail. It would therefore be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed, to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular, the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance-based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary, the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In

  6. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    NASA Astrophysics Data System (ADS)

    Bernstein, Diana N.; Neelin, J. David

    2016-06-01

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive "dangerous ranges." The low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  7. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
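    The variance-based decomposition used here can be illustrated with a minimal Saltelli-style estimator of first-order Sobol' indices. This is a generic sketch of the technique, not the VarroaPop analysis; the function name and the uniform-inputs assumption are illustrative.

```python
import numpy as np

def sobol_first_order(model, d, n=2 ** 14, seed=0):
    """First-order Sobol' indices S_j = V[E(Y|X_j)] / V(Y) for a model
    whose d inputs are independent and uniform on [0, 1).

    Uses two independent sample matrices A and B and the Saltelli-style
    estimator mean(yB * (yABj - yA)) / V(Y)."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    S = np.empty(d)
    for j in range(d):
        ABj = A.copy()
        ABj[:, j] = B[:, j]      # A with column j taken from B
        S[j] = np.mean(yB * (model(ABj) - yA)) / var_y
    return S
```

    For an additive test model y = x0 + 2*x1 the estimator recovers the analytic variance shares of 0.2 and 0.8; second-order indices require analogous estimators over pairs of swapped columns.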

  8. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  10. Sensitivity analysis in quantitative microbial risk assessment.

    PubMed

    Zwietering, M H; van Gerwen, S J

    2000-07-15

    The occurrence of foodborne disease remains a widespread problem in both the developing and the developed world. A systematic and quantitative evaluation of food safety is important to control the risk of foodborne diseases. World-wide, many initiatives are being taken to develop quantitative risk analysis. However, the quantitative evaluation of food safety in all its aspects is very complex, especially since in many cases specific parameter values are not available. Often many variables have large statistical variability while the quantitative effect of various phenomena is unknown. Therefore, sensitivity analysis can be a useful tool to determine the main risk-determining phenomena, as well as the aspects that mainly determine the inaccuracy in the risk estimate. This paper presents three stages of sensitivity analysis. First, deterministic analysis selects the most relevant determinants for risk. Overlooking of exceptional, but relevant cases is prevented by a second, worst-case analysis. This analysis finds relevant process steps in worst-case situations, and shows the relevance of variations of factors for risk. The third, stochastic analysis, studies the effects of variations of factors for the variability of risk estimates. Care must be taken that the assumptions made as well as the results are clearly communicated. Stochastic risk estimates are, like deterministic ones, just as good (or bad) as the available data, and the stochastic analysis must not be used to mask lack of information. Sensitivity analysis is a valuable tool in quantitative risk assessment by determining critical aspects and effects of variations.

  11. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  12. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  13. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  14. Automated Sensitivity Analysis of Interplanetary Trajectories

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno

    2017-01-01

    This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software and simplify the process of performing sensitivity analysis, and was ultimately found to outperform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.

  15. Pediatric Pain, Predictive Inference, and Sensitivity Analysis.

    ERIC Educational Resources Information Center

    Weiss, Robert

    1994-01-01

    Coping style and the effects of a counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…

  16. Global thermohaline circulation. Part 2: Sensitivity with interactive atmospheric transports

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A hybrid coupled ocean-atmosphere model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part 1. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional additive flux adjustment, but quantitatively more stable.

  17. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with environment and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  18. Sensitive chiral analysis by CE: an update.

    PubMed

    Sánchez-Hernández, Laura; Crego, Antonio Luis; Marina, María Luisa; García-Ruiz, Carmen

    2008-01-01

    A general view of the different strategies used in recent years to enhance detection sensitivity in chiral analysis by CE is provided in this article. With this purpose, and in order to update the previous review by García-Ruiz et al., the articles that appeared on this subject from January 2005 to March 2007 are considered. Three main strategies were employed to increase detection sensitivity in chiral analysis by CE: (i) the use of off-line sample treatment techniques, (ii) the employment of in-capillary preconcentration techniques based on electrophoretic principles, and (iii) the use of detection systems alternative to the widely employed on-column UV-Vis absorption detection. Combinations of two or three of the above-mentioned strategies gave rise to concentration detection limits down to 10(-10) M, enabling enantiomer analysis in a variety of real samples including complex biological matrices.

  19. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  20. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire R.; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
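    With the Jacobian of simulated equivalents in hand, the CSS measure reduces to a root-mean-square of dimensionless scaled sensitivities. The sketch below follows that standard definition; the function and variable names are illustrative, not taken from MODFLOW-2000.

```python
import numpy as np

def composite_scaled_sensitivities(J, b, w):
    """Composite scaled sensitivity (CSS) for each parameter.

    J : (n_obs, n_par) Jacobian, d(simulated equivalent_i)/d(b_j)
    b : (n_par,) parameter values used to scale the derivatives
    w : (n_obs,) observation weights (e.g. 1 / error variance)
    """
    # dimensionless scaled sensitivities: derivative * parameter * sqrt(weight)
    dss = J * b[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]
    # root-mean-square over observations -> one CSS per parameter
    return np.sqrt(np.mean(dss ** 2, axis=0))
```

    A large CSS for a parameter means the existing observations collectively carry substantial information about it, which is why the measure is useful for deciding which parameters can be estimated during calibration.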

  1. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  2. SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS

    PubMed Central

    WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.

    2012-01-01

    Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349

  3. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  4. [Sensitivity analysis in health investment projects].

    PubMed

    Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C

    1994-01-01

    This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.

  5. Demonstration sensitivity analysis for RADTRAN III

    SciTech Connect

    Neuhauser, K S; Reardon, P C

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves.

  6. Sensitivity Analysis for the System Reliability Function

    DTIC Science & Technology

    1987-12-01

Sensitivity analysis is an integral part of virtually every study of system reliability. This paper describes a Monte Carlo method for measuring the variation in system reliability in response to changes in component reliabilities or in system design. (Only these fragments of the abstract are recoverable from the scanned report page.)

  7. Development of sensitivity to global form and motion in macaque monkeys (Macaca nemestrina).

    PubMed

    Kiorpes, Lynne; Price, Tracy; Hall-Haro, Cynthia; Movshon, J Anthony

    2012-06-15

To explore the relative development of the dorsal and ventral extrastriate processing streams, we studied the development of sensitivity to form and motion in macaque monkeys (Macaca nemestrina). We used Glass patterns and random dot kinematograms (RDK) to assay ventral and dorsal stream function, respectively. We tested 24 animals, longitudinally or cross-sectionally, between the ages of 5 weeks and 3 years. Each animal was tested with Glass patterns and RDK stimuli with each of two pattern types--circular and linear--at each age using a two-alternative forced-choice task. We measured coherence threshold for discrimination of the global form or motion pattern from an incoherent control stimulus. Sensitivity to global motion appeared earlier than to global form and was higher at all ages, but performance approached adult levels at similar ages. Infants were most sensitive to large spatial scale (Δx) and fast speeds; sensitivity to fine scale and slow speeds developed more slowly, independently of pattern type. Within the motion domain, pattern type had little effect on overall performance. However, within the form domain, sensitivity for linear Glass patterns was substantially poorer than that for concentric patterns. Our data show comparatively early onset for global motion integration ability, perhaps reflecting early development of the dorsal stream. However, both pathways mature over long time courses, reaching adult levels between 2 and 3 years after birth.
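The coherence-threshold measurement described above can be emulated in simulation: a two-alternative forced-choice staircase run against a hypothetical observer whose percent-correct follows a Weibull psychometric function. The observer's threshold (0.2), slope, starting coherence, and step size below are all assumptions, not the macaque data:

```python
import random

random.seed(1)

def p_correct(coherence, threshold=0.2, slope=3.0):
    """Hypothetical observer: Weibull psychometric function, 2AFC chance = 0.5."""
    return 0.5 + 0.5 * (1.0 - 2.0 ** (-(coherence / threshold) ** slope))

coherence, step = 0.5, 0.05          # starting coherence and step size (assumed)
correct_run, direction, reversals = 0, 0, []

while len(reversals) < 12:
    if random.random() < p_correct(coherence):
        correct_run += 1
        if correct_run < 2:
            continue                  # need two correct in a row to step down
        correct_run, move = 0, -step  # two correct -> make the task harder
    else:
        correct_run, move = 0, +step  # one error -> make the task easier
    if direction and (move > 0) != (direction > 0):
        reversals.append(coherence)   # staircase changed direction
    direction = move
    coherence = min(1.0, max(0.01, coherence + move))

threshold_estimate = sum(reversals[-8:]) / 8.0
print(round(threshold_estimate, 3))
```

A 2-down/1-up rule converges near the 70.7%-correct point, so averaging the coherences at the last reversals gives the threshold estimate.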

  8. Variational global analysis of satellite temperature soundings

    NASA Technical Reports Server (NTRS)

    Halem, M.; Kalnay, E.

    1983-01-01

A variational spherical harmonic analysis is developed for the production of global geopotential height and geostrophic wind fields from the TIROS-N spacecraft's temperature sounding profiles. This scheme is based on Tikhonov's (1964) regularization method, and the smoothing parameter is determined by cross validation. The scheme is noted to be stable and computationally efficient, and it does not depend on a priori information. Its applications to three-dimensional temperature retrievals and to four-dimensional spectral analyses are illustrated.

  9. A global analysis of island pyrogeography

    NASA Astrophysics Data System (ADS)

    Trauernicht, C.; Murphy, B. P.

    2014-12-01

    Islands have provided insight into the ecological role of fire worldwide through research on the positive feedbacks between fire and nonnative grasses, particularly in the Hawaiian Islands. However, the global extent and frequency of fire on islands as an ecological disturbance has received little attention, possibly because 'natural fires' on islands are typically limited to infrequent dry lightning strikes and isolated volcanic events. But because most contemporary fires on islands are anthropogenic, islands provide ideal systems with which to understand the linkages between socio-economic development, shifting fire regimes, and ecological change. Here we use the density of satellite-derived (MODIS) active fire detections for the years 2000-2014 and global data sets of vegetation, climate, population density, and road development to examine the drivers of fire activity on islands at the global scale, and compare these results to existing pyrogeographic models derived from continental data sets. We also use the Hawaiian Islands as a case study to understand the extent to which novel fire regimes can pervade island ecosystems. The global analysis indicates that fire is a frequent disturbance across islands worldwide, strongly affected by human activities, indicating people can more readily override climatic drivers than on continental land masses. The extent of fire activity derived from local records in the Hawaiian Islands reveals that our global analysis likely underestimates the prevalence of fire among island systems and that the combined effects of human activity and invasion by nonnative grasses can create conditions for frequent and relatively large-scale fires. Understanding the extent of these novel fire regimes, and mitigating their impacts, is critical to reducing the current and rapid degradation of native island ecosystems worldwide.

  10. Analytic shape sensitivities and approximations of local and global airframe buckling constraints

    NASA Astrophysics Data System (ADS)

    Shin, Youngwon

An examination of available shell finite elements suitable for buckling analysis of thin-walled airframe structures leads to the selection of a simple, accurate, design-oriented element, which is then used with slight modifications to obtain explicit, closed-form equations for the stiffness and geometric stiffness matrices. In turn, these equations are used to derive explicit expressions for the analytic sensitivities of the stiffness and geometric stiffness matrices with respect to shell shape design variables. With analytic shape sensitivities of structural matrices and corresponding buckling eigenvalues at hand, the resulting new computer capability makes it possible to construct buckling constraint approximations for Approximation-Concepts based structural synthesis, as well as to examine sources of numerical noise which might appear when parametric studies or finite difference sensitivities are carried out using existing FE codes. The simplicity of the shell elements used, and the elimination of the need to carry out numerical integration, lead to computational savings, especially when repetitive analyses have to be carried out during shape design optimization of typical airframes. The new capability is aimed at capturing both local and global modes of buckling failure with the same FE model. Sub-component interaction during buckling can thus be taken into account during shape optimization of wing and fuselage structures. Numerical tests involving isotropic and laminated plates, thin-walled channel sections and a complete wing box of a typical fighter airplane demonstrate the effectiveness and accuracy of the new design-oriented capability. Also, a reduced-order eigensystem, which takes the mode shapes at the reference design as basis vectors for the perturbed design, is derived and compared to the Rayleigh quotient approximation.
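The analytic eigenvalue sensitivities described above rest on a standard identity for the buckling eigenproblem K·φ = λ·K_g·φ: for a simple eigenvalue with the mode K_g-normalized, dλ/dp = φᵀ(∂K/∂p − λ ∂K_g/∂p)φ. The sketch below checks that identity against a central finite difference on toy 2×2 matrices; these matrices are invented stand-ins, not a real shell element:

```python
import numpy as np
from scipy.linalg import eigh

# Toy symmetric "stiffness" and "geometric stiffness" matrices depending on a
# hypothetical shape parameter p (illustrative only).
def K(p):   return np.array([[4.0 + p, 1.0], [1.0, 3.0]])
def Kg(p):  return np.array([[2.0, 0.5 * p], [0.5 * p, 1.0]])
def dK(p):  return np.array([[1.0, 0.0], [0.0, 0.0]])   # dK/dp
def dKg(p): return np.array([[0.0, 0.5], [0.5, 0.0]])   # dKg/dp

p = 0.3
w, v = eigh(K(p), Kg(p), subset_by_index=[0, 0])  # lowest buckling eigenpair
lam, phi = w[0], v[:, 0]
phi = phi / np.sqrt(phi @ Kg(p) @ phi)            # Kg-normalize the mode

# Analytic sensitivity: dlam/dp = phi^T (dK/dp - lam * dKg/dp) phi
dlam = phi @ (dK(p) - lam * dKg(p)) @ phi

# Central finite-difference check
h = 1e-6
lam_p = eigh(K(p + h), Kg(p + h), eigvals_only=True)[0]
lam_m = eigh(K(p - h), Kg(p - h), eigvals_only=True)[0]
fd = (lam_p - lam_m) / (2 * h)
print(abs(dlam - fd) < 1e-5)
```

Consistency with finite differences is precisely the kind of verification used to expose the numerical noise that FD-based sensitivities can suffer from.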

  11. Climate sensitivity of global terrestrial ecosystems' subdaily carbon, water, and energy dynamics.

    NASA Astrophysics Data System (ADS)

    Yu, R.; Ruddell, B. L.; Childers, D. L.; Kang, M.

    2015-12-01

In the context of global climate change, it is important to understand the direction and magnitude with which different ecosystems respond to climate at the global level. In this study, we applied a dynamical process network (DPN) approach combined with an eco-climate system sensitivity model, using global FLUXNET eddy covariance measurements (subdaily net ecosystem exchange of CO2, air temperature, and precipitation) to assess eco-climate system sensitivity to climate and biophysical factors at the flux-site level. For the first time, eco-climate system sensitivity was estimated at the global flux sites and extrapolated to all land covers using an artificial neural network approach together with the MODIS phenology and land cover products, the long-term GLDAS-2 climate product, and the GMTED2010 Global Grid elevation dataset. We produced seasonal eco-climate system DPN maps, which reveal how global carbon dynamics are driven by temperature and precipitation. We also found that the eco-climate system dynamical process structures are more sensitive to temperature, whether directly or indirectly via phenology. Interestingly, if temperature continues rising, the temperature-NEE coupling may increase in tropical rain forest areas while decreasing in tropical desert or savanna areas, meaning that rising temperature could lead to more carbon sequestration in tropical forests but less in tropical drylands. At the same time, phenology showed a positive effect on the temperature-NEE coupling at all pixels, suggesting that increased greenness may strengthen temperature-driven carbon dynamics and consequently carbon sequestration globally. Precipitation showed a relatively strong influence on the precipitation-NEE coupling, especially indirectly via phenology. This study has the potential to support short-term and long-term eco-climate system forecasting.

  12. Sensitivity Analysis of Automated Ice Edge Detection

    NASA Astrophysics Data System (ADS)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.

  13. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs, and the road network. Meanwhile, a sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, improves calculation efficiency and makes the application of vulnerability analysis to large real-world road networks possible. Finally, all of the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in the allocation of resources for mitigation and recovery. PMID:28125706
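The paper's exact traffic utility index is not reproduced here; as a stand-in, the sketch below uses the standard BPR (Bureau of Public Roads) travel-time function and treats inverse travel time as the utility, so segment vulnerability becomes the relative utility loss when the link's capacity degrades. All numbers are hypothetical:

```python
# Hypothetical capacity-degradation vulnerability for one road segment,
# using the standard BPR travel-time function as a stand-in utility model.
def bpr_time(flow, capacity, t0=10.0, alpha=0.15, beta=4):
    """BPR travel time (minutes) on a congested link with free-flow time t0."""
    return t0 * (1 + alpha * (flow / capacity) ** beta)

def vulnerability(flow, capacity, degraded_fraction):
    """Relative loss of traffic utility (1 / travel time) after capacity loss."""
    u_before = 1.0 / bpr_time(flow, capacity)
    u_after = 1.0 / bpr_time(flow, capacity * (1 - degraded_fraction))
    return (u_before - u_after) / u_before

v = vulnerability(flow=900, capacity=1000, degraded_fraction=0.3)
print(round(v, 3))
```

Evaluating this directly for each degraded link, instead of re-running a full traffic assignment, is the kind of shortcut that makes network-wide vulnerability scans tractable.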

  14. Toward a Globally Sensitive Definition of Inclusive Education Based in Social Justice

    ERIC Educational Resources Information Center

    Shyman, Eric

    2015-01-01

    While many policies, pieces of legislation and educational discourse focus on the concept of inclusion, or inclusive education, the field of education as a whole lacks a clear, precise and comprehensive definition that is both globally sensitive and based in social justice. Even international efforts including the UN Convention on the Rights of…

  15. Long Trajectory for the Development of Sensitivity to Global and Biological Motion

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2011-01-01

    We used a staircase procedure to test sensitivity to (1) global motion in random-dot kinematograms moving at 4 degrees and 18 degrees s[superscript -1] and (2) biological motion. Thresholds were defined as (1) the minimum percentage of signal dots (i.e. the maximum percentage of noise dots) necessary for accurate discrimination of upward versus…

  18. Network analysis of global influenza spread.

    PubMed

    Chan, Joseph; Holmes, Antony; Rabadan, Raul

    2010-11-18

Although vaccines pose the best means of preventing influenza infection, strain selection and optimal implementation remain difficult due to antigenic drift and a lack of understanding of global spread. Detecting viral movement by sequence analysis is complicated by skewed geographic and seasonal distributions in viral isolates. We propose a probabilistic method that accounts for sampling bias through spatiotemporal clustering and modeling regional and seasonal transmission as a binomial process. Analysis of H3N2 not only confirmed East-Southeast Asia as a source of new seasonal variants, but also increased the resolution of observed transmission to a country level. H1N1 data revealed similar viral spread from the tropics. Network analysis suggested China and Hong Kong as the origins of new seasonal H3N2 strains and the United States as a region where increased vaccination would maximally disrupt global spread of the virus. These techniques provide a promising methodology for the analysis of any seasonal virus, as well as for the continued surveillance of influenza.

  19. Global climate sensitivity to land surface change: The Mid Holocene revisited

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, Noah S.; Sloan, Lisa C.

    2002-05-01

    Land surface forcing of global climate has been shown both for anthropogenic and non-anthropogenic changes in land surface distribution. Because validation of global climate models (GCMs) is dependent upon the use of accurate boundary conditions, and because changes in land surface distribution have been shown to have effects on climate in areas remote from those changes, we have tested the sensitivity of a GCM to a global Mid Holocene vegetation distribution reconstructed from the fossil record, a first for a 6 ka GCM run. Here we demonstrate that large areas of the globe show statistically significant temperature sensitivity to these land surface changes and that the magnitude of the vegetation forcing is equal to the magnitude of 6 ka orbital forcing, emphasizing the importance of accurate land surface distribution for both model validation and future climate prediction.

  20. Turbulent Sensitivity Analysis for Enhancing Future Aircraft

    DTIC Science & Technology

    2003-10-01

AFRL-VA-WP-TR-2004-3013: Turbulent Sensitivity Analysis for Enhancing Future Aircraft. Andrew G. Godfrey, AeroSoft, Inc., 1872 Pratt Dr., Suite … (Only report cover-page and bibliography fragments were recovered; no abstract is available.)

  1. Chemistry in Protoplanetary Disks: A Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Vasyunin, A. I.; Semenov, D.; Henning, Th.; Wakelam, V.; Herbst, Eric; Sobolev, A. M.

    2008-01-01

We study how uncertainties in the rate coefficients of chemical reactions in the RATE 06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C+, H3+, H2O, NH3, N2H+, and HCNH+. For these species the column densities are not very sensitive to the rate uncertainties, but the abundances in specific regions are. The second group includes CS, CO2, HCO+, H2CO, C2H, CN, HCN, HNC, and other, more complex species, for which high abundances and abundance uncertainties coexist in the same disk region, leading to larger scatters in column densities. However, even for complex and heavy molecules, the dispersion in their column densities is not more than a factor of ~4. We perform a sensitivity analysis of the computed abundances to rate uncertainties and identify those reactions with the most problematic rate coefficients. We conclude that the rate coefficients of about a hundred chemical reactions need to be determined more accurately in order to greatly improve the reliability of modern astrochemical models. This improvement should be an ultimate goal of future laboratory studies and theoretical investigations.
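The rate-uncertainty propagation can be illustrated on a one-reaction toy network: sample the rate log-uniformly within a factor-of-2 uncertainty, solve the conversion A → B analytically, and measure the scatter in the final abundance. The nominal rate, uncertainty factor, and timescale below are invented, not RATE 06 values:

```python
import math
import random

random.seed(0)

# Toy analogue of varying rate coefficients within their uncertainty limits:
# a single conversion A -> B with nominal rate k0 and factor-of-2 uncertainty.
k0, factor, t = 1.0e-9, 2.0, 1.0e9   # invented rate and evolution time

samples = []
for _ in range(2000):
    k = k0 * factor ** random.uniform(-1.0, 1.0)   # log-uniform within limits
    samples.append(1.0 - math.exp(-k * t))         # analytic abundance of B

samples.sort()
median = samples[len(samples) // 2]
scatter = samples[int(0.975 * len(samples))] / samples[int(0.025 * len(samples))]
print(round(median, 3), round(scatter, 2))
```

The 95%-interval scatter factor here plays the role of the factor-of-~4 column-density dispersions reported for complex species in the abstract.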

  2. The global analysis of DEER data.

    PubMed

    Brandon, Suzanne; Beth, Albert H; Hustedt, Eric J

    2012-05-01

    Double Electron-Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies.

  3. The global analysis of DEER data

    NASA Astrophysics Data System (ADS)

    Brandon, Suzanne; Beth, Albert H.; Hustedt, Eric J.

    2012-05-01

    Double Electron-Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies.

  4. Topological sensitivity analysis for systems biology

    PubMed Central

    Babtie, Ann C.; Kirk, Paul; Stumpf, Michael P. H.

    2014-01-01

    Mathematical models of natural systems are abstractions of much more complicated processes. Developing informative and realistic models of such systems typically involves suitable statistical inference methods, domain expertise, and a modicum of luck. Except for cases where physical principles provide sufficient guidance, it will also be generally possible to come up with a large number of potential models that are compatible with a given natural system and any finite amount of data generated from experiments on that system. Here we develop a computational framework to systematically evaluate potentially vast sets of candidate differential equation models in light of experimental and prior knowledge about biological systems. This topological sensitivity analysis enables us to evaluate quantitatively the dependence of model inferences and predictions on the assumed model structures. Failure to consider the impact of structural uncertainty introduces biases into the analysis and potentially gives rise to misleading conclusions. PMID:25512544

  5. A global analysis of neutrino oscillations

    NASA Astrophysics Data System (ADS)

    Fogli, G. L.; Lisi, E.; Marrone, A.; Montanino, D.; Palazzo, A.; Rotunno, A. M.

    2013-02-01

We present a global analysis of neutrino oscillation data, including high-precision measurements of the neutrino mixing angle θ13 at reactor experiments, which have confirmed previous indications in favor of θ13>0. Recent data presented at this Conference are also included. We focus on the correlations between θ13 and the mixing angle θ23, as well as between θ13 and the neutrino CP-violation phase δ. We find interesting indications for θ23<π/4 and possible hints for δ ≈ π, with no significant difference between normal and inverted mass hierarchy.

  6. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
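One CFT-style sensitivity measure, success probability conditioned on bins of a single input, can be sketched as follows. The three input names, the toy requirement, and all coefficients are hypothetical stand-ins for the dispersed Monte Carlo inputs, not the Orion models:

```python
import random

random.seed(42)

# Hypothetical stand-in for the Critical Factors Tool idea: find which dispersed
# input most strongly modulates the probability of meeting a requirement.
N = 20000
inputs = {name: [random.gauss(0, 1) for _ in range(N)]
          for name in ("mass", "thrust", "wind")}

# Toy pass/fail requirement: "miss distance" dominated by the thrust dispersion,
# weakly affected by mass, unaffected by wind (coefficients are invented).
success = [abs(2.0 * inputs["thrust"][i] + 0.8 * inputs["mass"][i]
               + random.gauss(0, 0.5)) < 2.0 for i in range(N)]

def success_spread(values):
    """Spread of P(success) across low/mid/high terciles of one input."""
    order = sorted(range(N), key=lambda i: values[i])
    terciles = (order[:N // 3], order[N // 3:2 * N // 3], order[2 * N // 3:])
    probs = [sum(success[i] for i in t) / len(t) for t in terciles]
    return max(probs) - min(probs)

ranking = sorted(inputs, key=lambda k: success_spread(inputs[k]), reverse=True)
print(ranking)
```

Ranking inputs by the spread of conditional success probability singles out the dispersions that actually drive requirement violations, which is the essence of the tool's factor screening.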

  7. A global analysis of soil acidification caused by nitrogen addition

    NASA Astrophysics Data System (ADS)

    Tian, Dashuan; Niu, Shuli

    2015-02-01

Nitrogen (N) deposition-induced soil acidification has become a global problem. However, the response patterns of soil acidification to N addition and the underlying mechanisms remain far from clear. Here, we conducted a meta-analysis of 106 studies to reveal global patterns of soil acidification in response to N addition. We found that N addition significantly reduced soil pH by 0.26 on average globally. However, the responses of soil pH varied with ecosystem type, N addition rate, N fertilization form, and experimental duration. Soil pH decreased most in grassland, whereas no significant decrease was observed in boreal forest. Soil pH decreased linearly with N addition rate. Addition of urea and NH4NO3 contributed more to soil acidification than NH4-form fertilizer. When the experimental duration was longer than 20 years, the effect of N addition on soil acidification diminished. Environmental factors such as initial soil pH, soil carbon and nitrogen content, precipitation, and temperature all influenced the responses of soil pH. Base cations (Ca2+, Mg2+ and K+) were critically important in buffering against N-induced soil acidification at the early stage. However, N addition has shifted global soils into the Al3+ buffering phase. Overall, this study indicates that acidification in global soils is very sensitive to N deposition, which is greatly modified by biotic and abiotic factors. Global soils are now at a buffering transition from base cations (Ca2+, Mg2+ and K+) to non-base cations (Mn2+ and Al3+). This calls attention to the limitation of base cations and the toxic impact of non-base cations in terrestrial ecosystems under N deposition.
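The pooled effect size at the heart of such a meta-analysis is an inverse-variance weighted mean across studies. The sketch below uses four invented study results, not the 106 analyzed in the paper:

```python
# Minimal inverse-variance meta-analysis sketch (hypothetical study values):
# pooled mean change in soil pH under N addition.
studies = [  # (mean pH change, standard error) -- invented
    (-0.31, 0.05),
    (-0.22, 0.08),
    (-0.40, 0.10),
    (-0.15, 0.06),
]

weights = [1.0 / se ** 2 for _, se in studies]               # precision weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1.0 / sum(weights)) ** 0.5
print(f"pooled delta-pH = {pooled:.2f} +/- {pooled_se:.2f}")
```

Grouping the studies by ecosystem type or N addition rate before pooling is what produces the moderator patterns the abstract reports.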

  8. Disentangling residence time and temperature sensitivity of microbial decomposition in a global soil carbon model

    NASA Astrophysics Data System (ADS)

    Exbrayat, J.-F.; Pitman, A. J.; Abramowitz, G.

    2014-12-01

    Recent studies have identified the first-order representation of microbial decomposition as a major source of uncertainty in simulations and projections of the terrestrial carbon balance. Here, we use a reduced complexity model representative of current state-of-the-art models of soil organic carbon decomposition. We undertake a systematic sensitivity analysis to disentangle the effect of the time-invariant baseline residence time (k) and the sensitivity of microbial decomposition to temperature (Q10) on soil carbon dynamics at regional and global scales. Our simulations produce a range in total soil carbon at equilibrium of ~ 592 to 2745 Pg C, which is similar to the ~ 561 to 2938 Pg C range in pre-industrial soil carbon in models used in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). This range depends primarily on the value of k, although the impact of Q10 is not trivial at regional scales. As climate changes through the historical period, and into the future, k is primarily responsible for the magnitude of the response in soil carbon, whereas Q10 determines whether the soil remains a sink, or becomes a source in the future mostly by its effect on mid-latitude carbon balance. If we restrict our simulations to those simulating total soil carbon stocks consistent with observations of current stocks, the projected range in total soil carbon change is reduced by 42% for the historical simulations and 45% for the future projections. However, while this observation-based selection dismisses outliers, it does not increase confidence in the future sign of the soil carbon feedback. We conclude that despite this result, future estimates of soil carbon and how soil carbon responds to climate change should be more constrained by available data sets of carbon stocks.
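The two knobs of the reduced-complexity model can be made concrete with the first-order decomposition equation dC/dt = NPP − k·Q10^((T−T0)/10)·C, whose equilibrium stock is C = NPP / k_eff. The parameter values below are illustrative assumptions, not the paper's calibration:

```python
# First-order soil carbon sketch: equilibrium stock under a baseline turnover
# rate k and a temperature sensitivity Q10 (all parameter values illustrative).
def equilibrium_carbon(npp, k, q10, temp, t_ref=15.0):
    """Equilibrium soil carbon C = NPP / k_eff, with k_eff = k * Q10^((T-T0)/10)."""
    k_eff = k * q10 ** ((temp - t_ref) / 10.0)
    return npp / k_eff

npp = 60.0           # Pg C / yr, roughly global NPP
k, q10 = 0.04, 2.0   # hypothetical baseline turnover rate (1/yr) and Q10

c_now = equilibrium_carbon(npp, k, q10, temp=15.0)
c_warm = equilibrium_carbon(npp, k, q10, temp=18.0)   # +3 K warming
print(round(c_now), round(c_warm))
```

Here k sets the magnitude of the equilibrium stock while Q10 controls how strongly the stock responds to warming, mirroring the respective roles of k and Q10 described in the abstract.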

  9. Assessing flood risk at the global scale: model setup, results, and sensitivity

    NASA Astrophysics Data System (ADS)

    Ward, Philip J.; Jongman, Brenden; Sperna Weiland, Frederiek; Bouwman, Arno; van Beek, Rens; Bierkens, Marc F. P.; Ligtvoet, Willem; Winsemius, Hessel C.

    2013-12-01

Globally, economic losses from flooding exceeded US$19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP (US$1383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures.
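The last step of the cascade, converting impacts at several return periods into an annual expected impact, is the integral of impact over exceedance probability. A minimal sketch with invented return periods and damages:

```python
# Annual expected damage as the integral of damage over exceedance probability
# (trapezoidal rule); return periods and damages below are hypothetical.
return_periods = [2, 5, 10, 25, 50, 100, 250, 1000]   # years
damages = [0, 1, 3, 8, 14, 20, 28, 40]                # e.g. billion US$

# exceedance probability p = 1/T; sort so p is increasing, then integrate D(p) dp
points = sorted((1.0 / T, d) for T, d in zip(return_periods, damages))
ead = sum((p2 - p1) * (d1 + d2) / 2.0
          for (p1, d1), (p2, d2) in zip(points, points[1:]))
print(round(ead, 3))
```

Because rare events carry small probability weight, the annual expected impact is dominated by the frequent-flood end of the curve, which is why the assumed protection standard (which truncates that end) matters so much.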

  10. Global climate sensitivity derived from ~784,000 years of SST data

    NASA Astrophysics Data System (ADS)

    Friedrich, T.; Timmermann, A.; Tigchelaar, M.; Elison Timm, O.; Ganopolski, A.

    2015-12-01

Global mean temperatures will increase in response to future increases in greenhouse gas concentrations. The magnitude of this warming for a given radiative forcing is still a subject of debate. Here we provide estimates of the equilibrium climate sensitivity using paleo-proxy and modeling data from the last eight glacial cycles (~784,000 years). First, two reconstructions of globally averaged surface air temperature (SAT) for the last eight glacial cycles are obtained from two independent sources: one mainly based on a transient model simulation, the other derived from paleo-SST records and SST network/global SAT scaling factors. Both reconstructions exhibit very good agreement in both amplitude and timing of past SAT variations. In the second step, we calculate the radiative forcings associated with greenhouse gas concentrations, dust concentrations, and surface albedo changes for the last 784,000 years. The equilibrium climate sensitivity is then derived from the ratio of the SAT anomalies and the radiative forcing changes. Our results reveal that this estimate of the Charney climate sensitivity is a function of the background climate, with substantially higher values for warmer climates. Warm phases exhibit an equilibrium climate sensitivity of ~3.70 K per CO2 doubling - more than twice the value derived for cold phases (~1.40 K per 2xCO2). We will show that the current CMIP5 ensemble-mean projection of global warming during the 21st century is supported by our estimate of climate sensitivity derived from paleoclimate data of the past 784,000 years.
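The ratio estimate described above reduces to S = (ΔSAT / ΔF) · F_2x. A back-of-envelope version with illustrative glacial-cycle numbers (not the paper's reconstructions):

```python
# Back-of-envelope version of the ratio-based climate sensitivity estimate.
# All inputs are illustrative assumptions, not the reconstructed time series.
F_2X = 3.7   # W m^-2 radiative forcing per CO2 doubling (standard value)

dT = -5.0    # K, illustrative glacial-interglacial SAT anomaly
dF = -9.5    # W m^-2, illustrative combined forcing (GHG + albedo + dust)

sensitivity = dT / dF * F_2X   # K per CO2 doubling
print(round(sensitivity, 2))
```

Applying the same ratio separately to warm-phase and cold-phase anomalies is what yields the state-dependent values (~3.7 K vs. ~1.4 K per doubling) reported in the abstract.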

  11. Global Proteomics Analysis of Protein Lysine Methylation.

    PubMed

    Cao, Xing-Jun; Garcia, Benjamin A

    2016-11-01

    Lysine methylation is a common protein post-translational modification dynamically mediated by protein lysine methyltransferases (PKMTs) and protein lysine demethylases (PKDMs). Beyond histone proteins, lysine methylation on non-histone proteins plays a substantial role in a variety of cellular functions and is closely associated with diseases such as cancer. A large body of evidence indicates that the dysregulation of some PKMTs leads to tumorigenesis via their non-histone substrates. However, studies of most other PKMTs have made slow progress owing to the lack of approaches for extensive screening of lysine methylation sites. Recently, a series of publications performing large-scale analyses of protein lysine methylation has appeared. In this unit, we introduce a protocol for the global analysis of protein lysine methylation in cells by means of immunoaffinity enrichment and mass spectrometry. Copyright © 2016 John Wiley & Sons, Inc.

  12. Global Proteomics Analysis of Protein Lysine Methylation

    PubMed Central

    Cao, Xing-Jun; Garcia, Benjamin A.

    2017-01-01

    Lysine methylation is a common protein post-translational modification dynamically mediated by protein lysine methyltransferases (PKMTs) and demethylases (PKDMs). Beyond histone proteins, lysine methylation on non-histone proteins plays substantial roles in a variety of cellular functions and is closely associated with diseases such as cancer. A large body of evidence indicates that the dysregulation of some PKMTs leads to tumorigenesis via their non-histone substrates. However, studies of most other PKMTs have made slow progress owing to the lack of approaches for extensive screening of lysine methylation sites. Recently, a series of publications performing large-scale analyses of protein lysine methylation has emerged. In this unit, we introduce a protocol for the global analysis of protein lysine methylation in cells by means of immunoaffinity enrichment and mass spectrometry. PMID:27801517

  13. On computational schemes for global-local stress analysis

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1989-01-01

    An overview is given of global-local stress analysis methods, their associated difficulties, and recommendations for future research. The phrase global-local analysis is understood to mean an analysis in which some parts of the domain or structure are singled out, either for accurate determination of stresses and displacements or for more refined analysis than in the remaining parts. The regions of refined analysis are termed local and the remaining parts are called global. Typically, local regions are small in size compared to global regions, while the computational effort in local regions can be larger than in global regions.

  14. Tsunamis: Global Exposure and Local Risk Analysis

    NASA Astrophysics Data System (ADS)

    Harbitz, C. B.; Løvholt, F.; Glimsdal, S.; Horspool, N.; Griffin, J.; Davies, G.; Frauenfelder, R.

    2014-12-01

    The 2004 Indian Ocean tsunami led to a better understanding of the likelihood of tsunami occurrence and of potential tsunami inundation, and the Hyogo Framework for Action (HFA) was one direct result of this event. The United Nations International Strategy for Disaster Risk Reduction (UN-ISDR) adopted the HFA in January 2005 in order to reduce disaster risk. As an instrument to compare the risk due to different natural hazards, an integrated worldwide study was implemented and published in several Global Assessment Reports (GAR) by UN-ISDR. The results of the global earthquake-induced tsunami hazard and exposure analysis for a return period of 500 years are presented. Both deterministic and probabilistic (PTHA) methods are used. The resulting hazard levels for the two methods are compared quantitatively for selected areas. The comparison demonstrates that the analysis is rather rough, as expected for a study aiming at average trends on a country level across the globe. It is shown that populous Asian countries account for the largest absolute number of people living in tsunami-prone areas; more than 50% of the total exposed population lives in Japan. Smaller nations like Macao and the Maldives are among the most exposed by population count. Exposed nuclear power plants are limited to Japan, China, India, Taiwan, and the USA. By contrast, a local tsunami vulnerability and risk analysis combines information on population, building types, infrastructure, inundation, and flow depth for a certain tsunami scenario with a corresponding return period, together with empirical data on tsunami damage and mortality. Results and validation of a GIS tsunami vulnerability and risk assessment model are presented. The GIS model is adapted for optimal use of the data available for each study. Finally, the importance of including landslide sources in the tsunami analysis is also discussed.

  15. Sensitivity analysis for the control of supersonic impinging jet noise

    NASA Astrophysics Data System (ADS)

    Nichols, Joseph W.; Hildebrand, Nathaniel

    2016-11-01

    The dynamics of a supersonic jet that impinges perpendicularly on a flat plate depend on complex interactions between fluid turbulence, shock waves, and acoustics. Strongly organized oscillations emerge, however, and they induce loud, often damaging, tones. We investigate this phenomenon using unstructured, high-fidelity Large Eddy Simulation (LES) and global stability analysis. Our flow configurations precisely match laboratory experiments with nozzle-to-wall distances of 4 and 4.5 jet diameters. We use multi-block shift-and-invert Arnoldi iteration to extract both direct and adjoint global modes that extend upstream into the nozzle. The frequency of the most unstable global mode agrees well with that of the emergent oscillations in the LES. We compute the "wavemaker" associated with this mode by multiplying it by its corresponding adjoint mode. The wavemaker shows that this instability is most sensitive to changes in the base flow slightly downstream of the nozzle exit. By modifying the base flow in this region, we then demonstrate that the flow can indeed be stabilized. This explains the success of microjets as an effective noise control measure when they are positioned around the nozzle lip. Computational resources were provided by the Argonne Leadership Computing Facility.
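    The wavemaker construction, multiplying the direct global mode by its adjoint, can be sketched in one dimension. The Gaussian mode shapes below are hypothetical stand-ins for the LES-derived modes, chosen only to show how the product localizes sensitivity where the two modes overlap:

```python
import math

# Toy 1-D "wavemaker": pointwise product of direct- and adjoint-mode magnitudes.
n = 200
x = [i * 10.0 / (n - 1) for i in range(n)]          # distance downstream (jet diameters)
direct = [math.exp(-(xi - 7.0) ** 2) for xi in x]   # direct mode peaks downstream
adjoint = [math.exp(-(xi - 1.0) ** 2) for xi in x]  # adjoint mode peaks near the nozzle
wavemaker = [d * a for d, a in zip(direct, adjoint)]

peak = x[wavemaker.index(max(wavemaker))]
print(round(peak, 1))  # 4.0: the overlap region, midway between the two peaks
```

    In the actual jet the overlap lands slightly downstream of the nozzle exit, which is why base-flow modifications there (e.g. microjets at the nozzle lip) are effective.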

  16. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
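    As a sketch of this kind of calibration, a simplified Metropolis sampler can recover the posterior of a single washoff-rate parameter for a toy exponential buildup/washoff model. The model form, parameter values, and noise level here are hypothetical, not those of the three tested models:

```python
import math
import random

random.seed(1)

# Toy washoff model: cumulative load(t) = m0 * (1 - exp(-k t)).
def model(k, t, m0=100.0):
    return m0 * (1.0 - math.exp(-k * t))

# Synthetic "observations" from a known rate k_true plus Gaussian noise.
times = [1, 2, 4, 8, 16]
k_true = 0.3
obs = [model(k_true, t) + random.gauss(0.0, 2.0) for t in times]

def log_like(k, sigma=2.0):
    """Gaussian log-likelihood (up to a constant) of the data given k."""
    if k <= 0.0:
        return float("-inf")
    return -0.5 * sum((o - model(k, t)) ** 2 for o, t in zip(obs, times)) / sigma ** 2

def metropolis(n=20000, k0=0.1, step=0.05):
    """Random-walk Metropolis sampler for k; returns the post-burn-in chain."""
    k, ll = k0, log_like(k0)
    samples = []
    for _ in range(n):
        cand = k + random.gauss(0.0, step)
        ll_cand = log_like(cand)
        if math.log(random.random()) < ll_cand - ll:  # accept with prob min(1, ratio)
            k, ll = cand, ll_cand
        samples.append(k)
    return samples[n // 2:]  # discard the first half as burn-in

chain = metropolis()
k_hat = sum(chain) / len(chain)
print(round(k_hat, 2))  # posterior mean should land near k_true = 0.3
```

    The spread of the chain around its mean is the parameter uncertainty the paper reports; wide, strongly correlated posteriors are exactly the symptom of the poorly identified models it describes.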

  17. A Sensitivity Analysis of SOLPS Plasma Detachment

    NASA Astrophysics Data System (ADS)

    Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration

    2016-10-01

    Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.

  18. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  19. A high-throughput and sensitive method to measure global DNA methylation: application in lung cancer.

    PubMed

    Anisowicz, Anthony; Huang, Hui; Braunschweiger, Karen I; Liu, Ziying; Giese, Heidi; Wang, Huajun; Mamaev, Sergey; Olejnik, Jerzy; Massion, Pierre P; Del Mastro, Richard G

    2008-08-03

    Genome-wide changes in DNA methylation are an epigenetic phenomenon that can lead to the development of disease. The study of global DNA methylation has relied on technology that requires both expensive equipment and highly specialized skill sets. We have designed and developed an assay, CpGlobal, which is easy to use and does not require PCR, radioactivity, or expensive equipment. CpGlobal uses methyl-sensitive restriction enzymes, HRP-Neutravidin to detect the biotinylated nucleotides incorporated in an end-fill reaction, and a luminometer to measure the resulting chemiluminescence. The assay shows high accuracy and reproducibility in measuring global DNA methylation. Furthermore, CpGlobal correlates significantly with high-performance capillary electrophoresis (HPCE), a gold-standard technology. We have applied the technology to understand the role of global DNA methylation in the natural history of lung cancer, which is the leading cause of cancer death worldwide. The five-year survival rate is 15%, owing to the lack of clinical symptoms until the disease has progressed to a stage where a cure is unlikely. Through the use of cell lines and paired normal/tumor samples from patients with non-small cell lung cancer (NSCLC), we show that global DNA hypomethylation is highly associated with the progression of the tumor. In addition, the results provide the first indication that the normal part of the lung from a cancer patient has already experienced a loss of methylation relative to a normal individual. By detecting these changes in global DNA methylation, CpGlobal may serve as a barometer for the onset and development of lung cancer.

  20. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    DTIC Science & Technology

    2015-04-01

    ARL-TR-7250, April 2015. US Army Research Laboratory: Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill, Weapons and Materials Research Directorate.

  1. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists with a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
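    Of the global methods listed, Sobol's method decomposes the output variance into contributions from each input; the first-order index S_i = Var(E[Y | X_i]) / Var(Y) can be estimated with a "pick-freeze" Monte Carlo scheme. A minimal sketch on a toy two-input model (an illustration of the technique, not an SBML-SAT API example):

```python
import random

random.seed(0)

# First-order Sobol index S_i = Var(E[Y | X_i]) / Var(Y), estimated by the
# "pick-freeze" trick: pair runs that share X_i but redraw all other inputs.
def model(x1, x2):
    return x1 + 0.2 * x2  # toy model; analytically S_1 ~ 0.96, S_2 ~ 0.04

def sobol_first_order(i, n=100000):
    ys, prods = [], []
    for _ in range(n):
        a = [random.random(), random.random()]  # sample A
        b = [random.random(), random.random()]  # independent sample B
        ab = list(b)
        ab[i] = a[i]                            # freeze coordinate i from A
        ya, yab = model(*a), model(*ab)
        ys.append(ya)
        prods.append(ya * yab)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / n
    cov = sum(prods) / n - mean ** 2            # = Var(E[Y | X_i]) in expectation
    return cov / var

s1 = sobol_first_order(0)
s2 = sobol_first_order(1)
print(round(s1, 2), round(s2, 2))  # s1 dominates, close to the analytic 0.96
```

    Indices near 1 mark inputs whose variation alone explains most of the output variance; the gap between the sum of first-order indices and 1 measures interaction effects.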

  2. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  3. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) can rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation offered by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work, we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focus on the ground deformation network topology and the noise in the measurements. The proposed analysis can be used for better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  4. Hydrologic sensitivities of the Sacramento-San Joaquin River basin, California, to global warming

    SciTech Connect

    Lettenmaier, D.P. ); Gan, Thian Yew )

    1990-01-01

    The hydrologic sensitivities of four medium-sized mountainous catchments in the Sacramento and San Joaquin River basins to long-term global warming were analyzed. The hydrologic responses of these catchments, all of which are dominated by spring snowmelt runoff, were simulated by coupling the snowmelt and soil moisture accounting models of the U.S. National Weather Service River Forecast System. In all four catchments the global warming pattern, which was indexed to CO2-doubling scenarios simulated by three (global) general circulation models, produced a major seasonal shift in the snow accumulation pattern. Under the alternative climate scenarios more winter precipitation fell as rain instead of snow, and winter runoff increased while spring snowmelt runoff decreased. In addition, large increases in the annual flood maxima were simulated, primarily due to an increase in rain-on-snow events, with the time of occurrence of many large floods shifting from spring to winter.

  5. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven E.; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2008-01-01

    As a follow-up to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties in a global 3-D chemical transport model (the GEOS-Chem CTM). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system, which have significantly different cloud optical depths (CODs) and vertical distributions. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions, while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of the column CODs. Model simulations with each of the three cloud distributions all show that the change in the global burden of O3 due to clouds is less than 5%. Model perturbation experiments with GEOS-3, in which the magnitudes of the 3-D CODs are progressively varied by -100% to 100%, predict only modest changes (<5%) in global mean OH concentrations. J(O1D), J(NO2), and OH concentrations show the strongest sensitivity at small CODs and become insensitive at large CODs due to saturation effects. Caution should be exercised not to use a cloud single-scattering albedo lower than about 0.999 in photochemical models, in order to be consistent with the current knowledge of cloud absorption at UV wavelengths. Our results have important implications for model intercomparisons and for the climate feedback on tropospheric photochemistry.

  6. Global-scale combustion sources of organic aerosols: sensitivity to formation and removal mechanisms

    NASA Astrophysics Data System (ADS)

    Tsimpidi, Alexandra P.; Karydis, Vlassis A.; Pandis, Spyros N.; Lelieveld, Jos

    2017-06-01

    Organic compounds from combustion sources such as biomass burning and fossil fuel use are major contributors to the global atmospheric load of aerosols. We analyzed the sensitivity of model-predicted global-scale organic aerosols (OA) to parameters that control primary emissions, photochemical aging, and the scavenging efficiency of organic vapors. We used a computationally efficient module for the description of OA composition and evolution in the atmosphere (ORACLE) of the global chemistry-climate model EMAC (ECHAM/MESSy Atmospheric Chemistry). A global dataset of aerosol mass spectrometer (AMS) measurements was used to evaluate simulated primary (POA) and secondary (SOA) OA concentrations. Model results are sensitive to the emission rates of intermediate-volatility organic compounds (IVOCs) and POA. Assuming enhanced reactivity of semi-volatile organic compounds (SVOCs) and IVOCs with OH substantially improved the model performance for SOA. The use of a hybrid approach for the parameterization of the aging of IVOCs had a small effect on predicted SOA levels. The model performance improved by assuming that freshly emitted organic compounds are relatively hydrophobic and become increasingly hygroscopic due to oxidation.

  7. Sensitivity Studies for Space-Based Global Measurements of Atmospheric Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Mao, Jian-Ping; Kawa, S. Randolph; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    Carbon dioxide (CO2) is well known as the primary forcing agent of global warming. Although the climate forcing due to CO2 is well known, the sources and sinks of CO2 are not well understood. Currently the lack of global atmospheric CO2 observations limits our ability to diagnose the global carbon budget (e.g., finding the so-called "missing sink") and thus limits our ability to understand past climate change and predict future climate response. Space-based techniques are being developed to make high-resolution and high-precision global column CO2 measurements. One of the proposed techniques utilizes the passive remote sensing of Earth's reflected solar radiation at the weaker vibration-rotation band of CO2 in the near infrared (approx. 1.57 micron). We use a line-by-line radiative transfer model to explore the potential of this method. Results of sensitivity studies for CO2 concentration variation and geophysical conditions (i.e., atmospheric temperature, surface reflectivity, solar zenith angle, aerosol, and cirrus cloud) will be presented. We will also present sensitivity results for an O2 A-band (approx. 0.76 micron) sensor that will be needed along with CO2 to make surface pressure and cloud height measurements.

  9. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  11. Adjoint sensitivity structures of typhoon DIANMU (2010) based on a global model

    NASA Astrophysics Data System (ADS)

    Kim, S.; Kim, H.; Joo, S.; Shin, H.; Won, D.

    2010-12-01

    Department of Atmospheric Sciences, Yonsei University, Seoul, Korea, and Korea Meteorological Administration. Submitted to the AGU 2010 Fall Meeting, 13-17 December 2010, San Francisco, CA. The path and intensity forecasts of typhoons (TYs) depend on the initial condition of the TY itself and on the surrounding background fields. Because TYs evolve over the ocean, few observational data are available. Additional observations over the western North Pacific are therefore necessary to obtain a proper initial condition for TYs. Because observing facilities are limited, identifying the regions that are sensitive for a specific forecast aspect in the forecast region of interest is very helpful in deciding where to deploy additional observations. Observations deployed in those sensitive regions are called adaptive observations, and the strategies used to select the sensitive regions are called adaptive observation strategies. Among these strategies, the adjoint sensitivity represents the gradient of some forecast aspect with respect to the control variables of the model (i.e., initial conditions, boundary conditions, and parameters) (Errico 1997). According to a recent study of the adjoint sensitivity of a TY based on a regional model, the sensitive regions are located horizontally in the right half circle of the TY and vertically in the lower and upper troposphere near the TY (Kim and Jung 2006). Because adjoint sensitivity based on a regional model is calculated in a relatively small domain, the resulting sensitivity structures may be affected by the size and location of the domain. In this study, the adjoint sensitivity distributions for TY DIANMU (2010) based on a global model are investigated.
The adjoint sensitivity based on a global model is calculated by using the perturbation forecast (PF) and adjoint PF model of the Unified Model at

  12. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for the prevention of aeroelastic instability (that is, ensuring that the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A new strip-theory formulation is developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN, so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model incorporating first-order shear deformation theory is then examined so that it can be used to model thick wings, where shear deformations are important. The sensitivity of the natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  13. Analysis of Globalization, the Planet and Education

    ERIC Educational Resources Information Center

    Tsegay, Samson Maekele

    2016-01-01

    Through the framework of theories analyzing globalization and education, this paper focuses on the intersection among globalization, the environment, and education. The paper critically analyzes how globalization could contribute to environmental devastation and explores the role of pedagogies that could foster planetary citizenship by exposing…

  14. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
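The propagation scheme described here (sampling empirical residual distributions through a chain of models) can be sketched as follows; the residual distributions, base energy value, and multiplicative error structure are hypothetical stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical residuals (model vs. measurement) for two modeling steps,
# e.g. POA irradiance and effective irradiance. In practice these would
# be empirical residual samples collected from validation data.
poa_residuals = rng.normal(0.0, 0.02, size=500)   # relative errors
eff_residuals = rng.normal(0.0, 0.015, size=500)

def predicted_daily_energy(base_energy, n_draws=10_000, rng=rng):
    """Propagate uncertainty by resampling empirical residual distributions
    through the model chain (here: two multiplicative error steps)."""
    poa = rng.choice(poa_residuals, size=n_draws)
    eff = rng.choice(eff_residuals, size=n_draws)
    return base_energy * (1.0 + poa) * (1.0 + eff)

samples = predicted_daily_energy(5.0)  # hypothetical base estimate, kWh/day
print(samples.mean(), samples.std())
```

The spread of `samples` is the empirical output distribution; its standard deviation relative to the mean plays the role of the ~1% daily-energy uncertainty reported above.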

  15. Sensitivity of Water Scarcity Events to ENSO-Driven Climate Variability at the Global Scale

    NASA Technical Reports Server (NTRS)

    Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2015-01-01

    Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (>28.1%); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (CTA: consumption-to-availability ratio) and 41.1% (WCI: water crowding index) of the global population, whilst only 11.4% (CTA) and 15.9% (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and scarcity

  16. Sensitivity of water scarcity events to ENSO-driven climate variability at the global scale

    NASA Astrophysics Data System (ADS)

    Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2015-10-01

    Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4 % of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6 % (CTA: consumption-to-availability ratio) and 41.1 % (WCI: water crowding index) of the global population, whilst only 11.4 % (CTA) and 15.9 % (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and

  18. Global analysis of the immune response

    NASA Astrophysics Data System (ADS)

    Ribeiro, Leonardo C.; Dickman, Ronald; Bernardes, Américo T.

    2008-10-01

    The immune system may be seen as a complex system, characterized using tools developed in the study of such systems, for example, surface roughness and its associated Hurst exponent. We analyze densitometric (Panama blot) profiles of immune reactivity, to classify individuals into groups with similar roughness statistics. We focus on a population of individuals living in a region in which malaria is endemic, as well as a control group from a disease-free region. Our analysis groups individuals according to the presence, or absence, of malaria symptoms and the number of malaria manifestations. Applied to the Panama blot data, our method proves more effective at discriminating between groups than principal-components analysis or super-paramagnetic clustering. Our findings provide evidence that some phenomena observed in the immune system can only be understood from a global point of view. We observe similar tendencies between experimental immune profiles and those of artificial profiles, obtained from an immune network model. The statistical entropy of the experimental profiles is found to exhibit variations similar to those observed in the Hurst exponent.
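Rescaled-range (R/S) analysis is one standard way to estimate a Hurst exponent from a 1-D profile such as a densitometric trace. A minimal sketch (a generic estimator, not necessarily the authors' exact procedure):

```python
import numpy as np

def hurst_rs(profile, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D profile via rescaled-range
    (R/S) analysis: the slope of log(R/S) versus log(window size)."""
    x = np.asarray(profile, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for i in range(n // size):           # non-overlapping windows
            w = x[i * size:(i + 1) * size]
            z = np.cumsum(w - w.mean())      # cumulative deviation
            r = z.max() - z.min()            # range of the deviation
            s = w.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
print(hurst_rs(rng.normal(size=4096)))  # white noise: H near 0.5
```

H > 0.5 indicates persistent (smooth, correlated) profiles and H < 0.5 anti-persistent ones, which is what makes it a useful roughness statistic for grouping individuals.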

  19. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
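The workflow described (fit a Weibull time-to-failure model, Monte Carlo a population of units, then vary the wear-out/shape parameter and watch the probability of sufficiency) might look like the following sketch; the unit counts, characteristic life, and mission length are illustrative, not ISS data:

```python
import numpy as np

rng = np.random.default_rng(0)

def prob_of_sufficiency(shape, scale_hours, n_spares, mission_hours,
                        n_units=10, n_sims=20_000):
    """Monte Carlo estimate of the probability that n_spares covers all
    failures of a population of identical units over a mission, with
    time-to-failure ~ Weibull(shape, scale). shape > 1 models wear-out."""
    failures_per_sim = np.zeros(n_sims, dtype=int)
    for _ in range(n_units):
        # Weibull draw: scale * standard_weibull(shape)
        t = scale_hours * rng.weibull(shape, size=n_sims)
        failures_per_sim += (t < mission_hours)
    return np.mean(failures_per_sim <= n_spares)

# Sweeping the wear-out (shape) parameter from the exponential case (1.0)
# to strong wear-out shows how the sufficiency probability shifts.
for k in (1.0, 2.0, 4.0):
    print(k, prob_of_sufficiency(k, scale_hours=50_000, n_spares=3,
                                 mission_hours=40_000))
```

Raising the shape parameter concentrates failures around the characteristic life, which is exactly the "multiple systems wearing out simultaneously" scenario the analysis was probing.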

  20. Fast Computation of Global Sensitivity Kernel Database Based on Spectral-Element Simulations

    NASA Astrophysics Data System (ADS)

    Sales de Andrade, Elliott; Liu, Qinya

    2017-07-01

    Finite-frequency sensitivity kernels, a theoretical improvement over simple infinitely thin ray paths, have been used extensively in recent global and regional tomographic inversions. These sensitivity kernels provide more consistent and accurate interpretation of a growing number of broadband measurements, and are critical in mapping 3D heterogeneous structures of the mantle. Based on the Born approximation, the calculation of sensitivity kernels requires the interaction of the forward wavefield and an adjoint wavefield generated by placing adjoint sources at stations. Both fields can be obtained accurately through numerical simulations of seismic wave propagation, which is particularly important for kernels of phases that cannot be sufficiently described by ray theory (such as core-diffracted waves). However, the total number of forward and adjoint numerical simulations required to build kernels for individual source-receiver pairs and to form the design matrix for classical tomography is computationally unaffordable. In this paper, we take advantage of the symmetry of 1D reference models, perform moment tensor forward and point force adjoint spectral-element simulations, and save six-component strain fields only on the equatorial plane based on the open-source spectral-element simulation package, SPECFEM3D_GLOBE. Sensitivity kernels for seismic phases at any epicentral distance can be efficiently computed by combining forward and adjoint strain wavefields from the saved strain field database, which significantly reduces both the number of simulations and the amount of storage required for global tomographic problems. Based on this technique, we compute traveltime, amplitude and/or boundary kernels of isotropic and radially anisotropic elastic parameters for various (P, S, P_{diff}, S_{diff}, depth, surface-reflected, surface wave, S 660 S boundary, etc.) phases for the 1D ak135 model, in preparation for future global tomographic inversions.

  1. Sensitivity of the global water cycle to the water-holding capacity of land

    SciTech Connect

    Milly, P.C.D.; Dunne, K.A.

    1994-04-01

    The sensitivity of the global water cycle to the water-holding capacity of the plant-root zone of continental soils is estimated by simulations using a mathematical model of the general circulation of the atmosphere, with prescribed ocean surface temperatures and prescribed cloud. With an increase of the globally constant storage capacity, evaporation from the continents rises and runoff falls, because a high storage capacity enhances the ability of the soil to store water from periods of excess for later evaporation during periods of shortage. In addition, atmospheric feedbacks associated with higher precipitation and lower potential evaporation drive further changes in evaporation and runoff. Most changes in evaporation and runoff occur in the tropics and the northern middle-latitude rain belts. Global evaporation from land increases by 7 cm for each doubling of storage capacity. Sensitivity is negligible for capacity above 60 cm. In the tropics and in the extratropics, increased continental evaporation is split between increased continental precipitation and decreased convergence of atmospheric water vapor from ocean to land. In the tropics, this partitioning is strongly affected by induced circulation changes, which are themselves forced by changes in latent heating. In the northern middle and high latitudes, the increased continental evaporation moistens the atmosphere. This change in humidity of the atmosphere is greater above the continents than above the oceans, and the resulting reduction in the sea-land humidity gradient causes a decreased onshore transport of water vapor by transient eddies. Results here may have implications for problems in global hydrology and climate dynamics, including effects of water resource development on global precipitation, climatic control of plant rooting characteristics, climatic effects of tropical deforestation, and climate-model errors. 21 refs., 13 figs., 21 tabs.

  2. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
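The amplification of eigenvalue shifts by non-normality can be demonstrated on a toy matrix; this 2x2 example is illustrative only, not a discretized stability operator:

```python
import numpy as np

# A small non-normal matrix (upper triangular with strong off-diagonal
# coupling). Its eigenvalues are simply the diagonal entries -1 and -2,
# but they are ill-conditioned because left and right eigenvectors are
# nearly orthogonal to each other.
A = np.array([[-1.0, 100.0],
              [ 0.0,  -2.0]])

rng = np.random.default_rng(0)
eigs = np.linalg.eigvals(A)

# Perturb with random matrices of tiny 2-norm and record how far the
# eigenvalues move. For a normal matrix the shift is bounded by eps;
# non-normality can amplify it by the eigenvalue condition number.
eps = 1e-6
max_shift = 0.0
for _ in range(200):
    E = rng.normal(size=(2, 2))
    E *= eps / np.linalg.norm(E, 2)
    pe = np.linalg.eigvals(A + E)
    # bottleneck matching of perturbed to unperturbed eigenvalues
    shift = min(max(abs(pe[0] - eigs[0]), abs(pe[1] - eigs[1])),
                max(abs(pe[0] - eigs[1]), abs(pe[1] - eigs[0])))
    max_shift = max(max_shift, shift)

print(f"perturbation norm {eps:g}, max eigenvalue shift {max_shift:.2e}")
```

The observed shift far exceeds the perturbation norm, which is the hallmark of a large epsilon-pseudospectrum and of the operator sensitivity discussed above.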

  3. Limits to global and Australian temperature change this century based on expert judgment of climate sensitivity

    NASA Astrophysics Data System (ADS)

    Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.

    2016-07-01

    The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcome associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that exceeding a warming of 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions under most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk-management approach can complement the focus on the likely range of projections. They can also highlight the scale of the potential reduction in the range of projections, should tight constraints on climate sensitivity be established by future research.
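The scaling idea, rescaling each model's projected warming by the ratio of a target equilibrium climate sensitivity (ECS) to that model's own ECS, can be sketched as below; the model names, warming values, and ECS values are invented for illustration:

```python
# Hedged sketch of sensitivity scaling: each model's end-of-century
# warming is linearly rescaled by target_ecs / model_ecs. All numbers
# here are hypothetical, not taken from the paper or CMIP5.
model_warming = {"model_a": 3.1, "model_b": 4.2, "model_c": 2.4}  # degC
model_ecs     = {"model_a": 3.0, "model_b": 4.5, "model_c": 2.1}  # degC

def scaled_warming(target_ecs):
    """Rescale projected warming to a target ECS (linear approximation)."""
    return {m: dT * target_ecs / model_ecs[m]
            for m, dT in model_warming.items()}

low  = scaled_warming(1.5)   # assumed expert-judgment lower bound on ECS
high = scaled_warming(6.0)   # assumed upper bound
print(min(low.values()), max(high.values()))
```

The spread between `low` and `high` is the "expanded range" of the abstract: wider than the raw multi-model spread because the expert-judgment ECS bounds lie outside the modeled range.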

  4. Limits to global and Australian temperature change this century based on expert judgment of climate sensitivity

    NASA Astrophysics Data System (ADS)

    Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.

    2017-05-01

    The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcome associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that exceeding a warming of 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions under most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk-management approach can complement the focus on the likely range of projections. They can also highlight the scale of the potential reduction in the range of projections, should tight constraints on climate sensitivity be established by future research.

  5. Effect of ice-albedo feedback on global sensitivity in a one-dimensional radiative-convective climate model

    NASA Technical Reports Server (NTRS)

    Wang, W.-C.; Stone, P. H.

    1980-01-01

    The feedback between the ice albedo and temperature is included in a one-dimensional radiative-convective climate model. The effect of this feedback on global sensitivity to changes in solar constant is studied for the current climate conditions. This ice-albedo feedback amplifies global sensitivity by 26 and 39%, respectively, for assumptions of fixed cloud altitude and fixed cloud temperature. The global sensitivity is not affected significantly if the latitudinal variations of mean solar zenith angle and cloud cover are included in the global model. The differences in global sensitivity between one-dimensional radiative-convective models and energy balance models are examined. It is shown that the models are in close agreement when the same feedback mechanisms are included. The one-dimensional radiative-convective model with ice-albedo feedback included is used to compute the equilibrium ice line as a function of solar constant.
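The feedback amplification can be illustrated with a zero-dimensional energy-balance sketch (not the authors' radiative-convective model); the sigmoidal albedo ramp and effective emissivity below are assumed values chosen only to make the mechanism visible:

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def albedo(T):
    # Crude temperature-dependent planetary albedo: more ice (higher
    # albedo) when colder; a smooth ramp between 0.3 and 0.6 (assumed).
    return 0.3 + 0.3 / (1.0 + np.exp((T - 263.0) / 8.0))

def equilibrium_T(S, alpha=None, emissivity=0.61):
    """Solve S/4 * (1 - albedo) = emissivity * sigma * T^4 by fixed-point
    iteration; alpha=None activates the interactive ice-albedo feedback."""
    T = 288.0
    for _ in range(200):
        a = albedo(T) if alpha is None else alpha
        T = ((S / 4.0) * (1.0 - a) / (emissivity * SIGMA)) ** 0.25
    return T

S0 = 1361.0
T0 = equilibrium_T(S0)
# Sensitivity to a 1% solar-constant increase, with and without feedback.
dT_feedback = equilibrium_T(1.01 * S0) - T0
dT_fixed = (equilibrium_T(1.01 * S0, alpha=albedo(T0))
            - equilibrium_T(S0, alpha=albedo(T0)))
print("amplification factor:", dT_feedback / dT_fixed)
```

The ratio printed is the analogue of the 26-39% amplification quoted in the abstract: warming reduces ice albedo, which absorbs more sunlight and warms further.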

  7. Sensitivity of photolysis frequencies and key tropospheric oxidants in a global model to cloud vertical distributions and optical properties

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-05-01

    Clouds directly affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of these direct radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global three-dimensional (3-D) chemical transport model. The model was driven with a series of meteorological archives (GEOS-1 in support of the Stratospheric Tracers of Atmospheric Transport mission, or GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System (GEOS) data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model-calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3, and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of ˜15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments

  8. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-01-01

    Clouds affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model-calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3 and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of approx. 15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitude of 3-D CODs are progressively varied from -100% to 100%, predict only modest

  9. Global rotation has high sensitivity in ACL lesions within stress MRI.

    PubMed

    Espregueira-Mendes, João; Andrade, Renato; Leal, Ana; Pereira, Hélder; Skaf, Abdala; Rodrigues-Gomes, Sérgio; Oliveira, J Miguel; Reis, Rui L; Pereira, Rogério

    2016-08-16

    This study aims to objectively compare side-to-side differences of P-A laxity alone and coupled with rotatory laxity within magnetic resonance imaging, in patients with total anterior cruciate ligament (ACL) rupture. This prospective study enrolled sixty-one patients with signs and symptoms of unilateral total anterior cruciate ligament rupture, who were referred to magnetic resonance evaluation with simultaneous instrumented laxity measurements. Sixteen of those patients were randomly selected to also have the contralateral healthy knee laxity profile tested. Images were acquired for the medial and lateral tibial plateaus without pressure, with postero-anterior translation, and with postero-anterior translation coupled with maximum internal and external rotation, respectively. All parameters measured were significantly different between healthy and injured knees (P < 0.05), with the exception of the lateral plateau without stress. The difference between injured and healthy knees for medial and lateral tibial plateau anterior displacement (P < 0.05) and rotation (P < 0.001) was statistically significant. A significant correlation was found between the global rotation of the lateral tibial plateau (lateral plateau with internal + external rotation) and pivot-shift, and between the anterior global translation of both tibial plateaus (medial + lateral tibial plateau) and Lachman. The anterior global translation of both tibial plateaus was the most specific test, with a cut-off point of 11.1 mm (93.8 %), and the global rotation of the lateral tibial plateau was the most sensitive test, with a corresponding cut-off point of 15.1 mm (92.9 %). Objective laxity quantification of ACL-injured knees showed increased sagittal laxity, and increased laxity simultaneously in the sagittal and transversal planes, when compared to the healthy contralateral knee. Moreover, when measuring instability from anterior cruciate ligament ruptures, the anterior global translation of both tibial plateaus

  10. Sensitivity of the global submarine hydrate inventory to scenarios of future climate change

    NASA Astrophysics Data System (ADS)

    Hunter, S. J.; Goldobin, D. S.; Haywood, A. M.; Ridgwell, A.; Rees, J. G.

    2013-04-01

    The global submarine inventory of methane hydrate is thought to be considerable. The stability of marine hydrates is sensitive to changes in temperature and pressure and once destabilised, hydrates release methane into sediments and ocean and potentially into the atmosphere, creating a positive feedback with climate change. Here we present results from a multi-model study investigating how the methane hydrate inventory dynamically responds to different scenarios of future climate and sea level change. The results indicate that a warming-induced reduction is dominant even when assuming rather extreme rates of sea level rise (up to 20 mm yr-1) under moderate warming scenarios (RCP 4.5). Over the next century modelled hydrate dissociation is focussed in the top ~100 m of Arctic and Subarctic sediments beneath <500 m water depth. Predicted dissociation rates are particularly sensitive to the modelled vertical hydrate distribution within sediments. Under the worst case business-as-usual scenario (RCP 8.5), upper estimates of resulting global sea-floor methane fluxes could exceed estimates of natural global fluxes by 2100 (>30-50 Tg CH4 yr-1), although subsequent oxidation in the water column could reduce peak atmospheric release rates to 0.75-1.4 Tg CH4 yr-1.

  11. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along

  12. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along

  13. Global resilience analysis of water distribution systems.

    PubMed

    Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David

    2016-12-01

    Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies.

  14. Sensitivity of global tropical climate to land surface processes: Mean state and interannual variability

    SciTech Connect

    Ma, Hsi-Yen; Xiao, Heng; Mechoso, C. R.; Xue, Yongkang

    2013-03-01

    This study examines the sensitivity of global tropical climate to land surface processes (LSP) using an atmospheric general circulation model both uncoupled (with prescribed SSTs) and coupled to an oceanic general circulation model. The emphasis is on the interactive soil moisture and vegetation biophysical processes, which have a first-order influence on the surface energy and water budgets. The sensitivity to those processes is represented by the differences between model simulations, in which two land surface schemes are considered: 1) a simple land scheme that specifies surface albedo and soil moisture availability, and 2) the Simplified Simple Biosphere Model (SSiB), which allows for consideration of interactive soil moisture and vegetation biophysical processes. Observational datasets are also employed to assess the reality of model-revealed sensitivity. The mean state sensitivity to different LSP is stronger in the coupled mode, especially in the tropical Pacific. Furthermore, the seasonal cycle of SSTs in the equatorial Pacific, as well as ENSO frequency, amplitude, and locking to the seasonal cycle of SSTs, are significantly modified and more realistic with SSiB. This outstanding sensitivity of the atmosphere-ocean system develops through changes in the intensity of equatorial Pacific trades modified by convection over land. Our results further demonstrate that the direct impact of land-atmosphere interactions on the tropical climate is modified by feedbacks associated with perturbed oceanic conditions ("indirect effect" of LSP). The magnitude of this indirect effect is strong enough to suggest that comprehensive studies on the importance of LSP on the global climate have to be made in a system that allows for atmosphere-ocean interactions.

  15. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    PubMed

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.
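The constraint-based backbone that TMSA builds on (FBA) reduces to a linear program: maximize an objective flux subject to steady-state stoichiometry and flux bounds. A minimal sketch with an invented three-reaction toy network (the network and all numbers are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of constraint-based flux estimation (FBA-style).
# Invented network for illustration: v1 imports A, v2 converts A -> B,
# v3 exports B. Steady state requires S v = 0.
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A: made by v1, consumed by v2
    [0.0,  1.0, -1.0],   # metabolite B: made by v2, consumed by v3
])

# Maximize the export flux v3 (linprog minimizes, so use -v3).
res = linprog(c=[0.0, 0.0, -1.0],
              A_eq=S, b_eq=np.zeros(2),
              bounds=[(0.0, 10.0)] * 3, method="highs")

print(res.x)   # steady state forces v1 = v2 = v3, pinned at the bound 10
```

TFA would add thermodynamic (directionality) constraints on top of this feasible set, which is where the metabolite-concentration measurements ranked by TMSA come in.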

  16. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
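At the variance-based (Sobol-type) end of the spectrum described above, a first-order index can be estimated by a pick-and-freeze Monte Carlo scheme. A sketch on a toy linear model (the model and all values are illustrative, not from the presentation):

```python
import numpy as np

# Illustrative sketch: pick-and-freeze estimate of the first-order
# (Sobol-type) index of x1 for y = a*x1 + b*x2 with independent
# uniform(-1, 1) inputs. Analytically S1(x1) = a^2 / (a^2 + b^2) = 0.8.
rng = np.random.default_rng(0)
n = 200_000
a, b = 2.0, 1.0

x1 = rng.uniform(-1.0, 1.0, n)
x2 = rng.uniform(-1.0, 1.0, n)
x2_prime = rng.uniform(-1.0, 1.0, n)   # fresh draw of x2 only

y = a * x1 + b * x2                    # model at (x1, x2)
y_frozen = a * x1 + b * x2_prime       # x1 "frozen", x2 resampled

# Cov(y, y_frozen) isolates the output variance attributable to x1 alone.
s1 = np.cov(y, y_frozen)[0, 1] / np.var(y, ddof=1)
print(round(s1, 2))
```

A one-factor-at-a-time or local-derivative analysis of the same model would rank the factors by |a| and |b| rather than by variance shares; for nonlinear, interacting models the two rankings can disagree, which is the point the abstract makes.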

  17. Sensitivity of Mid Holocene Global Climate to Changes in Vegetation Reconstructed From the Geologic Record

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, N. S.; Sloan, L. C.

    2001-12-01

    The influence of land surface changes upon global and regional climate has been shown both for anthropogenic and non-anthropogenic changes in land surface distribution. Because validation of global climate models (GCMs) is dependent upon the use of accurate boundary conditions, and because changes in land surface distribution have been shown to have effects on climate in areas remote from those changes, we have tested the sensitivity of a GCM to a global Mid Holocene vegetation distribution reconstructed from the fossil record, a first for a 6 ka GCM run. Large areas of the globe exhibit statistically significant seasonal warming of 2 to 4 °C, with peak warming of 10 °C over the Middle East in June-July-August (JJA). The patterns of maximum warming over both Northern Asia and the Middle East strongly coincide with the patterns of maximum decrease in albedo in all seasons. Likewise, cooling of up to 4 °C over Northern Africa associated with the expansion of savanna and broadleaf evergreen forest also coincides with increases in surface heat flux of up to 35 W/m² in March-April-May (MAM) and 60 W/m² in JJA. At both the regional and global scale, the magnitude of vegetation forcing is equal to that of 6 ka orbital forcing, emphasizing the importance of accurate land surface distribution for both model validation and future climate prediction.

  18. A Global Spectral Study of Stellar-Mass Black Holes with Unprecedented Sensitivity

    NASA Astrophysics Data System (ADS)

    García, Javier

    There are two well established populations of black holes: (i) stellar-mass black holes with masses in the range 5 to 30 solar masses, many millions of which are present in each galaxy in the universe, and (ii) supermassive black holes with masses in the range millions to billions of solar masses, which reside in the nucleus of most galaxies. Supermassive black holes play a leading role in shaping galaxies and are central to cosmology. However, they are hard to study because they are dim and they scarcely vary on a human timescale. Luckily, their variability and full range of behavior can be very effectively studied by observing their stellar-mass cousins, which display in miniature the full repertoire of a black hole over the course of a single year. The archive of data collected by NASA's Rossi X-ray Timing Explorer (RXTE) during its 16-year mission is of first importance for the study of stellar-mass black holes. While our ultimate goal is a complete spectral analysis of all the stellar-mass black hole data in the RXTE archive, the goal of this proposal is the global study of six of these black holes. The two key methodologies we bring to the study are: (1) Our recently developed calibration tool that increases the sensitivity of RXTE's detector by up to an order of magnitude; and (2) the leading X-ray spectral "reflection" models that are arguably the most effective means currently available for probing the effects of strong gravity near the event horizon of a black hole. For each of the six black holes, we will fit our models to all the archived spectral data and determine several key parameters describing the black hole and the 10-million-degree gas that surrounds it. Of special interest will be our measurement of the spin (or rate of rotation) of each black hole, which can be as high as tens of thousands of RPM. Profoundly, all the properties of an astronomical black hole are completely defined by specifying its spin and its mass. The main goal of this

  19. Global stability analysis of electrified jets

    NASA Astrophysics Data System (ADS)

    Rivero-Rodriguez, Javier; Pérez-Saborid, Miguel

    2014-11-01

    Electrospinning is a common process used to produce micro- and nano-scale polymeric fibers. In this technique, the whipping mode of a very thin electrified jet generated in an electrospray device is enhanced in order to increase its elongation. In this work, we use a theoretical Eulerian model that describes the kinematics and dynamics of the midline of the jet, its radius and convective velocity. The model equations result from balances of mass, linear and angular momentum applied to any differential slice of the jet together with constitutive laws for viscous forces and moments, as well as appropriate expressions for capillary and electrical forces. As a first step towards computing the complete nonlinear, transient dynamics of the electrified jet, we have performed a global stability analysis of the aforementioned equations and compared the results with experimental data obtained by Guillaume et al. [2011] and Guerrero-Millán et al. [2014]. The support of the Ministry of Science and Innovation of Spain (Project DPI 2010-20450-C03-02) is acknowledged.

  20. Global analysis of photosynthesis transcriptional regulatory networks.

    PubMed

    Imam, Saheed; Noguera, Daniel R; Donohue, Timothy J

    2014-12-01

    Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis.

  1. Determinants for global cargo analysis tools

    NASA Astrophysics Data System (ADS)

    Wilmoth, M.; Kay, W.; Sessions, C.; Hancock, M.

    2007-04-01

    The purpose of Global TRADER (GT) is not only to gather and query supply-chain transactional data for facts but also to analyze those data for hidden knowledge that supports useful and meaningful pattern prediction. The application of advanced analytics provides benefits beyond simple information retrieval from GT, including computer-aided detection of useful patterns and associations. Knowledge discovery, offering a breadth and depth of analysis unattainable by manual processes, involves three components: repository structures, analytical engines, and user tools and reports. For a large and complex domain like supply-chains, there are many stages to developing the most advanced analytic capabilities; however, significant benefits accrue as components are incrementally added. These benefits include detecting emerging patterns; identifying new patterns; fusing data; creating models that can learn and predict behavior; and identifying new features for future tools. The GT Analyst Toolset was designed to overcome a variety of constraints, including lack of third party data, partial data loads, non-cleansed data (non-disambiguation of parties, misspellings, transpositions, etc.), and varying levels of analyst experience and expertise. The end result was a set of analytical tools that are flexible, extensible, tunable, and able to support a wide range of analyst demands.

  2. Stability of fundamental couplings: A global analysis

    NASA Astrophysics Data System (ADS)

    Martins, C. J. A. P.; Pinho, A. M. M.

    2017-01-01

    Astrophysical tests of the stability of fundamental couplings are becoming an increasingly important probe of new physics. Motivated by the recent availability of new and stronger constraints, we update previous works testing the consistency of measurements of the fine-structure constant α and the proton-to-electron mass ratio μ =mp/me (mostly obtained in the optical/ultraviolet) with combined measurements of α , μ and the proton gyromagnetic ratio gp (mostly in the radio band). We carry out a global analysis of all available data, including the 293 archival measurements of Webb et al. and 66 more recent dedicated measurements, and constraining both time and spatial variations. While nominally the full data sets show a slight statistical preference for variations of α and μ (at up to two standard deviations), we also find several inconsistencies between different subsets, likely due to hidden systematics and implying that these statistical preferences need to be taken with caution. The statistical evidence for a spatial dipole in the values of α is found at the 2.3 sigma level. Forthcoming studies with facilities such as ALMA and ESPRESSO should clarify these issues.
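The elementary step in such a global analysis, combining heterogeneous measurements into a joint constraint, is inverse-variance weighting. A sketch with synthetic Δα/α values (not the Webb et al. data):

```python
import numpy as np

# Illustrative only: inverse-variance combination of independent
# Delta-alpha/alpha measurements. Values and errors are synthetic.
dalpha = np.array([-0.5, 0.3, -0.2, 0.1]) * 1e-5    # measured Δα/α
sigma = np.array([0.3, 0.4, 0.25, 0.5]) * 1e-5      # 1-sigma errors

w = 1.0 / sigma**2
mean = np.sum(w * dalpha) / np.sum(w)    # weighted best estimate
err = 1.0 / np.sqrt(np.sum(w))           # uncertainty of the combination

# The combined error is smaller than the best single measurement,
# which is how adding new data sets strengthens the constraint.
print(mean, err)
```

Real analyses go further (spatial-dipole fits, subset consistency checks), but they all rest on this weighting, which is also why hidden systematics, i.e. underestimated σ values, can bias the result.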

  3. The Hydrological Sensitivity to Global Warming and Solar Geoengineering Derived from Thermodynamic Constraints

    SciTech Connect

    Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik

    2015-01-16

    We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady-state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.

  4. Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change

    NASA Technical Reports Server (NTRS)

    Thompson, Anne M.; Stewart, Richard W.; Owens, Melody A.

    1989-01-01

    The sensitivities of tropospheric HO2 and hydrogen peroxide (H2O2) levels to increases in CH4, CO, and NO emissions and to changes in stratospheric O3 and tropospheric O3 and H2O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH4-CO-NO(x) emissions and global climate changes are used to predict HO2 and H2O2 changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH4 emissions will enhance H2O2; NO emissions will suppress H2O2 except in very low NO(x) regions. A global warming or stratospheric O3 depletion will add to H2O2. Hydrogen peroxide increases from 1980 to 2030 could be 100 percent or more in the urban boundary layer.

  6. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEM) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question is derived from simply demonstrating the differences in release risk areas and intensities by applying identical models to DEMs with different properties, and then extending this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach the work on DEM resolution-dependent 'slope spectra' is being considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of
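The core effect the abstract describes, that an identical slope-based model responds differently to DEMs of different resolution, can be sketched on synthetic terrain (the terrain, cell sizes, and helper name are illustrative):

```python
import numpy as np

# Sketch of the DEM-resolution effect: the same slope model applied to
# the same synthetic surface at two cell sizes.
def slope_deg(dem, cell):
    dzdy, dzdx = np.gradient(dem, cell)   # finite-difference gradients
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

rng = np.random.default_rng(1)
z = rng.normal(0.0, 5.0, (200, 200))        # rough synthetic terrain (m)

fine = slope_deg(z, cell=10.0)              # 10 m grid spacing
coarse = slope_deg(z[::5, ::5], cell=50.0)  # resampled to 50 m spacing

# Coarsening systematically flattens the slope distribution, so any
# slope-threshold release model inherits a resolution dependence.
print(fine.mean() > coarse.mean())          # True
```

Comparing the two slope histograms is essentially the 'slope spectra' idea referenced above: the spectrum shifts with cell size even though the underlying surface is unchanged.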

  7. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is a smooth curve and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP) which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD′, δD″) between these two curves were derived and used as the boundary
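A minimal 1-D gamma computation, following the standard definition the abstract builds on (global dose normalization; the function name, profile, and 1%/1 mm criteria are illustrative):

```python
import numpy as np

# Sketch of the 1-D gamma index: for each reference point, take the
# minimum over the evaluated curve of the combined DD/DTA distance.
def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.01, dta=1.0):
    norm = ref_dose.max()                  # global DD normalization
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        dist_term = ((eval_pos - rp) / dta) ** 2
        dose_term = ((eval_dose - rd) / (dd * norm)) ** 2
        gammas.append(np.sqrt(dist_term + dose_term).min())
    return np.array(gammas)

# An error-function-like penumbra, analogous to the reference profile.
x = np.linspace(0.0, 10.0, 101)            # position (mm)
d = 1.0 / (1.0 + np.exp(-(x - 5.0)))       # sigmoid dose profile

g = gamma_1d(x, d, x, d)   # comparing a profile against itself
print(g.max())             # 0.0: every point passes trivially
```

Because only the minimum gamma per reference point is kept, a smooth curve and a sawtooth curve can both pass identically, which is exactly the blind spot the derivative-based method above is meant to expose.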

  8. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures while providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of the complications. It is believed that control of bone cement volume seems to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabecular bone. At the initial stage of the project, four indices were used to represent the textural information: mean width of intertrabecular space, mean width of trabeculae, area of intertrabecular space, and area of trabeculae. Finally, the area of intertrabecular space was selected as a parameter to estimate an optimal bone cement volume and it was found that there was a strong linear relationship between these 2 variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedures. The threshold level, the radius of the rolling ball and the size of the region of interest were selected for the sensitivity analysis. As the level of threshold varied with 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied with 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied with 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image and the linear equation Y = 0.001722 X - 2

  9. Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate.

    PubMed

    Frank, David C; Esper, Jan; Raible, Christoph C; Büntgen, Ulf; Trouet, Valerie; Stocker, Benjamin; Joos, Fortunat

    2010-01-28

    The processes controlling the carbon flux and carbon storage of the atmosphere, ocean and terrestrial biosphere are temperature sensitive and are likely to provide a positive feedback leading to amplified anthropogenic warming. Owing to this feedback, at timescales ranging from interannual to the 20-100-kyr cycles of Earth's orbital variations, warming of the climate system causes a net release of CO(2) into the atmosphere; this in turn amplifies warming. But the magnitude of the climate sensitivity of the global carbon cycle (termed gamma), and thus of its positive feedback strength, is under debate, giving rise to large uncertainties in global warming projections. Here we quantify the median gamma as 7.7 p.p.m.v. CO(2) per degrees C warming, with a likely range of 1.7-21.4 p.p.m.v. CO(2) per degrees C. Sensitivity experiments exclude significant influence of pre-industrial land-use change on these estimates. Our results, based on the coupling of a probabilistic approach with an ensemble of proxy-based temperature reconstructions and pre-industrial CO(2) data from three ice cores, provide robust constraints for gamma on the policy-relevant multi-decadal to centennial timescales. By using an ensemble of >200,000 members, quantification of gamma is not only improved, but also likelihoods can be assigned, thereby providing a benchmark for future model simulations. Although uncertainties do not at present allow exclusion of gamma calculated from any of ten coupled carbon-climate models, we find that gamma is about twice as likely to fall in the lowermost than in the uppermost quartile of their range. Our results are incompatibly lower (P < 0.05) than recent pre-industrial empirical estimates of approximately 40 p.p.m.v. CO(2) per degrees C (refs 6, 7), and correspondingly suggest approximately 80% less potential amplification of ongoing global warming.

  10. Economic Analysis and Assumptions in Global Education.

    ERIC Educational Resources Information Center

    Miller, Steven L.

    Economic educators recognize the importance of a global perspective, at least in part because the international sector has become more important over the past few decades. The application of economic principles calls into question some assumptions that appear to be common among members of the global education movement. That these assumptions might…

  11. Global boundedness in a quasilinear chemotaxis system with general density-signal governed sensitivity

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Ding, Mengyao; Li, Yan

    2017-09-01

    In this paper we study the global boundedness of solutions to the quasilinear parabolic chemotaxis system u_t = ∇·(D(u)∇u − S(u)∇φ(v)), 0 = Δv − v + u, subject to homogeneous Neumann boundary conditions and initial data u_0, in a bounded and smooth domain Ω ⊂ R^n (n ≥ 2), where the diffusivity D(u) is supposed to satisfy D(u) ≥ a_0(u + 1)^(−α) with a_0 > 0 and α ∈ R, while the density-signal governed sensitivity fulfills 0 ≤ S(u) ≤ b_0(u + 1)^β and 0 < φ′(v) ≤ χ/v^k for b_0, χ > 0 and β, k ∈ R. It is shown that the solution is globally bounded if α + β < (1 − 2/n)k + 2/n with n ≥ 3 and k < 1, or α + β < 1 for k ≥ 1. This implies that large k benefits the global boundedness of solutions, owing to the weaker chemotactic migration of the signal-dependent sensitivity at high signal concentrations. Moreover, when α + β reaches the critical value, we establish the global boundedness of solutions provided the coefficient χ is suitably small. It should be emphasized that the smallness of χ for k > 1 is positively related to the total cellular mass ∫_Ω u_0 dx, which is attributed to the stronger singularity of φ(v) at v = 0 for k > 1 and the fact that v can be estimated from below by a multiple of ∫_Ω u_0 dx. In addition, distinctive phenomena concerning this model are observed by comparison with the known results.
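The exponent condition stated in this abstract can be checked mechanically; a minimal sketch, with parameter names following the abstract:

```python
# Sketch: evaluate the sufficient condition for global boundedness of the
# quasilinear chemotaxis system, as stated in the abstract.

def is_subcritical(alpha, beta, k, n):
    """True when the exponents satisfy the stated boundedness condition."""
    if k >= 1:
        return alpha + beta < 1
    if n >= 3:
        return alpha + beta < (1 - 2.0 / n) * k + 2.0 / n
    raise ValueError("the k < 1 condition is stated only for n >= 3")

print(is_subcritical(alpha=0.2, beta=0.5, k=2, n=3))   # k >= 1 branch
print(is_subcritical(alpha=0.2, beta=0.5, k=0.5, n=4)) # k < 1, n >= 3 branch
```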

  12. Quantifying PM2.5-meteorology sensitivities in a global climate model

    NASA Astrophysics Data System (ADS)

    Westervelt, D. M.; Horowitz, L. W.; Naik, V.; Tai, A. P. K.; Fiore, A. M.; Mauzerall, D. L.

    2016-10-01

    Climate change can influence fine particulate matter concentrations (PM2.5) through changes in air pollution meteorology. Knowledge of the extent to which climate change can exacerbate or alleviate air pollution in the future is needed for robust climate and air pollution policy decision-making. To examine the influence of climate on PM2.5, we use the Geophysical Fluid Dynamics Laboratory Coupled Model version 3 (GFDL CM3), a fully-coupled chemistry-climate model, combined with future emissions and concentrations provided by the four Representative Concentration Pathways (RCPs). For each of the RCPs, we conduct future simulations in which emissions of aerosols and their precursors are held at 2005 levels while other climate forcing agents evolve in time, such that only climate (and thus meteorology) can influence PM2.5 surface concentrations. We find a small increase in global, annual mean PM2.5 of about 0.21 μg m-3 (5%) for RCP8.5, a scenario with maximum warming. Changes in global mean PM2.5 are at a maximum in the fall and are mainly controlled by sulfate followed by organic aerosol with minimal influence of black carbon. RCP2.6 is the only scenario that projects a decrease in global PM2.5 with future climate changes, albeit only by -0.06 μg m-3 (1.5%) by the end of the 21st century. Regional and local changes in PM2.5 are larger, reaching upwards of 2 μg m-3 for polluted (eastern China) and dusty (western Africa) locations on an annually averaged basis in RCP8.5. Using multiple linear regression, we find that future PM2.5 concentrations are most sensitive to local temperature, followed by surface wind and precipitation. PM2.5 concentrations are robustly positively associated with temperature, while negatively related with precipitation and wind speed. Present-day (2006-2015) modeled sensitivities of PM2.5 to meteorological variables are evaluated against observations and found to agree reasonably well with observed sensitivities (within 10-50% over the
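A minimal sketch of this kind of attribution regression, on synthetic data with made-up coefficients (not the study's values), using ordinary least squares:

```python
# Sketch: multiple linear regression of PM2.5 on meteorological drivers.
# Synthetic data; the coefficients below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 500
temp = rng.normal(288, 5, n)     # temperature, K
wind = rng.normal(4, 1, n)       # surface wind speed, m/s
precip = rng.exponential(2, n)   # precipitation, mm/day

# synthetic PM2.5: positive in temperature, negative in wind and precipitation
pm25 = 10 + 0.4 * (temp - 288) - 1.2 * wind - 0.8 * precip + rng.normal(0, 0.5, n)

# design matrix: intercept, temperature anomaly, wind, precipitation
X = np.column_stack([np.ones(n), temp - 288, wind, precip])
coef, *_ = np.linalg.lstsq(X, pm25, rcond=None)
print(coef)  # recovered sensitivities, in the same order as the columns
```

The signs of the fitted coefficients reproduce the qualitative result above: PM2.5 rises with temperature and falls with wind speed and precipitation.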

  13. Quantifying PM2.5-Meteorology Sensitivities in a Global Climate Model

    NASA Technical Reports Server (NTRS)

    Westervelt, D. M.; Horowitz, L. W.; Naik, V.; Tai, A. P. K.; Fiore, A. M.; Mauzerall, D. L.

    2016-01-01

    Climate change can influence fine particulate matter concentrations (PM2.5) through changes in air pollution meteorology. Knowledge of the extent to which climate change can exacerbate or alleviate air pollution in the future is needed for robust climate and air pollution policy decision-making. To examine the influence of climate on PM2.5, we use the Geophysical Fluid Dynamics Laboratory Coupled Model version 3 (GFDL CM3), a fully-coupled chemistry-climate model, combined with future emissions and concentrations provided by the four Representative Concentration Pathways (RCPs). For each of the RCPs, we conduct future simulations in which emissions of aerosols and their precursors are held at 2005 levels while other climate forcing agents evolve in time, such that only climate (and thus meteorology) can influence PM2.5 surface concentrations. We find a small increase in global, annual mean PM2.5 of about 0.21 μg m-3 (5%) for RCP8.5, a scenario with maximum warming. Changes in global mean PM2.5 are at a maximum in the fall and are mainly controlled by sulfate followed by organic aerosol with minimal influence of black carbon. RCP2.6 is the only scenario that projects a decrease in global PM2.5 with future climate changes, albeit only by -0.06 μg m-3 (1.5%) by the end of the 21st century. Regional and local changes in PM2.5 are larger, reaching upwards of 2 μg m-3 for polluted (eastern China) and dusty (western Africa) locations on an annually averaged basis in RCP8.5. Using multiple linear regression, we find that future PM2.5 concentrations are most sensitive to local temperature, followed by surface wind and precipitation. PM2.5 concentrations are robustly positively associated with temperature, while negatively related with precipitation and wind speed. Present-day (2006-2015) modeled sensitivities of PM2.5 to meteorological variables are evaluated against observations and found to agree reasonably well with observed sensitivities (within 10-50% over the

  14. Global boundedness to a chemotaxis system with singular sensitivity and logistic source

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Zheng, Sining

    2017-02-01

    We consider the parabolic-parabolic Keller-Segel system with singular sensitivity and logistic source: u_t = Δu − χ∇·((u/v)∇v) + ru − μu², v_t = Δv − v + u, under homogeneous Neumann boundary conditions in a smooth bounded domain Ω ⊂ R², with χ, μ > 0 and r ∈ R. It is proved that the system admits globally bounded classical solutions if r > χ²/4 for 0 < χ ≤ 2, or r > χ − 1 for χ > 2.
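The two-branch condition on the logistic growth rate r can be written as a single threshold function; a minimal sketch:

```python
# Sketch: growth-rate threshold for global boundedness of the singular
# Keller-Segel system, as stated in the abstract.

def r_threshold(chi):
    """Boundedness is guaranteed for r above this threshold:
    chi^2/4 when 0 < chi <= 2, and chi - 1 when chi > 2."""
    if chi <= 0:
        raise ValueError("chi must be positive")
    return chi ** 2 / 4.0 if chi <= 2 else chi - 1.0

print(r_threshold(1.0))  # chi^2/4 branch
print(r_threshold(4.0))  # chi - 1 branch
```

Note that the two branches agree at χ = 2, where both give the threshold r = 1.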

  15. Development of a Pressure Sensitive Paint System for Measuring Global Surface Pressures on Rotorcraft Blades

    NASA Technical Reports Server (NTRS)

    Watkins, A. Neal; Leighty, Bradley D.; Lipford, William E.; Wong, Oliver D.; Oglesby, Donald M.; Ingram, JoAnne L.

    2007-01-01

    This paper will describe the results from a proof of concept test to examine the feasibility of using Pressure Sensitive Paint (PSP) to measure global surface pressures on rotorcraft blades in hover. The test was performed using the U.S. Army 2-meter Rotor Test Stand (2MRTS) and 15% scale swept rotor blades. Data were collected from five blades using both the intensity- and lifetime-based approaches. This paper will also outline several modifications and improvements that are underway to develop a system capable of measuring pressure distributions on up to four blades simultaneously at hover and forward flight conditions.

  17. Global observations of cloud-sensitive aerosol loadings in low-level marine clouds

    NASA Astrophysics Data System (ADS)

    Andersen, H.; Cermak, J.; Fuchs, J.; Schwarz, K.

    2016-11-01

    Aerosol-cloud interaction is a key component of the Earth's radiative budget and hydrological cycle, but many facets of its mechanisms are not yet fully understood. In this study, global satellite-derived aerosol and cloud products are used to identify at what aerosol loading cloud droplet size shows the greatest sensitivity to changes in aerosol loading (ACSmax). While, on average, cloud droplet size is most sensitive at relatively low aerosol loadings, distinct spatial and temporal patterns exist. Possible determinants for these are identified with reanalysis data. The magnitude of ACSmax is found to be constrained by the total columnar water vapor. Seasonal patterns of water vapor are reflected in the seasonal patterns of ACSmax. Also, situations with enhanced turbulent mixing are connected to higher ACSmax, possibly due to intensified aerosol activation. Of the analyzed aerosol species, dust seems to impact ACSmax the most, as dust particles increase the retrieved aerosol loading without substantially increasing the concentration of cloud condensation nuclei.

  18. Global bioenergy potentials from agricultural land in 2050: Sensitivity to climate change, diets and yields

    PubMed Central

    Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K.

    2011-01-01

    There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a “food first” approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO2 on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y−1, depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply

  19. Ringed Seal Search for Global Optimization via a Sensitive Search Model

    PubMed Central

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires to be balanced between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals against external noise emitted by predators, the random walk of the seal pup takes two different search states, normal state and urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is realized by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particles Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global
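A toy one-dimensional sketch of the normal/urgent switching idea described above, with Gaussian steps for the Brownian walk and Cauchy steps as a heavy-tailed stand-in for the Levy walk (all parameters illustrative, not from the paper):

```python
# Toy sketch of the Ringed Seal Search state-switching idea: alternate
# between a local Brownian walk (normal state) and a heavy-tailed
# Levy-style walk (urgent state), keeping the best point found.
import math
import random

def rss_minimize(f, x0, iters=2000, noise_prob=0.2, seed=1):
    rng = random.Random(seed)
    x = x0
    best_x, best_f = x0, f(x0)
    urgent = False
    for _ in range(iters):
        if urgent:
            # urgent state: extensive search via a heavy-tailed (Cauchy) step
            step = math.tan(math.pi * (rng.random() - 0.5))
        else:
            # normal state: intensive local search via a small Gaussian step
            step = rng.gauss(0.0, 0.1)
        x_new = x + step
        if f(x_new) < best_f:  # greedy acceptance of improvements
            best_x, best_f = x_new, f(x_new)
            x = x_new
        # "predator noise" randomly toggles between the two states
        urgent = rng.random() < noise_prob
    return best_x, best_f

sphere = lambda x: x * x  # simple benchmark with minimum at 0
x_best, f_best = rss_minimize(sphere, x0=5.0)
print(x_best, f_best)
```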

  20. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires to be balanced between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals against external noise emitted by predators, the random walk of the seal pup takes two different search states, normal state and urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is realized by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particles Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global

  1. Global bioenergy potentials from agricultural land in 2050: Sensitivity to climate change, diets and yields.

    PubMed

    Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K

    2011-12-01

    There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a "food first" approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO2 on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y−1, depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply.

  2. GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis

    NASA Astrophysics Data System (ADS)

    Kennedy, Christopher Brandon

    model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher order response sensitivity profiles is a natural area for future research.
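The snapshot-plus-projection idea underlying such reduced-order models can be sketched generically: collect solutions for randomly sampled inputs, build a basis by SVD, and check the projection error of a new solution. This is a toy low-rank model, not the SCALE6/GPT-Free workflow itself:

```python
# Generic ROM sketch: random snapshots -> SVD basis -> projection error.
# Illustrative only; the "model" is a toy low-rank linear map.
import numpy as np

rng = np.random.default_rng(42)
dim, n_samples, rank = 50, 30, 5

# toy model: flux = A @ params, where A has low effective rank
A = rng.normal(size=(dim, rank)) @ rng.normal(size=(rank, dim))
snapshots = np.column_stack([A @ rng.normal(size=dim) for _ in range(n_samples)])

# build the retained subspace from the leading left singular vectors
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :rank]

# project a new, unseen solution onto the subspace and measure the error
flux = A @ rng.normal(size=dim)
flux_rom = basis @ (basis.T @ flux)
rel_err = np.linalg.norm(flux - flux_rom) / np.linalg.norm(flux)
print(rel_err)
```

Because the toy model's solutions all live in a 5-dimensional subspace, the reduction error here is at machine-precision level; in practice the retained rank is chosen against a user-specified error tolerance, as described above.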

  3. Changing carbon cycle: a global analysis

    SciTech Connect

    Trabalka, J.R.; Reichle, D.E.

    1986-01-01

    An attempt is made to examine current knowledge about the fluxes, sources, and sinks in the global carbon cycle, as well as our ability to predict changes in atmospheric CO2 concentration resulting from anthropogenic influences. The reader will find authoritative discussions of: past and expected releases of CO2 from fossil fuels; the historical record and implications of atmospheric CO2 increases; isotopic and geological records of past carbon cycle processes; the role of the oceans in the global carbon cycle; the influence of the world biosphere on changes in atmospheric CO2 levels; and evidence linking the components of the global carbon cycle.

  4. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
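A minimal pick-freeze estimator of first-order Sobol' indices, the variance-based measure behind this kind of parameter ranking, applied to a toy additive model (not the hydraulic model itself):

```python
# Sketch: pick-freeze estimation of first-order Sobol' indices on a toy
# model. Illustrative of variance-based global sensitivity analysis only.
import numpy as np

def first_order_sobol(f, d, n=20000, seed=0):
    """Estimate first-order Sobol' indices S_i for f on [0,1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    yA = f(A)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]  # freeze factor i at its A-sample values
        S[i] = np.cov(yA, f(ABi))[0, 1] / var
    return S

# toy additive model: factor 0 dominates, factor 2 is inert
f = lambda X: 4 * X[:, 0] + X[:, 1] + 0 * X[:, 2]
S = first_order_sobol(f, 3)
print(S.round(2))
```

For this additive model the exact indices are 16/17, 1/17, and 0, so the estimated ranking directly mirrors each factor's share of the output variance.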

  5. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
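The step-size selection issue mentioned above comes down to balancing truncation error (step too large) against roundoff error (step too small); a minimal central-difference sketch:

```python
# Sketch: finite-difference step-size tradeoff for sensitivity estimates.
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

exact = math.cos(1.0)  # d/dx sin(x) at x = 1
for h in (1e-1, 1e-5, 1e-12):
    err = abs(central_diff(math.sin, 1.0, h) - exact)
    print(f"h={h:g}  error={err:.2e}")
```

The intermediate step size wins: at h = 0.1 the O(h²) truncation term dominates, while at h = 1e-12 the difference of nearly equal function values is swamped by floating-point roundoff.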

  6. Global Analysis of Photosynthesis Transcriptional Regulatory Networks

    PubMed Central

    Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.

    2014-01-01

    Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis. PMID:25503406

  7. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    PubMed Central

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
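A toy Monte Carlo uncertainty analysis for a made-up HSI illustrates the UA half of the framework: sample the uncertain inputs, propagate them through the index, and summarize the resulting score distribution (the suitability curves below are invented, not the SAV models'):

```python
# Toy Monte Carlo uncertainty analysis (UA) for a hypothetical habitat
# suitability index. All input distributions and suitability curves are
# invented for illustration.
import numpy as np

rng = np.random.default_rng(7)
n = 10000
salinity = rng.normal(15, 3, n)   # psu, uncertain input
depth = rng.uniform(0.5, 2.0, n)  # m, uncertain input

def hsi(salinity, depth):
    # component suitabilities in [0, 1], combined geometrically
    s_sal = np.exp(-((salinity - 12) / 8.0) ** 2)
    s_dep = np.clip(1.5 - abs(depth - 1.0), 0.0, 1.0)
    return np.sqrt(s_sal * s_dep)

scores = hsi(salinity, depth)
print(scores.mean().round(3), np.percentile(scores, [5, 95]).round(3))
```

The spread of the score distribution (here summarized by the 5th and 95th percentiles) is the uncertainty that the GSA step then attributes to individual inputs and their interactions.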

  8. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    PubMed

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  9. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  10. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
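    The adjoint approach described above yields derivatives of a model output with respect to parameters at roughly the cost of one extra solve. As a point of comparison, the sketch below checks the "expensive" alternative the adjoint method replaces: a central finite difference of a quantity of interest on a toy decay model. The model, parameter names, and values here are illustrative assumptions, not taken from the Keeling & Gilligan plague model.

```python
import math

# Toy model (illustrative only): dx/dt = -theta * x, with quantity of
# interest Q(theta) = x(T).  We estimate dQ/dtheta by a central finite
# difference and compare against the analytic derivative.
def solve(theta, x0=1.0, T=2.0, dt=1e-4):
    """Forward-Euler integration of dx/dt = -theta * x up to time T."""
    n = int(round(T / dt))
    x = x0
    for _ in range(n):
        x -= theta * x * dt
    return x

theta, h = 0.5, 1e-4
dQ_fd = (solve(theta + h) - solve(theta - h)) / (2 * h)
dQ_exact = -2.0 * math.exp(-theta * 2.0)  # d/dtheta of x0 * exp(-theta * T)
```

The finite-difference estimate requires two full solves per parameter, which is why variational/adjoint formulations become attractive for models with many parameters.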

  11. Design sensitivity analysis using EAL. Part 2: Shape design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.

    1986-01-01

    A numerical implementation of shape design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its data base management system. This report is a continuation of a previous report on conventional design parameters. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid analysis method. It has been shown that shape design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate data base. The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to derive shape design sensitivity information of structural performances. A domain method of shape design sensitivity analysis and a design component method are used. Displacement and stress functionals are considered as performance criteria.

  12. Sensitivity of the global distribution of cirrus ice crystal concentration to heterogeneous freezing

    NASA Astrophysics Data System (ADS)

    Barahona, D.; Rodriguez, J.; Nenes, A.

    2010-12-01

    This study presents the sensitivity of global ice crystal number concentration, Nc, to the parameterization of heterogeneous ice nuclei (IN). Simulations are carried out with the NASA Global Modeling Initiative chemical and transport model coupled to an analytical ice microphysics parameterization. Heterogeneous freezing is described using nucleation spectra derived from theoretical considerations and empirical data for dust, black carbon, ammonium sulfate, and glassy aerosol as IN precursors. When competition between homogeneous and heterogeneous freezing is considered, global mean Nc varies by up to a factor of twenty depending on the heterogeneous freezing spectrum used. IN effects on Nc strongly depend on dust and black carbon concentrations and are strongest under conditions of weak updraft and high temperature. Regardless of the heterogeneous spectrum used, dust is an important contributor of IN over large regions of the Northern Hemisphere. Black carbon, however, exhibits appreciable effects on Nc when the freezing fraction is greater than 1%. Compared to in situ observations, Nc is overpredicted at temperatures below 205 K, even if a fraction of liquid aerosol is allowed to act as glassy IN. Assuming that cirrus formation is forced by weak updrafts addressed this overprediction but promoted heterogeneous freezing effects to the point where homogeneous freezing is inhibited for IN concentrations as low as 1 L-1. Chemistry and dynamics must be considered to explain cirrus characteristics at low temperature. Only cloud formation scenarios where competition between homogeneous and heterogeneous freezing is the dominant feature would result in maximum supersaturation levels consistent with observations.

  13. The sensitivity of ozone and fine particulate matter concentrations to global change at different spatiotemporal scales

    NASA Astrophysics Data System (ADS)

    Racherla, Pavan Nandan

    Ozone (O3) and fine particulate matter (PM) are harmful to human health. Changes in climate and anthropogenic emissions due to global change will affect concentrations of O3 and fine PM. These effects are not well understood, however. We perform a suite of simulations using an integrated model of global climate, tropospheric gas-phase chemistry, and aerosols to investigate the effects of global change on O3 and fine PM at different spatiotemporal scales ranging from the global annual-average concentrations to regional (e.g., United States) air pollution episodes. One major consequence of climate change is a lengthening of the O3 season over the eastern U.S. to include late spring and early fall months. Climate change is also predicted to increase the severity and frequency of O3 episodes over much of the eastern U.S. We found that U.S. O3 and fine PM are sensitive first and foremost to U.S. anthropogenic emissions changes. However, the effect of climate change is very sensitive to the prevalent domestic anthropogenic emissions, and it increases strongly with emissions, thereby making it important to factor climate change into air-quality planning. The reductions in domestic emissions will, therefore, have the added benefit of minimized climate effects. Climate change affects fine PM sulfate and nitrate concentrations the most. Substantial increases of up to 2 μg m-3 in the July-average sulfate concentrations were predicted in many polluted regions in the eastern U.S. Higher NOx and ammonia emissions could negate the benefits of significant SO2 emissions reductions vis-à-vis the annual-average PM2.5 standard for several areas in the Northeast and Midwest U.S. Simultaneous reductions in SO2 and NOx emissions, however, will help bring most of the eastern U.S. into compliance with the current annual-average PM2.5 standard. If the U.S. O3 standard were to change from the current 80 ppbv to 55 ppbv (which is the case in many European countries), the increased O3

  14. NPV Sensitivity Analysis: A Dynamic Excel Approach

    ERIC Educational Resources Information Center

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…
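    The dynamic approach the paper advocates amounts to recomputing NPV as each input varies rather than freezing a single static formula. A minimal sketch of that idea in Python (the project cash flows and discount rates below are hypothetical, not from the paper):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] occurs at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: 1000 outlay, then five annual inflows of 300.
base = [-1000.0] + [300.0] * 5

# One-way sensitivity: recompute NPV across a range of discount rates.
table = {rate: npv(rate, base) for rate in (0.05, 0.08, 0.10, 0.12)}
for rate, value in table.items():
    print(f"rate={rate:.0%}  NPV={value:9.2f}")
```

The same pattern extends to a two-way grid (e.g., rate versus cash-flow growth) to expose the interdependence among variables that a single static cell hides.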

  15. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, UK, and Australia. To the best of our knowledge, however, this is its first application in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
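    DEA efficiency scores of the kind computed above are solutions of one linear program per decision-making unit (DMU). The sketch below solves the input-oriented CCR multiplier model with SciPy; the department data are invented toy numbers, and the model is the textbook CCR formulation rather than the paper's exact specification (which also handles weight restrictions and sensitivity runs).

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs).
    Maximize u.y0 subject to v.x0 = 1 and u.yj - v.xj <= 0 for all j.
    (The classical epsilon > 0 lower bound on multipliers is omitted.)"""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([-Y[j0], np.zeros(m)])           # linprog minimizes
    A_eq = np.concatenate([np.zeros(s), X[j0]])[None]   # v.x0 = 1
    A_ub = np.hstack([Y, -X])                           # u.yj - v.xj <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0], bounds=(0, None))
    assert res.success
    return -res.fun

# Toy data: 4 departments, inputs = (faculty, budget), output = (papers,).
X = np.array([[5., 10.], [8., 12.], [6., 8.], [10., 20.]])
Y = np.array([[20.], [24.], [18.], [30.]])
eff = [ccr_efficiency(X, Y, j) for j in range(len(X))]
```

Efficient departments score 1; for the rest, the binding constraints identify the reference set, and the slack between u.y and v.x suggests the input/output projections onto the frontier.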

  16. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  17. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°/75°C, -55°/100°C, and -55°/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of the stress/strain in CCGA assembly under different thermal cycling, and to explain the different failure modes and locations observed in JPL test assemblies.

  19. Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling

    SciTech Connect

    Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis

    2001-10-01

    A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.

  20. Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health

    PubMed Central

    2010-01-01

    It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all. PMID:20849605

  1. Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health.

    PubMed

    Martens, Pim; Akin, Su-Mia; Maud, Huynen; Mohsin, Raza

    2010-09-17

    It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all.

  2. Global O3-CO Correlations in a Global Model During July-August: Evaluation with TES Satellite Observations and Sensitivity to Emissions

    NASA Astrophysics Data System (ADS)

    Choi, H.; Liu, H.; Crawford, J. H.; Considine, D. B.; Allen, D. J.; Duncan, B. N.; Rodriguez, J. M.; Strahan, S. E.; Damon, M.; Steenrod, S. D.; Zhang, L.; Liu, X.

    2013-12-01

    We examine global mid-tropospheric (619 hPa) ozone - carbon monoxide (O3-CO) correlations and their sensitivity to emissions during July - August 2005 in the Global Modeling Initiative (GMI) chemistry and transport model driven by the Modern-Era Retrospective Analysis for Research and Application (MERRA) meteorological data set. We evaluate the simulated O3 with climatological O3 profiles from ozonesonde measurements and satellite tropospheric O3 columns. Model O3-CO correlations are (1) positive in the Northern Hemisphere continental outflow regions with large dO3/dCO enhancement ratios, and in the southern African westerly outflow region and Indonesia with small dO3/dCO enhancement ratios; and (2) negative over the Asian continent (including the Tibetan Plateau), Middle East, northern and central Africa, and tropical and subtropical deep convective regions. These patterns are consistent with those derived from collocated measurements of O3 and CO from the Tropospheric Emission Spectrometer (TES) on board NASA's Aura satellite, except over the tropical Atlantic and Pacific. Model sensitivity experiments indicate that fossil fuel emissions are responsible for the positive O3-CO correlations in major continental outflow regions and Europe. Biomass burning emissions lead to the positive correlations in the Southern Hemisphere mid-high latitudes. Biogenic emissions make important contributions to the negative O3-CO correlations over the tropical eastern Pacific. Lightning NOx emissions significantly reduce both the positive O3-CO correlations at mid-high latitudes and the negative correlations in the tropics. The corresponding chemical and transport processes will be discussed.
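    The two diagnostics used throughout the abstract, the O3-CO correlation and the dO3/dCO enhancement ratio, are the correlation coefficient and the least-squares slope of collocated O3 and CO samples. A minimal sketch on synthetic data (the concentrations and the 0.30 slope below are invented for illustration, not TES or GMI values):

```python
import numpy as np

# Synthetic collocated mid-tropospheric samples (ppbv).
rng = np.random.default_rng(1)
co = rng.uniform(60.0, 140.0, 500)
o3 = 20.0 + 0.30 * co + rng.normal(0.0, 3.0, 500)  # built-in positive relationship

r = np.corrcoef(co, o3)[0, 1]          # O3-CO correlation
slope = np.polyfit(co, o3, 1)[0]       # dO3/dCO enhancement ratio (ppbv/ppbv)
```

A positive slope of a few tenths is the signature of photochemical O3 production in polluted outflow, while negative correlations arise where transport mixes high-CO, low-O3 air into the mid-troposphere.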

  3. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
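    The variance-based indices underlying this method can be sketched compactly. The example below estimates first-order Sobol' indices with the standard pick-freeze Monte Carlo estimator on the Ishigami test function; the function and sample sizes are conventional benchmarks, not the groundwater model from the paper.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami benchmark; analytic first-order indices ~ (0.314, 0.442, 0)."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = fA.var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all columns except i
    S.append(float(np.mean(fB * (ishigami(ABi) - fA)) / var))
```

The hierarchical method in the paper reduces cost by grouping correlated inputs before applying exactly this kind of variance decomposition, so far fewer model evaluations are needed than the d+2 matrices of a naive design.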

  4. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  5. An Analysis of Solar Global Activity

    NASA Astrophysics Data System (ADS)

    Mouradian, Zadig

    2013-02-01

    This article proposes a unified observational model of solar activity based on sunspot number and the solar global activity in the rotation of the structures, both per 11-year cycle. The rotation rates show a variation with a half-century period, and the same period is also associated with the sunspot amplitude variation. The global solar rotation interweaves with the observed global organisation of solar activity. An important role for this assembly is played by the Grand Cycle formed by the merging of five sunspot cycles: a forgotten discovery by R. Wolf. On the basis of these elements, the nature of the Dalton Minimum, the Maunder Minimum, the Gleissberg Cycle, and the Grand Minima are presented.

  6. Global Human Settlement Analysis for Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Pesaresi, M.; Ehrlich, D.; Ferri, S.; Florczyk, A.; Freire, S.; Haag, F.; Halkia, M.; Julea, A. M.; Kemper, T.; Soille, P.

    2015-04-01

    The Global Human Settlement Layer (GHSL) is supported by the European Commission, Joint Research Center (JRC) in the frame of its institutional research activities. The scope of the GHSL is to develop, test, and apply the technologies and analysis methods integrated in the JRC Global Human Settlement analysis platform for applications in support of global disaster risk reduction (DRR) initiatives and regional analysis in the frame of the European Cohesion policy. The GHSL analysis platform uses geospatial data, primarily remotely sensed imagery and population data. GHSL also cooperates with the Group on Earth Observation on SB-04-Global Urban Observation and Information, and with various international partners, the World Bank, and United Nations agencies. Some preliminary results integrating global human settlement information extracted from Landsat data records of the last 40 years with population data are presented.

  7. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors into the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  8. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  9. Analysis of Sensitivity Experiments - An Expanded Primer

    DTIC Science & Technology

    2017-03-08

    seems that the old report is in need of much improvement; also, a few additional concepts need to be addressed. Specifically, the discussion of... addition to the theory behind sensitivity testing. Also, the Logit method based upon the logistic distribution has been derived in some detail... Additional examples have been added to the exposition, and some effort has been invested in placing a more substantial level of detail within these

  11. Introduction to special section on sensitivity analysis and summary of NCSU/USDA workshop on sensitivity analysis.

    PubMed

    Frey, H Christopher

    2002-06-01

    This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11-12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to learn across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food-safety process risk models. The workshop included presentations regarding the Hazard Assessment and Critical Control Points (HACCP) framework used in food-safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Based on the sharing of interdisciplinary information represented by the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food-safety risk assessment? What sensitivity analysis methods are most promising for application to food safety and risk assessment? and What are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.

  12. Sensitivity Analysis of Situational Awareness Measures

    NASA Technical Reports Server (NTRS)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine the sensitivity not only within a measure but also between measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  14. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that produces the usual fast movement to a slower movement that may also involve external loading), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
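    The task dependence of parameter sensitivity noted above can be illustrated with a toy first-order muscle-like model (hypothetical parameters, not the authors' eighth-order model): a brief "fast" pulse versus a sustained "slow" one changes which parameter the peak response is most sensitive to.

```python
# Toy illustration of task-dependent parameter sensitivity: a first-order
# muscle-like model x' = (u(t) - k*x) / b (stiffness k, damping b; values
# are hypothetical), driven by a brief "fast" pulse versus a sustained
# "slow" one. The output metric is the peak response; sensitivities are
# normalized central differences.

def peak_response(k, b, pulse_len, dt=1e-3, t_end=4.0):
    x, peak, t = 0.0, 0.0, 0.0
    while t < t_end:
        u = 1.0 if t < pulse_len else 0.0
        x += dt * (u - k * x) / b          # forward Euler step
        peak = max(peak, x)
        t += dt
    return peak

def rel_sens(param, task_pulse_len, base_k=1.0, base_b=0.5, d=0.05):
    """Normalized sensitivity of the peak to one parameter for one task."""
    kw = dict(k=base_k, b=base_b, pulse_len=task_pulse_len)
    up, dn = dict(kw), dict(kw)
    up[param] *= 1 + d
    dn[param] *= 1 - d
    p0 = peak_response(**kw)
    return (peak_response(**up) - peak_response(**dn)) / (2 * d * p0)

for task, pulse in [("fast", 0.2), ("slow", 2.0)]:
    sk, sb = rel_sens("k", pulse), rel_sens("b", pulse)
    print(f"{task} task: S_k = {sk:+.2f}, S_b = {sb:+.2f}")
```

For the fast pulse the peak is dominated by damping b; for the sustained pulse it is dominated by stiffness k, so the high-sensitivity parameter set changes with the task, as the abstract states.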

  15. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best exploit the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. Results of design sensitivity analysis are used to carry out design optimization of a built-up structure.
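    For a discrete analogue of the adjoint variable method mentioned above, consider a hypothetical 2 x 2 system K(p) u = f with performance measure J = c^T u: one adjoint solve K^T lambda = c yields dJ/dp = -lambda^T (dK/dp) u, avoiding one extra solve per design parameter (the matrices below are illustrative, not the paper's continuum formulation).

```python
# Adjoint design sensitivity for a discrete system K(p) u = f, J = c^T u.
# dJ/dp = -lambda^T (dK/dp) u, with K^T lambda = c. The 2x2 "structure"
# is hypothetical; a finite-difference check verifies the result.

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def K(p):                       # stiffness matrix depending on design p
    return [[2.0 + p, -1.0], [-1.0, 2.0 + p]]

def dK_dp(p):                   # its design derivative
    return [[1.0, 0.0], [0.0, 1.0]]

f = [1.0, 0.0]
c = [0.0, 1.0]                  # J = u[1]
p = 1.0

u = solve2(K(p), f)             # state solve
lam = solve2(K(p), c)           # adjoint solve (K symmetric, so K^T = K)
dKu = [sum(dK_dp(p)[i][j] * u[j] for j in range(2)) for i in range(2)]
dJ_dp_adjoint = -sum(lam[i] * dKu[i] for i in range(2))

# Finite-difference check of the adjoint sensitivity.
h = 1e-6
J = lambda pp: solve2(K(pp), f)[1]
dJ_dp_fd = (J(p + h) - J(p - h)) / (2 * h)
print(dJ_dp_adjoint, dJ_dp_fd)
```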

  16. Toward Global Content Analysis and Media Criticism.

    ERIC Educational Resources Information Center

    Nordenstreng, Kaarle

    1995-01-01

    Presents the background, rationale, and implementation prospects for an international system of monitoring media coverage of global problems such as peace and war, human rights, and the environment. Outlines the monitoring project carried out in January 1995 concerning the representation and portrayal of women in news media. (SR)

  17. Global Proteome Analysis of Leptospira interrogans

    USDA-ARS?s Scientific Manuscript database

    Comparative global proteome analyses were performed on Leptospira interrogans serovar Copenhageni grown under conventional in vitro conditions and those mimicking in vivo conditions (iron limitation and serum presence). Proteomic analyses were conducted using iTRAQ and LC-ESI-tandem mass spectrometr...

  19. Global Population Genetic Analysis of Aspergillus fumigatus

    PubMed Central

    Ashu, Eta Ebasi; Hagen, Ferry; Chowdhary, Anuradha

    2017-01-01

    ABSTRACT Aspergillus fumigatus is a ubiquitous opportunistic fungal pathogen capable of causing invasive aspergillosis, a globally distributed disease with a mortality rate of up to 90% in high-risk populations. Effective control and prevention of this disease require a thorough understanding of its epidemiology. However, despite significant efforts, the global molecular epidemiology of A. fumigatus remains poorly understood. In this study, we analyzed 2,026 A. fumigatus isolates from 13 countries in four continents using nine highly polymorphic microsatellite markers. Genetic cluster analyses suggest that our global sample of A. fumigatus isolates belonged to eight genetic clusters, with seven of the eight clusters showing broad geographic distributions. We found common signatures of sexual recombination within individual genetic clusters and clear evidence of hybridization between several clusters. Limited but statistically significant genetic differentiations were found among geographic and ecological populations. However, there was abundant evidence for gene flow at the local, regional, and global scales. Interestingly, the triazole-susceptible and triazole-resistant populations showed different population structures, consistent with antifungal drug pressure playing a significant role in local adaptation. Our results suggest that global populations of A. fumigatus are shaped by historical differentiation, contemporary gene flow, sexual reproduction, and the localized antifungal drug selection that is driving clonal expansion of genotypes resistant to multiple triazole drugs. IMPORTANCE The genetic diversity and geographic structure of the human fungal pathogen A. fumigatus have been the subject of many studies. However, most previous studies had relatively limited sample ranges and sizes and/or used genetic markers with low-level polymorphisms. In this paper, we characterize a global collection of strains of A. fumigatus using a panel of 9 highly

  20. Global analysis of duality maps in quantum field theory

    SciTech Connect

    Restuccia, A.

    1997-03-15

    A global analysis of duality transformations is presented. Global constraints are introduced in order to have the correct structure of the configuration spaces. This global structure is completely determined from the quantum equivalence of dual actions. Applications to S-dual actions and to T duality of string theories and D-branes are briefly discussed. It is shown that a new topological term in the dual open string actions is required.

  1. Global functions in global-local finite-element analysis of localized stresses in prismatic structures

    NASA Technical Reports Server (NTRS)

    Dong, Stanley B.

    1989-01-01

    An important consideration in the global-local finite-element method (GLFEM) is the availability of global functions for the given problem. The role and mathematical requirements of these global functions in a GLFEM analysis of localized stress states in prismatic structures are discussed. A method is described for determining these global functions. Underlying this method are theorems due to Toupin and Knowles on strain energy decay rates, which are related to a quantitative expression of Saint-Venant's principle. It is mentioned that a mathematically complete set of global functions can be generated, so that any arbitrary interface condition between the finite element and global subregions can be represented. Convergence to the true behavior can be achieved with increasing global functions and finite-element degrees of freedom. Specific attention is devoted to mathematically two-dimensional and three-dimensional prismatic structures. Comments are offered on the GLFEM analysis of a NASA flat panel with a discontinuous stiffener. Methods for determining global functions for other effects, such as steady-state dynamics and bodies under initial stress, are also indicated.

  2. Global sensitivity of high-resolution estimates of crop water footprint

    NASA Astrophysics Data System (ADS)

    Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2015-10-01

    Most of the human appropriation of freshwater resources is for agriculture. Water availability is a major constraint to mankind's ability to produce food. The notion of virtual water content (VWC), also known as crop water footprint, provides an effective tool to investigate the linkage between food and water resources as a function of climate, soil, and agricultural practices. The spatial variability in the virtual water content of crops is here explored, disentangling its dependency on climate and crop yields and assessing the sensitivity of VWC estimates to parameter variability and uncertainty. Here we calculate the virtual water content of four staple crops (i.e., wheat, rice, maize, and soybean) for the entire world developing a high-resolution (5 × 5 arc min) model, and we evaluate the VWC sensitivity to input parameters. We find that food production almost entirely depends on green water (>90%), but, when applied, irrigation makes crop production more water efficient, thus requiring less water. The spatial variability of the VWC is mostly controlled by the spatial patterns of crop yields with an average correlation coefficient of 0.83. The results of the sensitivity analysis show that wheat is most sensitive to the length of the growing period, rice to reference evapotranspiration, maize and soybean to the crop planting date. The VWC sensitivity varies not only among crops, but also across the harvested areas of the world, even at the subnational scale.
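    The parameter-sensitivity evaluation described above can be sketched with a one-at-a-time perturbation of a toy virtual water content model (the crop-coefficient form and every parameter value below are illustrative assumptions, not the study's high-resolution model):

```python
# One-at-a-time sensitivity of a toy crop water footprint (VWC) model:
# VWC = 10 * seasonal crop evapotranspiration [mm] / yield [t/ha],
# with ET driven by reference evapotranspiration ET0, a crop coefficient
# kc, and a growing-period length. All values are hypothetical.

def vwc(et0_mm_day, kc, growing_days, yield_t_ha):
    """Virtual water content in m^3 per tonne (toy model)."""
    et_tot_mm = et0_mm_day * kc * growing_days     # seasonal crop ET, mm
    return 10.0 * et_tot_mm / yield_t_ha           # 1 mm over 1 ha = 10 m^3

base = dict(et0_mm_day=5.0, kc=1.0, growing_days=120, yield_t_ha=3.0)

def relative_sensitivity(param, delta=0.05):
    """Normalized sensitivity S = (dVWC/VWC) / (dp/p), central differences."""
    up, down = dict(base), dict(base)
    up[param] *= 1.0 + delta
    down[param] *= 1.0 - delta
    dv = vwc(**up) - vwc(**down)
    return (dv / vwc(**base)) / (2.0 * delta)

for p in base:
    print(p, round(relative_sensitivity(p), 3))
```

In this multiplicative toy model the climatic drivers carry sensitivity near +1 and yield near -1, echoing the abstract's finding that VWC variability is strongly controlled by yields.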

  3. Cost Sensitivity Analysis for Radiology Department Planning

    PubMed Central

    Sullivan, William G.; Thuesen, Gerald J.

    1971-01-01

    Two complementary computer programs have been developed for forecasting the demands and evaluating the costs of proposed radiographic facilities. The models are employed here in analyses of the sensitivity of investment and operating costs to selected design variables. The versatility of the evaluation methodology is further illustrated by a comparison of costs for alternative facility arrangements representing various degrees of decentralization. Cost differences arising from the use of conventional or high-speed x-ray equipment and from one-shift or two-shift operation are also explored for the various alternative arrangements. PMID:5133836

  4. Alanine and proline content modulate global sensitivity to discrete perturbations in disordered proteins.

    PubMed

    Perez, Romel B; Tischer, Alexander; Auton, Matthew; Whitten, Steven T

    2014-12-01

    Molecular transduction of biological signals is understood primarily in terms of the cooperative structural transitions of protein macromolecules, providing a mechanism through which discrete local structure perturbations affect global macromolecular properties. The recognition that proteins lacking tertiary stability, commonly referred to as intrinsically disordered proteins (IDPs), mediate key signaling pathways suggests that protein structures without cooperative intramolecular interactions may also have the ability to couple local and global structure changes. Presented here are results from experiments that measured and tested the ability of disordered proteins to couple local changes in structure to global changes in structure. Using the intrinsically disordered N-terminal region of the p53 protein as an experimental model, a set of proline (PRO) and alanine (ALA) to glycine (GLY) substitution variants were designed to modulate backbone conformational propensities without introducing non-native intramolecular interactions. The hydrodynamic radius (R(h)) was used to monitor changes in global structure. Circular dichroism spectroscopy showed that the GLY substitutions decreased polyproline II (PP(II)) propensities relative to the wild type, as expected, and fluorescence methods indicated that substitution-induced changes in R(h) were not associated with folding. The experiments showed that changes in local PP(II) structure cause changes in R(h) that are variable and that depend on the intrinsic chain propensities of PRO and ALA residues, demonstrating a mechanism for coupling local and global structure changes. Molecular simulations that model our results were used to extend the analysis to other proteins and illustrate the generality of the observed PRO and alanine effects on the structures of IDPs.

  5. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over

  6. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
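    The screen-then-refine structure described in the abstract can be sketched on a toy deterministic model (the Fisher-information bound is replaced here by a simple magnitude threshold, so everything below is an illustrative stand-in, not the authors' method):

```python
# Two-step sensitivity strategy, toy version: a cheap one-sided
# perturbation ranks parameters and screens out the (nearly) insensitive
# ones; costlier central differences are spent only on the survivors.

import math

def model(theta):
    """Toy 'sloppy' output: strongly depends on theta[0], weakly on the rest."""
    return math.exp(theta[0]) + 1e-4 * theta[1] + 1e-6 * theta[2] ** 2

theta0 = [0.5, 1.0, 2.0]
h = 1e-3

# Step 1: cheap forward-difference screen.
f0 = model(theta0)
scores = []
for i in range(len(theta0)):
    t = list(theta0); t[i] += h
    scores.append(abs(model(t) - f0) / h)

threshold = 1e-3 * max(scores)
survivors = [i for i, s in enumerate(scores) if s >= threshold]

# Step 2: accurate central differences for surviving parameters only.
sensitivities = {}
for i in survivors:
    up, dn = list(theta0), list(theta0)
    up[i] += h; dn[i] -= h
    sensitivities[i] = (model(up) - model(dn)) / (2 * h)

print("screened out:", [i for i in range(3) if i not in survivors])
print("refined sensitivities:", sensitivities)
```

The savings scale as in the abstract: refinement cost is proportional to the number of surviving parameters rather than the total number.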

  7. Tuning the climate sensitivity of a global model to match 20th Century warming

    NASA Astrophysics Data System (ADS)

    Mauritsen, T.; Roeckner, E.

    2015-12-01

    A climate model's ability to reproduce observed historical warming is sometimes viewed as a measure of quality. Yet, for practical reasons, historical warming cannot be considered a purely empirical result of the modelling efforts, because the desired result is known in advance and so is a potential target of tuning. Here we explain how the atmospheric model (ECHAM6.3) of the latest edition of the Max Planck Institute for Meteorology Earth System Model (MPI-ESM1.2), the MPI model to be used during CMIP6, had its climate sensitivity systematically tuned to about 3 K. This was deliberately done to improve the match to observed 20th Century warming over the previous model generation (MPI-ESM, ECHAM6.1), which warmed too much and had a sensitivity of 3.5 K. In the process we identified several controls on model cloud feedback that confirm recently proposed hypotheses concerning trade-wind cumulus and high-latitude mixed-phase clouds. We then evaluate the model's fidelity with centennial global warming and discuss the relative importance of climate sensitivity, forcing, and ocean heat uptake efficiency in determining the response, as well as possible systematic biases. The activity of targeting historical warming during model development is polarizing the modeling community: 35 percent of modelers rated 20th Century warming as very important to decisive, whereas 30 percent would not consider it at all. Likewise, opinions diverge as to which measures are legitimate means of improving the model's match to observed warming. These results are from a survey conducted in conjunction with the first WCRP Workshop on Model Tuning in fall 2014, answered by 23 modelers. We argue that tuning or constructing models to match observed warming is to some extent practically unavoidable, and as such, in many cases might as well be done explicitly. For modeling groups that have the capability to tune both their aerosol forcing and climate sensitivity there is now a unique

  8. Aircraft concept optimization using the global sensitivity approach and parametric multiobjective figures of merit

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1992-01-01

    An extension of our parametric multidisciplinary optimization method to include design results connecting multiple objective functions is presented. New insight into the effect of the figure of merit (objective function) on aircraft configuration size and shape is demonstrated using this technique. An aircraft concept, subject to performance and aerodynamic constraints, is optimized using the global sensitivity equation method for a wide range of objective functions. These figures of merit are described parametrically such that a series of multiobjective optimal solutions can be obtained. Computational speed is facilitated by use of algebraic representations of the system technologies. Using this method, the evolution of an optimum design from one objective function to another is demonstrated. Specifically, combinations of minimum takeoff gross weight, fuel weight, and maximum cruise performance and productivity parameters are used as objective functions.
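    The parametric sweep of figures of merit can be illustrated with a hypothetical one-variable design problem: blend two objectives with a weight w, minimize for each w, and watch the optimum migrate (this is a generic weighted-sum sketch, not the global sensitivity equation method itself; both objective functions are invented):

```python
# Tracing a family of optima as the figure of merit is varied
# parametrically: J(x; w) = w * gross_weight(x) + (1 - w) * fuel_weight(x)
# for a scalar design surrogate x. The algebraic objectives are
# hypothetical stand-ins for the paper's technology models.

def gross_weight(x):
    return (x - 2.0) ** 2 + 10.0      # minimized at x = 2

def fuel_weight(x):
    return (x - 5.0) ** 2 + 4.0       # minimized at x = 5

def argmin_grid(f, lo=0.0, hi=8.0, n=8001):
    """Brute-force grid minimizer for a 1-D objective."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(xs, key=f)

for w in [0.0, 0.25, 0.5, 0.75, 1.0]:
    x_opt = argmin_grid(lambda x: w * gross_weight(x) + (1 - w) * fuel_weight(x))
    print(f"w = {w:4.2f}  ->  optimal x = {x_opt:.3f}")
```

The optimum moves smoothly from the fuel-optimal design to the weight-optimal design as w sweeps from 0 to 1, which is the "evolution of an optimum design from one objective function to another" the abstract describes.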

  9. Sensitivity of a global climate model to the specification of convective updraft and downdraft mass fluxes

    NASA Technical Reports Server (NTRS)

    Del Genio, Anthony D.; Yao, Mao-Sung

    1988-01-01

    The response of the GISS global climate model to different parameterizations of moist convective mass flux is studied. A control run with arbitrarily specified updraft mass flux is compared to experiments predicting cumulus mass flux on the basis of low-level convergence, convergence plus surface evaporation, or convergence and evaporation modified by varying boundary layer height. Also, an experiment that includes a simple parameterization of saturated convective-scale downdrafts is discussed. It is found that the model correctly simulates the correlation between deep convection strength and tropical sea surface temperature in each experiment, with the parameterization of cumulus mass flux having little effect. The implications of the experiments for cloud effects on climate sensitivity are examined.

  11. A Sensitivity Study of the Thermodynamic Environment on GFDL Model Hurricane Intensity: Implications for Global Warming.

    NASA Astrophysics Data System (ADS)

    Shen, Weixing; Tuleya, Robert E.; Ginis, Isaac

    2000-01-01

    In this study, the effect of thermodynamic environmental changes on hurricane intensity is extensively investigated with the National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory hurricane model for a suite of experiments with different initial upper-tropospheric temperature anomalies up to ±4°C and sea surface temperatures ranging from 26° to 31°C given the same relative humidity profile. The results indicate that stabilization in the environmental atmosphere and sea surface temperature (SST) increase cause opposing effects on hurricane intensity. The offsetting relationship between the effects of atmospheric stability increase (decrease) and SST increase (decrease) is monotonic and systematic in the parameter space. This implies that hurricane intensity increase due to a possible global warming associated with increased CO2 is considerably smaller than that expected from warming of the oceanic waters alone. The results also indicate that the intensity of stronger (weaker) hurricanes is more (less) sensitive to atmospheric stability and SST changes. The model-attained hurricane intensity is found to be well correlated with the maximum surface evaporation and the large-scale environmental convective available potential energy. The model-attained hurricane intensity is highly correlated with the energy available from wet-adiabatic ascent near the eyewall relative to a reference sounding in the undisturbed environment for all the experiments. Coupled hurricane-ocean experiments show that hurricane intensity becomes less sensitive to atmospheric stability and SST changes since the ocean coupling causes larger (smaller) intensity reduction for stronger (weaker) hurricanes. This implies less increase of hurricane intensity related to a possible global warming due to increased CO2.

  12. A sensitivity study of the thermodynamic environment on GFDL model hurricane intensity: Implications for global warming

    SciTech Connect

    Shen, W.; Tuleya, R.E.; Ginis, I.

    2000-01-01

    In this study, the effect of thermodynamic environmental changes on hurricane intensity is extensively investigated with the National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory hurricane model for a suite of experiments with different initial upper-tropospheric temperature anomalies up to ±4°C and sea surface temperatures ranging from 26° to 31°C given the same relative humidity profile. The results indicate that stabilization in the environmental atmosphere and sea surface temperature (SST) increase cause opposing effects on hurricane intensity. The offsetting relationship between the effects of atmospheric stability increase (decrease) and SST increase (decrease) is monotonic and systematic in the parameter space. This implies that hurricane intensity increase due to a possible global warming associated with increased CO2 is considerably smaller than that expected from warming of the oceanic waters alone. The results also indicate that the intensity of stronger (weaker) hurricanes is more (less) sensitive to atmospheric stability and SST changes. The model-attained hurricane intensity is found to be well correlated with the maximum surface evaporation and the large-scale environmental convective available potential energy. The model-attained hurricane intensity is highly correlated with the energy available from wet-adiabatic ascent near the eyewall relative to a reference sounding in the undisturbed environment for all the experiments. Coupled hurricane-ocean experiments show that hurricane intensity becomes less sensitive to atmospheric stability and SST changes since the ocean coupling causes larger (smaller) intensity reduction for stronger (weaker) hurricanes. This implies less increase of hurricane intensity related to a possible global warming due to increased CO2.

  13. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which input data most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters, but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a development of the model outputs in a
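    The two kinds of indices contrasted above can be illustrated on a toy model with an interaction term (the model, names, and distributions are assumptions for illustration, unrelated to the ice sheet model): a local index from a finite difference at a nominal point, and a global first-order variance-based (Sobol-type) index estimated by brute-force Monte Carlo.

```python
# Local vs. global sensitivity indices on a toy nonlinear model.
import random
random.seed(0)

def model(a, b):
    return a + 0.5 * a * b     # interaction makes local and global differ

# Local sensitivity index: derivative at a nominal point (linearization).
a0, b0, h = 1.0, 1.0, 1e-5
local_a = (model(a0 + h, b0) - model(a0 - h, b0)) / (2 * h)

# Global first-order index for a: S_a = Var_a(E[f | a]) / Var(f),
# with a, b ~ Uniform(0, 1), by nested (brute-force) Monte Carlo.
N, M = 2000, 200
def mean(xs): return sum(xs) / len(xs)
def var(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

samples = [model(random.random(), random.random()) for _ in range(N * 2)]
total_var = var(samples)
cond_means = []
for _ in range(N):
    a = random.random()
    cond_means.append(mean([model(a, random.random()) for _ in range(M)]))
S_a = var(cond_means) / total_var
print(f"local index for a: {local_a:.3f},  global Sobol S_a ~ {S_a:.2f}")
```

The nested loop is exactly the "computationally very demanding" Monte Carlo cost the abstract mentions: N * M model runs for a single first-order index.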

  14. Global Analysis of Aerosol Properties Above Clouds

    NASA Technical Reports Server (NTRS)

    Waquet, F.; Peers, F.; Ducos, F.; Goloub, P.; Platnick, S. E.; Riedi, J.; Tanre, D.; Thieuleux, F.

    2013-01-01

    The seasonal and spatial variability of Aerosol Above Cloud (AAC) properties is derived from passive satellite data for the year 2008. A significant amount of aerosol is transported above liquid water clouds on the global scale. For particles in the fine mode (i.e., radius smaller than 0.3 μm), including both clear-sky and AAC retrievals increases the global mean aerosol optical thickness by 25% (±6%). The two main regions with man-made AAC are the tropical Southeast Atlantic, for biomass burning aerosols, and the North Pacific, mainly for pollutants. Man-made AAC are also detected over the Arctic during spring. Mineral dust particles are detected above clouds within the so-called dust belt region (5-40°N). AAC may cause a warming effect and bias the retrieval of cloud properties. This study will thus help to better quantify the impacts of aerosols on clouds and climate.

  15. Sensitivity of a global climate model to the critical Richardson number in the boundary layer parameterization

    SciTech Connect

    Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; Li, Dan

    2015-04-27

    The critical bulk Richardson number (Ricr) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM), to Ricr. The results show that the simulated global average of PBL height increases nearly linearly with Ricr, with a change of about 114 m for a change of 0.5 in Ricr. The surface sensible (latent) heat flux decreases (increases) as Ricr increases. The influence of Ricr on surface air temperature and specific humidity is not significant. Increasing Ricr may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ricr affect stratiform and convective precipitation differently. Increasing Ricr leads to an increase in stratiform precipitation but a decrease in convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and the Middle East.

  16. Sensitivity of a global climate model to the critical Richardson number in the boundary layer parameterization

    DOE PAGES

    Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; ...

    2015-04-27

    The critical bulk Richardson number (Ricr) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model (BCC_AGCM), to Ricr. The results show that the simulated global average of PBL height increases nearly linearly with Ricr, with a change of about 114 m for a change of 0.5 in Ricr. The surface sensible (latent) heat flux decreases (increases) as Ricr increases. The influence of Ricr on surface air temperature and specific humidity is not significant. Increasing Ricr may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ricr affect stratiform and convective precipitation differently. Increasing Ricr leads to an increase in stratiform precipitation but a decrease in convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and the Middle East.

  17. Sensitivity of global ocean heat content from reanalyses to the atmospheric reanalysis forcing: A comparative study

    NASA Astrophysics Data System (ADS)

    Storto, Andrea; Yang, Chunxue; Masina, Simona

    2016-05-01

    The global ocean heat content evolution is a key component of the Earth's energy budget and can be consistently determined by ocean reanalyses that assimilate hydrographic profiles. This work investigates the impact of the atmospheric reanalysis forcing through a multiforcing ensemble ocean reanalysis, where the ensemble members are forced by five state-of-the-art atmospheric reanalyses during the meteorological satellite era (1979-2013). Data assimilation leads the ensemble to converge toward robust estimates of ocean warming rates and significantly reduces the spread (1.48 ± 0.18 W/m², per unit area of the World Ocean); hence, the impact of the atmospheric forcing appears only marginal for the global heat content estimates in both upper and deeper oceans. A sensitivity assessment performed through realistic perturbation of the main sources of uncertainty in ocean reanalyses highlights that bias correction and preprocessing of in situ observations represent the most crucial component of the reanalysis, whose perturbation accounts for up to 60% of the ocean heat content anomaly variability in the pre-Argo period. Although these results may depend on the single reanalysis system used, they reveal useful information for the ocean observation community and for the optimal generation of perturbations in ocean ensemble systems.

  18. Change in global temperature: A statistical analysis

    SciTech Connect

    Richards, G.R.

    1993-03-01

    This paper investigates several issues relating to global climatic change using statistical techniques that impose minimal restrictions on the data. The main findings are as follows: (1) The global temperature increase since the last century is a systematic development. (2) Short-term variations in temperature do not have long-lasting effects on the final realizations of the series over time; stochastic perturbations dissipate and temperature reverts to trend. (3) Multivariate tests for causality demonstrate that atmospheric CO2 is a significant forcing factor. The implied change in temperature with respect to a doubling of atmospheric CO2 lies in a range of 2.17° to 2.57°C, with a mean value of 2.34°C. The contributions of solar irradiance and volcanic loading are much smaller. (4) In a multivariate system, shocks to forcing factors generate stochastic cycles in temperature comparable to the results from unforced simulations of climatological models. (5) Extrapolation of regression equations predicts changes in global temperature that are marginally lower than the results from climatological simulation models.

  19. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for the computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  20. A simple-physics global circulation model for Venus: Sensitivity assessments of atmospheric superrotation

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J. L.; Young, R. E.; Schubert, G.; Covey, C.; Grossman, A. S.

    2007-03-01

    A 3D global circulation model is adapted to the atmosphere of Venus to explore the nature of the planet's atmospheric superrotation. The model employs the full meteorological primitive equations and simplified forms for diabatic and other nonconservative forcings. It is therefore economical for performing very long simulations. To assess circulation equilibration and the occurrence of atmospheric superrotation, the climate model is run for 10,000-20,000 day integrations at 4° × 5° latitude-longitude horizontal resolution, and 56 vertical levels (denoted L56). The sensitivity of these simulations to imposed Venus-like diabatic heating rates, momentum dissipation rates, and various other key parameters (e.g., near-surface momentum drag), in addition to model configuration (e.g., low versus high vertical domain and number of atmospheric levels), is examined. We find equatorial superrotation in several of our numerical experiments, but the magnitude of superrotation is often less than observed. Further, the meridional structure of the mean zonal overturning (i.e., Hadley circulation) can consist of numerous cells which are symmetric about the equator and whose depth scale appears sensitive to the number of vertical layers imposed in the model atmosphere. We find that when realistic diabatic heating is imposed in the lowest several scale heights, only extremely weak atmospheric superrotation results.

  1. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
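    The complex-variable (complex-step) technique the abstract refers to can be sketched in a few lines; the test function below is a common benchmark for the method, not the ARW-2 aero-structural code:

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-30):
        """Step-size-independent first derivative via the complex-step method.

        Unlike finite differences, there is no subtractive cancellation,
        so h can be made arbitrarily small without losing accuracy.
        """
        return f(complex(x, h)).imag / h

    # Classic test function for the complex-step method (Squire & Trapp).
    f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    d = complex_step_derivative(f, 1.5)
    ```

    Because the derivative is extracted from the imaginary part rather than a difference of nearly equal numbers, the trial-and-error step-size search that plagues finite differences disappears, which is the step-size independence claimed above.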

  2. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  3. Global inventory of methane clathrate: sensitivity to changes in the deep ocean

    NASA Astrophysics Data System (ADS)

    Buffett, Bruce; Archer, David

    2004-11-01

    We present a mechanistic model for the distribution of methane clathrate in marine sediments, and use it to predict the sensitivity of the steady-state methane inventory to changes in the deep ocean. The methane inventory is determined by binning the seafloor area according to water depth, temperature, and O2 concentration. Organic carbon rain to the seafloor is treated as a simple function of water depth, and carbon burial for each bin is estimated using a sediment diagenesis model called Muds [Glob. Biogeochem. Cycles 16 (2002)]. The predicted concentration of organic carbon is fed into a clathrate model [J. Geophys. Res. 108 (2003)] to calculate steady-state profiles of dissolved, frozen, and gaseous methane. We estimate the amount of methane in ocean sediments by multiplying the sediment column inventories by the corresponding binned seafloor areas. Our estimate of the methane inventory is sensitive to the efficiency of methane production from organic matter and to the rate of fluid flow within the sediment column. Preferred values for these parameters are taken from previous studies of both passive and active margins, yielding a global estimate of 3×10¹⁸ g of carbon (3000 Gton C) in clathrate and 2×10¹⁸ g (2000 Gton C) in methane bubbles. The predicted methane inventory decreases by 85% in response to 3 °C of warming. Conversely, the methane inventory increases by a factor of 2 if the O2 concentration of the deep ocean decreases by 40 μM or carbon rain increases by 50% (due to an increase in primary production). Changes in sea level have a small effect. We use these sensitivities to assess the past and future state of the methane clathrate reservoir.

  4. Is equilibrium climate sensitivity the best predictor for future global warming? (Invited)

    NASA Astrophysics Data System (ADS)

    Rypdal, M.; Rypdal, K.

    2013-12-01

    When the climate system is subject to radiative forcing, the planet is brought out of radiative balance, and the thermal inertia of the planet makes the surface temperature lag behind the forcing. The time constant, which is the time for relaxation to a new equilibrium after a sudden change in forcing, has been considered an important parameter to determine. The equilibrium climate sensitivity Seq, the temperature rise per unit forcing after relaxation is complete, is another. In the industrialized epoch a major source of the present energy imbalance is the steady increase in anthropogenic forcing. If the climate system can be modeled as a hierarchy of interacting subsystems with increasing heat capacities and response times, there will also be a hierarchy of climate sensitivities. One way of modeling this feature is to replace the standard exponentially decaying impulse-response function with one that is scale free, i.e., decaying like a power law. For a climate system subject only to random forcing modeled as white Gaussian noise, the resulting climate variable is then a long-memory fractional Gaussian noise with a scale-free power spectral density. The final response to a step increase in the forcing is infinite for such a perfectly scale-free response function, since the response to an increase in the forcing will never saturate. This is of course unphysical, but rather than invalidating the scale-free response model it suggests the introduction of a frequency-dependent climate sensitivity S(f). Even in the exponential response model the amplitude response to an oscillation vanishes for high frequencies, but converges to Seq in the limit of low frequencies f. In the scale-free response model S(f) diverges in the low-frequency limit. We demonstrate that long-memory responses can explain important aspects of Northern Hemisphere temperature variability over the last millennium and lead to new predictions of how much more warming there will be 'in

  5. Auditory Frequency Sensitivity in the Neonate: A Signal Detection Analysis

    ERIC Educational Resources Information Center

    Weir, C.

    1976-01-01

    Using signal detectability theory, an analysis was performed on auditory frequency sensitivity data obtained by Hutt et al. (1968) on human neonates. Reanalysis using 12 male infants confirms the superiority of lower frequencies and square waves in provoking startles in neonates. No state-of-arousal effects on sensitivity were found. (JH)

  6. "Competing Conceptions of Globalization" Revisited: Relocating the Tension between World-Systems Analysis and Globalization Analysis

    ERIC Educational Resources Information Center

    Clayton, Thomas

    2004-01-01

    In recent years, many scholars have become fascinated by a contemporary, multidimensional process that has come to be known as "globalization." Globalization originally described economic developments at the world level. More specifically, scholars invoked the concept in reference to the process of global economic integration and the seemingly…

  7. "Competing Conceptions of Globalization" Revisited: Relocating the Tension between World-Systems Analysis and Globalization Analysis

    ERIC Educational Resources Information Center

    Clayton, Thomas

    2004-01-01

    In recent years, many scholars have become fascinated by a contemporary, multidimensional process that has come to be known as "globalization." Globalization originally described economic developments at the world level. More specifically, scholars invoked the concept in reference to the process of global economic integration and the seemingly…

  8. Sensitivity analysis of hybrid thermoelastic techniques

    Treesearch

    W.A. Samad; J.M. Considine

    2017-01-01

    Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...

  9. Sensitivity analysis for electromagnetic topology optimization problems

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Li, Wei; Li, Qing

    2010-06-01

    This paper presents a level-set-based method to design the metal shape in an electromagnetic field such that the incident current flow on the metal surface is minimized or maximized. We represent the interface between free space and the conducting material (solid phase) by the zero-level contour of a higher-dimensional level set function. Only the electrical component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function of the current flow, its distribution can be controlled to some extent. This method paves a new avenue for many electromagnetic applications, such as antennas and metamaterials, whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to the shape change, an integral formulation involving the solutions to both the electric field integral equation and its adjoint equation, is obtained using a variational method and the shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.

  10. Global Hawk: Root Cause Analysis of Projected Unit Cost Growth

    DTIC Science & Technology

    2011-05-01

    2009 (WSARA). This report describes our task analysis and findings. The Global Hawk Program Global Hawk is a family of high-altitude, high-endurance...Document (CDD) • Cost Analysis Requirements Description (CARD) • Test and Evaluation Master Plan (TEMP) • Acquisition Program Baseline (APB...fixed content and completion criteria as defined by the new CDD, CARD, TEMP, and ASR. The four increments shown in the table above reflect the

  11. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  12. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  13. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
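    The Sobol' machinery referred to above can be sketched generically. The snippet below estimates first-order and total-effect indices with the pick-freeze (Saltelli/Jansen) estimators on the Ishigami test function, used here as a stand-in since the VarroaPop model itself is not reproduced in this record:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def model(x):
        # Ishigami test function: a standard sensitivity-analysis benchmark
        # with strong nonlinearity and an x1-x3 interaction (a=7, b=0.1).
        return (np.sin(x[:, 0])
                + 7.0 * np.sin(x[:, 1]) ** 2
                + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

    n, d = 50_000, 3
    A = rng.uniform(-np.pi, np.pi, (n, d))   # base sample
    B = rng.uniform(-np.pi, np.pi, (n, d))   # independent resample
    fA, fB = model(A), model(B)
    var_y = np.var(np.concatenate([fA, fB]))

    S1, ST = [], []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # "pick-freeze": column i from B
        fABi = model(ABi)
        S1.append(np.mean(fB * (fABi - fA)) / var_y)        # first-order index
        ST.append(0.5 * np.mean((fA - fABi) ** 2) / var_y)  # total-effect index
    ```

    For the Ishigami function the analytical values are roughly S1 ≈ (0.31, 0.44, 0.00) and ST3 ≈ 0.24, so a near-zero first-order index combined with a sizable total effect flags a pure interaction, exactly the kind of nonlinear behavior these variance-based indices capture and linear sensitivity methods miss.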

  14. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
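    The trade-off such step-size algorithms navigate can be reproduced in miniature: for a forward difference, truncation error shrinks with h while round-off error grows as h decreases, and a step scaled as the square root of machine epsilon sits near the optimum. This is an illustrative sketch of the dilemma, not the paper's FD algorithm:

    ```python
    import numpy as np

    def forward_diff(f, x, h):
        # First-order forward-difference approximation to f'(x).
        return (f(x + h) - f(x)) / h

    f, df_exact, x = np.sin, np.cos(1.0), 1.0

    eps = np.finfo(float).eps
    h_opt = np.sqrt(eps)  # near-optimal forward-difference step, ~1.5e-8

    errors = {h: abs(forward_diff(f, x, h) - df_exact)
              for h in (1e-2, h_opt, 1e-12)}
    # Too large a step -> truncation error dominates; too small -> round-off
    # noise dominates; the sqrt(eps)-scaled step sits near the error minimum.
    ```

    The paper's contribution is automating this balance for many derivatives at once, rather than finding each step by trial and error as in the earlier swept-wing study.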

  15. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  16. Sensitivity Analysis and Computation for Partial Differential Equations

    DTIC Science & Technology

    2008-03-14

    Example, Journal of Mathematical Analysis and Applications, to appear. 11 [22] John R. Singler, Transition to Turbulence, Small Disturbances, and...Sensitivity Analysis II: The Navier-Stokes Equations, Journal of Mathematical Analysis and Applications, to appear. [23] A. M. Stuart and A. R. Humphries

  17. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  18. Efficient stochastic sensitivity analysis of discrete event systems

    SciTech Connect

    Plyasunov, Sergey. E-mail: teleserg@uclink.berkeley.edu; Arkin, Adam P. E-mail: aparkin@lbl.gov

    2007-02-10

    Sensitivity analysis quantifies the dependence of a system's behavior on the parameters that could possibly affect its dynamics. Calculating the sensitivities of stochastic chemical systems with kinetic Monte Carlo and finite-difference-based methods is not only computationally intensive; direct finite-difference estimates based on parameter perturbations also converge very poorly. In this paper we develop an approach to this issue based on the Girsanov measure transformation for jump processes, which smooths the estimate of the sensitivity coefficients and makes the estimation more accurate. We demonstrate the method with simple examples and discuss its appropriate use.

  19. Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

    DTIC Science & Technology

    2009-01-01

    erties, such as log P, would aid in estimating a chemical's environmental fate and toxicology when applied to QSAR modeling. Granted, QSAR models, such...ERDC TR-09-3 Strategic Environmental Research and Development Program Sensitivity Analysis of QSAR Models for Assessing Novel...Environmental Research and Development Program ERDC TR-09-3 January 2009 Sensitivity Analysis of QSAR Models for Assessing Novel Military Compound

  20. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  1. Global Gene Expression Analysis for the Assessment of Nanobiomaterials.

    PubMed

    Hanagata, Nobutaka

    2015-01-01

    Using global gene expression analysis, the effects of biomaterials and nanomaterials can be analyzed at the genetic level. Even though the information obtained from global gene expression analysis can be useful for the evaluation and design of biomaterials and nanomaterials, its use for these purposes is not widespread, owing to the difficulties involved in data analysis. Because the expression data of about 20,000 genes are obtained at once with global gene expression analysis, the data must be analyzed using bioinformatics. A method of bioinformatic analysis called gene ontology analysis can estimate the kinds of changes in cell function caused by genes whose expression level is changed by biomaterials and nanomaterials. Also, by applying a statistical technique called hierarchical clustering to global gene expression data from a variety of biomaterials, the effects of material properties on cell function can be estimated. In this chapter, these theories of analysis and examples of their application to nanomaterials and biomaterials are described. Furthermore, global microRNA analysis, a method that has gained attention in recent years, and its application to nanomaterials are introduced.

  2. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching [Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  3. Sensitivity of contemporary sea level trends in a global ocean state estimate to effects of geothermal fluxes

    NASA Astrophysics Data System (ADS)

    Piecuch, Christopher G.; Heimbach, Patrick; Ponte, Rui M.; Forget, Gaël

    2015-12-01

    Geothermal fluxes constitute a sizable fraction of the present-day Earth net radiative imbalance and corresponding ocean heat uptake. Model simulations of contemporary sea level that impose a geothermal flux boundary condition are becoming increasingly common. To quantify the impact of geothermal fluxes on model estimates of contemporary (1993-2010) sea level changes, two ocean circulation model experiments are compared. The two simulations are based on a global ocean state estimate, produced by the Estimating the Circulation and Climate of the Ocean (ECCO) consortium, and differ only with regard to whether geothermal forcing is applied as a boundary condition. Geothermal forcing raises the global-mean sea level trend by 0.11 mm yr⁻¹ in the perturbation experiment by suppressing a cooling trend present in the baseline solution below 2000 m. The imposed forcing also affects regional sea level trends. The Southern Ocean is particularly sensitive. In this region, anomalous heat redistribution due to geothermal fluxes results in steric height trends of up to ±1 mm yr⁻¹ in the perturbation experiment relative to the baseline simulation. Analysis of a passive tracer experiment suggests that the geothermal input itself is transported by horizontal diffusion, resulting in more thermal expansion over deeper ocean basins. Thermal expansion in the perturbation simulation gives rise to bottom pressure increase over shallower regions and decrease over deeper areas relative to the baseline run, consistent with mass redistribution expected for deep ocean warming. These results elucidate the influence of geothermal fluxes on sea level rise and global heat budgets in model simulations of contemporary ocean circulation and climate.

  4. Analysis and visualization of global magnetospheric processes

    SciTech Connect

    Winske, D.; Mozer, F.S.; Roth, I.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The purpose of this project is to develop new computational and visualization tools to analyze particle dynamics in the Earth's magnetosphere. These tools allow the construction of a global picture of particle fluxes, which requires only a small number of in situ spacecraft measurements as input parameters. The methods developed in this project have led to a better understanding of particle dynamics in the Earth's magnetotail in the presence of turbulent wave fields. They have also been used to demonstrate how large electromagnetic pulses in the solar wind can interact with the magnetosphere to increase the population of energetic particles and even form new radiation belts.

  5. Polyandry in nature: a global analysis.

    PubMed

    Taylor, Michelle L; Price, Tom A R; Wedell, Nina

    2014-07-01

    A popular notion in sexual selection is that females are polyandrous and their offspring are commonly sired by more than a single male. We now have large-scale evidence from natural populations to be able to verify this assumption. Although we concur that polyandry is a generally common and ubiquitous phenomenon, we emphasise that it remains variable. In particular, the persistence of single paternity, both within and between populations, requires more careful consideration. We also explore an intriguing relation of polyandry with latitude. Several recent large-scale analyses of the relations between key population fitness variables, such as heterozygosity, effective population size (Ne), and inbreeding coefficients, make it possible to examine the global effects of polyandry on population fitness for the first time. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. National health expenditures: a global analysis.

    PubMed

    Murray, C J; Govindaraj, R; Musgrove, P

    1994-01-01

    As part of the background research to the World development report 1993: investing in health, an effort was made to estimate public, private and total expenditures on health for all countries of the world. Estimates could be found for public spending for most countries, but for private expenditure in many fewer countries. Regressions were used to predict the missing values for regional and global estimates. These econometric exercises were also used to relate expenditure to measures of health status. In 1990 the world spent an estimated US$ 1.7 trillion (1.7 × 10^12) on health, or $1.9 trillion (1.9 × 10^12) in dollars adjusted for higher purchasing power in poorer countries. This amount was about 60% public and 40% private in origin. However, as incomes rise, public health expenditure tends to displace private spending and to account for an increasing share of incomes devoted to health.

  7. Water Grabbing analysis at global scale

    NASA Astrophysics Data System (ADS)

    Rulli, M.; Saviori, A.; D'Odorico, P.

    2012-12-01

    "Land grabbing" is the acquisition of agricultural land by foreign governments and corporations, a phenomenon that has greatly intensified over the last few years as a result of the increase in food prices and biofuel demand. Land grabbing is inherently associated with an appropriation of freshwater resources that has never been investigated before. Here we provide a global assessment of the total grabbed land and water resources. Using process-based agro-hydrological models we estimate the rates of freshwater grabbing worldwide. We find that this phenomenon is occurring at alarming rates in all continents except Antarctica. The per capita volume of grabbed water often exceeds the water requirements for a balanced diet and would be sufficient to abate malnourishment in the grabbed countries. High rates of water grabbing are often associated with deforestation and the increase in water withdrawals for irrigation.

  8. National health expenditures: a global analysis.

    PubMed Central

    Murray, C. J.; Govindaraj, R.; Musgrove, P.

    1994-01-01

    As part of the background research to the World development report 1993: investing in health, an effort was made to estimate public, private and total expenditures on health for all countries of the world. Estimates could be found for public spending for most countries, but for private expenditure in many fewer countries. Regressions were used to predict the missing values for regional and global estimates. These econometric exercises were also used to relate expenditure to measures of health status. In 1990 the world spent an estimated US$ 1.7 trillion (1.7 × 10^12) on health, or $1.9 trillion (1.9 × 10^12) in dollars adjusted for higher purchasing power in poorer countries. This amount was about 60% public and 40% private in origin. However, as incomes rise, public health expenditure tends to displace private spending and to account for an increasing share of incomes devoted to health. PMID:7923542

  9. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological models.
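    The FAST estimator named above can be sketched in a few lines. The sketch below is a minimal single-output illustration on the Ishigami test function, not the hydrological model of the study; the frequencies, sample size, and number of harmonics are conventional but illustrative choices.

```python
import numpy as np

def fast_first_order(f, omegas, n=4097, harmonics=4):
    """Classic FAST first-order sensitivity indices.

    Each parameter is driven along a periodic search curve with its own
    frequency; the variance of f attributable to parameter i is read off
    the Fourier spectrum of the output at that frequency and its harmonics.
    """
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Search curve: maps s to near-uniform samples of each parameter in [0, 1]
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
    y = f(x)
    jmax = harmonics * max(omegas)
    j = np.arange(1, jmax + 1)
    A = 2.0 / n * np.cos(np.outer(j, s)) @ y
    B = 2.0 / n * np.sin(np.outer(j, s)) @ y
    total = 0.5 * np.sum(A**2 + B**2)          # total variance estimate
    S = []
    for w in omegas:
        idx = np.array([p * w - 1 for p in range(1, harmonics + 1)])
        S.append(0.5 * np.sum(A[idx]**2 + B[idx]**2) / total)
    return np.array(S)

def ishigami(x01):
    # Rescale unit-interval samples to [-pi, pi]
    x = -np.pi + 2.0 * np.pi * x01
    return np.sin(x[0]) + 7.0 * np.sin(x[1])**2 + 0.1 * x[2]**4 * np.sin(x[0])

# Interference-free frequencies for 3 parameters and 4 harmonics
S = fast_first_order(ishigami, [11, 35, 79])
```

    In the temporal variant described above, the same estimator would simply be applied to the ensemble of simulated discharges once per day, yielding a daily time series of first-order indices.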

  10. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features. The first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those of alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
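    The variogram idea behind VARS can be illustrated with a crude directional estimator: perturb one parameter at a time by a step h and average the squared response change, then integrate across scales. The toy response function, step sizes, and sample counts below are invented for illustration; this is not VARS-TOOL's star-based sampling design.

```python
import numpy as np

rng = np.random.default_rng(0)

def directional_variogram(f, dim, i, hs, n=2000):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2] over random base points."""
    out = []
    for h in hs:
        x = rng.random((n, dim))
        x[:, i] *= (1.0 - h)          # keep x_i + h inside [0, 1]
        xp = x.copy()
        xp[:, i] += h
        out.append(0.5 * np.mean((f(xp) - f(x))**2))
    return np.array(out)

def ivars(f, dim, i, H=0.3, nh=10):
    # Integrated variogram across scales [0, H] (trapezoidal rule)
    hs = np.linspace(0.0, H, nh + 1)
    g = directional_variogram(f, dim, i, hs)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(hs))

# Toy response: much steeper in x0 than in x1
f = lambda x: np.sin(6.0 * x[:, 0]) + 0.2 * x[:, 1]
iv = [ivars(f, 2, i) for i in range(2)]
```

    Integrating to different upper scales H is what yields the IVARS family (e.g. small H emphasizes derivative-like local sensitivity, large H approaches variance-like behaviour).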

  11. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.

  12. Global review of open access risk assessment software packages valid for global or continental scale analysis

    NASA Astrophysics Data System (ADS)

    Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan

    2015-04-01

    -defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or to use them as checks on the sensitivities in the analysis. There is potential for valuable synergy between existing software: a number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has attempted to provide a platform for dialogue between all open source and open access software packages and, hopefully, to inspire collaboration between developers, given the great work done by all open access and open source developers.

  13. Global Observations of Cloud-Sensitive Aerosol Loadings in Low Level Marine Clouds

    NASA Astrophysics Data System (ADS)

    Cermak, J.; Andersen, H.; Fuchs, J.; Schwarz, K.

    2016-12-01

    This contribution presents a method to characterize the nonlinear relationship between aerosols and cloud droplets in marine boundary layer clouds based on global MODIS observations. Clouds play a crucial role in the climate system, as their radiative properties and precipitation patterns significantly impact the Earth's energy balance. Cloud properties are determined by environmental conditions, as cloud formation requires the availability of water vapour ("precipitable water") and condensation nuclei in sufficiently saturated conditions. The ways in which aerosols, as condensation nuclei, influence the optical, micro- and macrophysical properties of clouds are among the largest remaining uncertainties in climate-change research. In particular, cloud droplet size is believed to be impacted, and thereby cloud reflectivity, lifetime, and precipitation susceptibility. However, the connection between aerosols and cloud droplets is nonlinear, due to various factors and processes. The impact of aerosols on cloud properties is thought to be strongest at low aerosol loadings, whereas it saturates at high aerosol loadings. Gaining an understanding of the processes that govern low-cloud water properties, in order to increase the accuracy of climate models and of predictions of future changes in the climate system, is thus of great importance. In this study, global Terra MODIS L3 data sets are used to identify at what aerosol loadings cloud droplet size shows the greatest sensitivity to changes in aerosol loading in marine boundary layer clouds. MODIS observations are binned in classes of aerosol loading to identify at which loading the aerosol impact on cloud droplets is strongest and at which loading it saturates. Results are connected to ERA-Interim and MACC data sets to identify connections of detected patterns to meteorology and aerosol species.
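    The binning step described above can be sketched on synthetic data: bin scenes by aerosol loading, average droplet size per bin, and locate the steepest change. The functional form, bin edges, and noise level below are invented for illustration and are not MODIS values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scenes: effective radius responds strongly at low aerosol
# loading and saturates at high loading (illustrative functional form)
aod = rng.uniform(0.01, 1.0, 50_000)
radius = 14.0 - 4.0 * np.log1p(aod / 0.1) / np.log1p(10.0)
radius += rng.normal(0.0, 0.3, aod.size)    # retrieval noise

# Bin by aerosol loading and take the mean droplet radius per bin
edges = np.linspace(0.01, 1.0, 21)
idx = np.digitize(aod, edges) - 1
mean_r = np.array([radius[idx == k].mean() for k in range(20)])
centers = 0.5 * (edges[:-1] + edges[1:])

# Sensitivity = slope of mean radius between adjacent bin centers;
# the most negative slope marks where droplet size is most responsive
slope = np.diff(mean_r) / np.diff(centers)
strongest = centers[np.argmin(slope)]
```

    On this synthetic curve the strongest response sits in the lowest-loading bins and flattens toward high loadings, mimicking the saturation behaviour the study looks for.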

  14. FOCUS - An experimental environment for fault sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.

    1992-01-01

    FOCUS, a simulation environment for conducting fault-sensitivity analysis of chip-level designs, is described. The environment can be used to evaluate alternative design tactics at an early design stage. A range of user specified faults is automatically injected at runtime, and their propagation to the chip I/O pins is measured through the gate and higher levels. A number of techniques for fault-sensitivity analysis are proposed and implemented in the FOCUS environment. These include transient impact assessment on latch, pin and functional errors, external pin error distribution due to in-chip transients, charge-level sensitivity analysis, and error propagation models to depict the dynamic behavior of latch errors. A case study of the impact of transient faults on a microprocessor-based jet-engine controller is used to identify the critical fault propagation paths, the module most sensitive to fault propagation, and the module with the highest potential for causing external errors.

  15. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  16. A study of turbulent flow with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Dwyer, H. A.; Peterson, T.

    1980-07-01

    In this paper a new type of analysis is introduced that can be used in numerical fluid mechanics. The method is known as sensitivity analysis, and it has been widely used in the field of automatic control theory. Sensitivity analysis addresses in a systematic way the question of how the solution to an equation will change due to variations in the equation's parameters and boundary conditions. An important application is turbulent flow, where there exists a large uncertainty in the models used for closure. In the present work the analysis is applied to the three-dimensional planetary boundary layer equations, and sensitivity equations are generated for various parameters in the turbulence model. The solution of these equations with the proper techniques leads to considerable insight into the flow field and its dependence on turbulence parameters. The analysis also allows for unique decompositions of the parameter dependence and is efficient.
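    The sensitivity-equation idea (differentiate the governing equation with respect to a parameter and integrate the result alongside the state) can be shown on a stand-in problem. A logistic ODE is assumed here purely for illustration, not the planetary boundary layer equations of the paper.

```python
import numpy as np

def rhs(y, s, r, K):
    """Augmented system: logistic model plus its sensitivity equation.

    dy/dt = f(y; r) = r*y*(1 - y/K)
    ds/dt = (df/dy)*s + df/dr,   where s = dy/dr
    """
    f = r * y * (1.0 - y / K)
    dfdy = r * (1.0 - 2.0 * y / K)
    dfdr = y * (1.0 - y / K)
    return f, dfdy * s + dfdr

def integrate(r, K=10.0, y0=0.5, T=2.0, n=2000):
    # Forward Euler on the augmented (state + sensitivity) system; note the
    # sensitivity update is exactly the derivative of the discrete scheme
    dt = T / n
    y, s = y0, 0.0
    for _ in range(n):
        f, g = rhs(y, s, r, K)
        y, s = y + dt * f, s + dt * g
    return y, s

r0 = 1.3
_, s_direct = integrate(r0)

# Central finite-difference check on the same discrete trajectory
h = 1e-5
y_plus, _ = integrate(r0 + h)
y_minus, _ = integrate(r0 - h)
s_fd = (y_plus - y_minus) / (2.0 * h)
```

    The appeal over finite differencing, as in the paper, is that one sensitivity equation per parameter reuses the state solution and avoids choosing a perturbation size.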

  17. Sensitivity of Global Modeling Initiative Model Predictions of Antarctic Ozone Recovery to Input Meteorological Fields

    NASA Technical Reports Server (NTRS)

    Considine, David B.; Connell, Peter S.; Bergmann, Daniel J.; Rotman, Douglas A.; Strahan, Susan E.

    2004-01-01

    We use the Global Modeling Initiative chemistry and transport model to simulate the evolution of stratospheric ozone between 1995 and 2030, using boundary conditions consistent with the recent World Meteorological Organization ozone assessment. We compare the Antarctic ozone recovery predictions of two simulations, one driven by an annually repeated year of meteorological data from a general circulation model (GCM), the other using a year of output from a data assimilation system (DAS), to examine the sensitivity of Antarctic ozone recovery predictions to the characteristic dynamical differences between GCM- and DAS-generated meteorological data. Although the age of air in the Antarctic lower stratosphere differs by a factor of 2 between the simulations, we find little sensitivity of the 1995-2030 Antarctic ozone recovery between 350 and 650 K to the differing meteorological fields, particularly when the recovery is specified in mixing ratio units. Percent changes are smaller in the DAS-driven simulation compared to the GCM-driven simulation because of a surplus of Antarctic ozone in the DAS-driven simulation which is not consistent with observations. The peak ozone change between 1995 and 2030 in both simulations is approximately 20% lower than photochemical expectations, indicating that changes in ozone transport due to changing ozone gradients at 450 K between 1995 and 2030 constitute a small negative feedback. Total winter/spring ozone loss during the base year (1995) of both simulations and the rate of ozone loss during August and September are somewhat weaker than observed. This appears to be due to underestimates of Antarctic Cl_y at the 450 K potential temperature level.

  18. Advanced nuclear measurements LDRD -- Sensitivity analysis

    SciTech Connect

    Dreicer, J.S.

    1999-02-01

    This component of the Advanced Nuclear Measurements LDRD-PD has focused on the analysis and methodologies to quantify and characterize existing inventories of weapons and commercial fissile materials, as well as to anticipate future forms and quantities of fissile materials. Historically, domestic safeguards have been applied either to pure, uniform, homogeneous material or to well-characterized materials. The future is different. Simplistically, measurement challenges will be associated with the materials recovered from dismantled nuclear weapons in the US and Russia subject to disposition, with the residues and wastes left over from the weapons production process, and with the existing and growing inventory of materials in commercial/civilian programs. Nuclear measurement issues for the fissile materials coming from these sources are associated with homogeneity, purity, and matrix effects. Specifically, these difficult-to-measure fissile materials are heterogeneous, impure, and embedded in highly shielding, non-uniform matrices. Currently, each of these effects creates problems for radiation-based assay, and it is impossible to measure material that exhibits a combination of all these effects. Nuclear materials control and measurement is a dynamic problem requiring a predictive capability. This component has been tasked with helping select which future problems are the most important to target. Accomplishments during the last year include: characterization of weapons waste fissile materials, identification of measurement problem areas, definition of instrument requirements, and characterization of commercial fissile materials. A discussion of accomplishments in each of these areas is presented.

  19. The resolution sensitivity of the South Asian monsoon and Indo-Pacific in a global 0.35° AGCM

    NASA Astrophysics Data System (ADS)

    Johnson, Stephanie J.; Levine, Richard C.; Turner, Andrew G.; Martin, Gill M.; Woolnough, Steven J.; Schiemann, Reinhard; Mizielinski, Matthew S.; Roberts, Malcolm J.; Vidale, Pier Luigi; Demory, Marie-Estelle; Strachan, Jane

    2016-02-01

    The South Asian monsoon is one of the most significant manifestations of the seasonal cycle. It directly impacts nearly one third of the world's population and also has substantial global influence. Using 27-year integrations of a high-resolution atmospheric general circulation model (Met Office Unified Model), we study changes in South Asian monsoon precipitation and circulation when horizontal resolution is increased from approximately 200 km to 40 km at the equator (N96-N512, 1.9°-0.35°). The high resolution, integration length and ensemble size of the dataset make this the most extensive dataset used to evaluate the resolution sensitivity of the South Asian monsoon to date. We find a consistent pattern of JJAS precipitation and circulation changes as resolution increases, which include a slight increase in precipitation over peninsular India, changes in Indian and Indochinese orographic rain bands, increasing wind speeds in the Somali Jet, increasing precipitation over the Maritime Continent islands and decreasing precipitation over the northern Maritime Continent seas. To diagnose which resolution-related processes cause these changes, we compare them to published sensitivity experiments that change regional orography and coastlines. Our analysis indicates that improved resolution of the East African Highlands results in the improved representation of the Somali Jet, and further suggests that improved resolution of orography over Indochina and the Maritime Continent results in more precipitation over the Maritime Continent islands at the expense of reduced precipitation further north. We also evaluate the resolution sensitivity of monsoon depressions and lows, which contribute more precipitation over northeast India at higher resolution. We conclude that while increasing resolution at these scales does not solve the many monsoon biases that exist in GCMs, it has a number of small, beneficial impacts.

  20. Global kinetic analysis of seeded BSA aggregation.

    PubMed

    Sahin, Ziya; Demir, Yusuf Kemal; Kayser, Veysel

    2016-04-30

    Accelerated aggregation studies were conducted around the melting temperature (Tm) to elucidate the kinetics of seeded BSA aggregation. Aggregation was tracked by SEC-HPLC and intrinsic fluorescence spectroscopy. Time evolution of monomer, dimer and soluble aggregate concentrations were globally analysed to reliably deduce mechanistic details pertinent to the process. Results showed that BSA aggregated irreversibly through both sequential monomer addition and aggregate-aggregate interactions. Sequential monomer addition proceeded only via non-native monomers, starting to occur only by 1-2°C below the Tm. Aggregate-aggregate interactions were the dominant mechanism below the Tm due to an initial presence of small aggregates that acted as seeds. Aggregate-aggregate interactions were significant also above the Tm, particularly at later stages of aggregation when sequential monomer addition seemed to cease, leading in some cases to insoluble aggregate formation. The adherence (or lack thereof) of the mechanisms to Arrhenius kinetics was discussed alongside possible implications of seeding for biopharmaceutical shelf-life and for spectroscopic data interpretation, the latter of which was found to often be overlooked in BSA aggregation studies. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Sensitivity of agro-environmental zones in Spain to global climatic change

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Guzmán, G.; Vanderlinden, K.; Laguna, A.; Giraldez, J. V.

    2014-12-01

    Soil has a key role in the regulation of carbon, water and nutrient cycles. Traditionally, agricultural soil management was oriented towards optimizing productivity. Nowadays, mitigating climate change effects and maintaining long-term soil quality are equally important. Policy guidelines for best management practices need to be site-specific, given the large spatial variability of environmental conditions within the EU. Therefore, it is necessary to classify the different farming zones that are susceptible to soil degradation. Especially in Mediterranean areas, this variability and its susceptibility to degradation are higher than in other areas of the EU. The objective of this study is therefore to delineate current agro-environmental zones in Spain and to determine the effect of global climate change on this classification in the future. The final objective is to assist policy makers in scenario analysis with respect to soil conservation. Our classification scheme is based on soil, topography and climate (seasonal temperature and rainfall) variables. We calculated slope and elevation from an SRTM-derived DEM, soil texture was extracted from the European Soil Database, and seasonal mean, minimum and maximum precipitation and temperature data were gridded from publicly available weather station data (Aemet). Global change scenarios are average downscaled ensemble predictions for the emission scenarios A2 and B2. The k-means method was used for classification of the 10 km x 10 km gridded variables. Using the above input variables, the optimal number of agro-environmental zones obtained is 8. The classification corresponds well with the observed distribution of farming typologies in Spain. The advantage of this method is that it is simple and objective and uses only readily available, public data. As such, its extrapolation to other countries of the EU is straightforward.
Finally, it presents a tool for policy makers to assess
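    The k-means classification step can be sketched as follows. This is a minimal Lloyd's-algorithm illustration on synthetic two-variable grid data; the variables, values, and two-zone setup are invented for the example, not the study's 10 km grids or its eight zones.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, iters=50):
    """Minimal Lloyd's algorithm: assign to nearest centroid, update means."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = []
        for j in range(k):
            pts = X[labels == j]
            # Keep the old centroid if a cluster happens to empty out
            new.append(pts.mean(axis=0) if len(pts) else centroids[j])
        centroids = np.array(new)
    return labels, centroids

# Synthetic grid cells: (mean temperature degC, annual rainfall mm),
# drawn from two well-separated agro-climatic zones
cool_wet = rng.normal([10.0, 900.0], [1.0, 50.0], (200, 2))
warm_dry = rng.normal([18.0, 350.0], [1.0, 50.0], (200, 2))
X = np.vstack([cool_wet, warm_dry])
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize before clustering
labels, _ = kmeans(X, k=2)
```

    Standardization matters here: without it, rainfall (hundreds of mm) would dominate the Euclidean distance and temperature would barely influence the zones.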

  2. Global synthesis of the temperature sensitivity of leaf litter breakdown in streams and rivers.

    PubMed

    Follstad Shah, Jennifer J; Kominoski, John S; Ardón, Marcelo; Dodds, Walter K; Gessner, Mark O; Griffiths, Natalie A; Hawkins, Charles P; Johnson, Sherri L; Lecerf, Antoine; LeRoy, Carri J; Manning, David W P; Rosemond, Amy D; Sinsabaugh, Robert L; Swan, Christopher M; Webster, Jackson R; Zeglin, Lydia H

    2016-12-31

    Streams and rivers are important conduits of terrestrially derived carbon (C) to atmospheric and marine reservoirs. Leaf litter breakdown rates are expected to increase as water temperatures rise in response to climate change. The magnitude of increase in breakdown rates is uncertain, given differences in litter quality and microbial and detritivore community responses to temperature, factors that can influence the apparent temperature sensitivity of breakdown and the relative proportion of C lost to the atmosphere vs. stored or transported downstream. Here, we synthesized 1025 records of litter breakdown in streams and rivers to quantify its temperature sensitivity, as measured by the activation energy (Ea, in eV). Temperature sensitivity of litter breakdown varied among the twelve plant genera for which Ea could be calculated. Higher values of Ea were correlated with lower-quality litter, but these correlations were influenced by a single, N-fixing genus (Alnus). Ea values converged when genera were classified into three breakdown rate categories, potentially due to continual water availability in streams and rivers modulating the influence of leaf chemistry on breakdown. Across all data, representing 85 plant genera, the Ea was 0.34 ± 0.04 eV, or approximately half the value (0.65 eV) predicted by metabolic theory. Our results indicate that average breakdown rates may increase by 5-21% with a 1-4 °C rise in water temperature, rather than the 10-45% increase expected according to metabolic theory. Differential warming of tropical and temperate biomes could result in a similar proportional increase in breakdown rates, despite variation in Ea values for these regions (0.75 ± 0.13 eV and 0.27 ± 0.05 eV, respectively). The relative proportions of gaseous C loss and organic matter transport downstream should not change with rising temperature, given that Ea values for breakdown mediated by microbes alone and by microbes plus detritivores were similar at the global scale.
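    The 5-21% and 10-45% figures follow directly from the Arrhenius/Boltzmann scaling used in metabolic theory, k ∝ exp(-Ea / (kB·T)). A quick check with the synthesis's Ea values (0.34 eV empirical, 0.65 eV theoretical); the 10 °C reference water temperature is our assumption for the illustration:

```python
import math

KB = 8.617333e-5            # Boltzmann constant, eV/K

def rate_increase(ea_ev, t_ref_c, dt_c):
    """Fractional increase in breakdown rate for a dt_c warming,
    assuming Arrhenius scaling k ~ exp(-Ea / (kB * T))."""
    t1 = t_ref_c + 273.15
    t2 = t1 + dt_c
    return math.exp(ea_ev / KB * (1.0 / t1 - 1.0 / t2)) - 1.0

# Empirical Ea from the synthesis (0.34 eV) vs metabolic theory (0.65 eV),
# as percent increases for a 1 degC and a 4 degC warming
obs = [100 * rate_increase(0.34, 10.0, d) for d in (1, 4)]
mte = [100 * rate_increase(0.65, 10.0, d) for d in (1, 4)]
```

    With these inputs `obs` reproduces the quoted ~5-21% range and `mte` the ~10-45% range, confirming the abstract's arithmetic.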

  3. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model, which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  4. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, which is the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems at supersonic Mach numbers. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite difference approach.
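    The costate (adjoint) construction has a compact discrete analogue. The sketch below assumes a toy linear state equation A(q)x = b and objective J = cᵀx (all matrices invented stand-ins, not the Euler equations): one adjoint solve yields the gradient with respect to every design parameter at once, verified here against finite differences.

```python
import numpy as np

rng = np.random.default_rng(3)

n, m = 6, 4                                    # state size, number of parameters
A0 = rng.normal(size=(n, n)) + n * np.eye(n)   # well-conditioned base matrix
Aq = [rng.normal(size=(n, n)) for _ in range(m)]
b = rng.normal(size=n)
c = rng.normal(size=n)

def solve_state(q):
    A = A0 + sum(qi * Ai for qi, Ai in zip(q, Aq))
    return A, np.linalg.solve(A, b)

q = 0.1 * rng.normal(size=m)
A, x = solve_state(q)

# Adjoint (costate) equation: A^T lam = c -- one extra solve, independent of m
lam = np.linalg.solve(A.T, c)
# dJ/dq_i = -lam^T (dA/dq_i) x, since A x = b with b fixed
grad = np.array([-lam @ Ai @ x for Ai in Aq])

# Central finite differences for verification (2*m extra state solves)
h = 1e-6
fd = np.empty(m)
for i in range(m):
    qp, qm = q.copy(), q.copy()
    qp[i] += h; qm[i] -= h
    fd[i] = (c @ solve_state(qp)[1] - c @ solve_state(qm)[1]) / (2 * h)
```

    The cost contrast mirrors the paper's point: the adjoint route scales with the number of functionals, not the number of design parameters.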

  5. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing the results obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that, using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
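    The LU-reuse argument can be made concrete on a toy linear system (the matrices below are random stand-ins for a flow-solver Jacobian, and the parameter derivatives dA/dq_i are assumed given): factor once for the nominal solve, then each geometry sensitivity dx/dq_i = -A⁻¹(dA/dq_i)x costs only cheap triangular solves.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)

n, m = 50, 5                      # state size, number of design parameters
A = rng.normal(size=(n, n)) + n * np.eye(n)      # well-conditioned stand-in
dA = [rng.normal(size=(n, n)) for _ in range(m)] # dA/dq_i, with b held fixed
b = rng.normal(size=n)

# Factor once for the nominal solve...
lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)

# ...then each sensitivity dx/dq_i = -A^{-1} (dA/dq_i) x reuses the saved
# factors: one triangular solve per parameter, no refactorization
sens = [lu_solve(lu_piv, -dAi @ x) for dAi in dA]

# Central finite-difference check on the first parameter (two full solves)
h = 1e-6
x_p = np.linalg.solve(A + h * dA[0], b)
x_m = np.linalg.solve(A - h * dA[0], b)
fd = (x_p - x_m) / (2 * h)
```

    The finite-difference route refactorizes the matrix for every perturbed geometry, which is exactly the O(n³) work the saved LU factors avoid.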

  6. Sensitivity analysis of small circular cylinders as wake control

    NASA Astrophysics Data System (ADS)

    Meneghini, Julio; Patino, Gustavo; Gioria, Rafael

    2016-11-01

    We apply a sensitivity analysis with respect to a steady external force to control vortex shedding from a circular cylinder using active and passive small control cylinders. We evaluate the changes produced by the device on the flow near the primary instability, the transition to a wake. By means of sensitivity analysis we numerically predict the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by a sensitivity analysis supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. The analysis also provided the positions of combined passive control cylinders that suppress the wake; these particular positions for the devices are adequate to suppress wake unsteadiness. In both cases the results agree very well with previously published experiments on control devices.

  7. Development and sensitivity analysis of a fully kinetic model of sequential reductive dechlorination in groundwater.

    PubMed

    Malaguerra, Flavio; Chambon, Julie C; Bjerg, Poul L; Scheutz, Charlotte; Binning, Philip J

    2011-10-01

    A fully kinetic biogeochemical model of sequential reductive dechlorination (SERD) occurring in conjunction with lactate and propionate fermentation, iron reduction, sulfate reduction, and methanogenesis was developed. Production and consumption of molecular hydrogen (H(2)) by microorganisms were modeled using modified Michaelis-Menten kinetics and implemented in the geochemical code PHREEQC. The model was calibrated with a Shuffled Complex Evolution Metropolis algorithm against observations of chlorinated solvents, organic acids, and H(2) concentrations in laboratory batch experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most important factors influencing SERD. The sensitivity analysis also suggests that it is not possible to simplify the model description if all system behaviors are to be well described.
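
    The H(2)-limited, modified Michaelis-Menten kinetics can be illustrated with a minimal two-state sketch. All rate constants and the H(2) threshold below are invented for illustration (the calibrated values are in the paper): dechlorination is Michaelis-Menten in TCE, multiplied by an H(2) term that shuts off below a threshold, with H(2) replenished by fermentation.

```python
from scipy.integrate import solve_ivp

# Hypothetical constants for illustration only, not the paper's calibrated values.
k_max, K_s, K_H2, H2_thr = 1.0, 0.05, 1e-4, 1e-6
ferm_rate = 1e-5                      # H2 supplied by fermentation

def serd(t, y):
    """TCE consumed by H2-limited, modified Michaelis-Menten kinetics."""
    tce, h2 = y
    h2_eff = max(h2 - H2_thr, 0.0)    # no dechlorination below the H2 threshold
    r = k_max * tce / (K_s + tce) * h2_eff / (K_H2 + h2_eff)
    return [-r, -0.5 * r + ferm_rate]

sol = solve_ivp(serd, (0.0, 50.0), [0.2, 1e-3], method="LSODA")
print(sol.y[0, -1] < sol.y[0, 0])     # TCE declines over the simulation
```

    After an initial fast phase, H(2) settles just above the threshold and the dechlorination rate becomes fermentation-limited, which is the qualitative behavior the sensitivity result (fermentation kinetics dominating) reflects.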

  8. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    ...of images to determine egomotion and to extract information from the scene. Research in motion analysis has been focussed on the problems of

  9. Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident

    SciTech Connect

    Pasichnyk, I.; Perin, Y.; Velkov, K.

    2013-07-01

    The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)
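
    GRS-style uncertainty analyses with SUSA typically choose the number of code runs from Wilks' formula for nonparametric tolerance limits, which is why 378 uncertain parameters do not force thousands of runs; a minimal sketch of the one-sided, first-order case (the specific run count used in this study is not stated in the abstract):

```python
import math

def wilks_n(gamma=0.95, beta=0.95):
    """Smallest n such that the maximum of n random runs bounds the
    gamma-quantile of the output with confidence beta (one-sided, order 1):
    solve 1 - gamma**n >= beta for n."""
    return math.ceil(math.log(1.0 - beta) / math.log(gamma))

print(wilks_n())  # -> 59 runs for the usual 95%/95% statement
```

    The sample size depends only on the probability content and confidence requested, not on the number of uncertain input parameters.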

  10. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the effect of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
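
    The PRCC recipe described above (rank-transform, partial out the other inputs, correlate the residuals) can be sketched directly; the test function and sample size here are invented stand-ins for the IMM condition counts and outputs:

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    R = np.column_stack([rankdata(col) for col in X.T])   # ranked inputs
    ry = rankdata(y)                                      # ranked output
    coeffs = []
    for j in range(R.shape[1]):
        A = np.column_stack([np.delete(R, j, axis=1), np.ones(len(ry))])
        # Residuals after removing the linear effect of the other ranked inputs
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5 * X[:, 0] ** 3 + 0.1 * X[:, 1]      # x0 dominates, x2 is irrelevant
s = prcc(X, y)
print(abs(s[0]) > abs(s[2]))              # x0 ranks as most influential
```

    Because only ranks enter, the nonlinear (cubic) response does not break the measure, which is the point of using PRCC on a nonlinear model like the IMM.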

  11. Analysis of phase sensitivity for binary computer-generated holograms.

    PubMed

    Chang, Yu-Chun; Zhou, Ping; Burge, James H

    2006-06-20

    A binary diffraction model is introduced to study the sensitivity of the wavefront phase of binary computer-generated holograms to groove depth and duty-cycle variations. Analytical solutions to diffraction efficiency, diffracted wavefront phase functions, and wavefront sensitivity functions are derived. The derivation of these relationships is obtained by using the Fourier method. Results from experimental data confirm the analysis. Several phase anomalies were discovered, and a simple graphical model of the complex fields is applied to explain these phenomena.
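
    The Fourier method referred to gives closed-form order amplitudes for a two-level phase grating; a scalar-model sketch (unit period, with duty cycle and phase depth as the two parameters whose variations are studied):

```python
import numpy as np

def binary_order(m, duty, phase):
    """Complex amplitude of diffraction order m of a binary phase grating
    (Fourier-series model; duty = groove fraction, phase = groove phase depth)."""
    if m == 0:
        return duty * np.exp(1j * phase) + (1 - duty)
    sinc = np.sinc(m * duty)            # numpy sinc is sin(pi x)/(pi x)
    return (np.exp(1j * phase) - 1) * duty * sinc * np.exp(-1j * np.pi * m * duty)

# Ideal 50% duty cycle, pi phase depth: first-order efficiency (2/pi)^2 ~ 0.405
eta1 = abs(binary_order(1, 0.5, np.pi)) ** 2
# Wavefront-phase sensitivity to a 1% duty-cycle error in first order
dphi = np.angle(binary_order(1, 0.51, np.pi)) - np.angle(binary_order(1, 0.5, np.pi))
print(round(eta1, 3), round(dphi, 4))
```

    Sweeping duty and phase through such expressions is how the efficiency and wavefront-sensitivity functions of the paper are obtained; the discontinuities of the complex amplitude through zero are where the reported phase anomalies appear.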

  12. Dynamic Bayesian sensitivity analysis of a myocardial metabolic model.

    PubMed

    Calvetti, D; Hageman, R; Occhipinti, R; Somersalo, E

    2008-03-01

    Dynamic compartmentalized metabolic models are identified by a large number of parameters, several of which are either non-physical or extremely difficult to measure. Typically, the available data and prior information are insufficient to fully identify the system. Since the models are used to predict the behavior of unobserved quantities, it is important to understand how sensitive the output of the system is to perturbations in the poorly identifiable parameters. Classically, it is the goal of sensitivity analysis to assess how much the output changes as a function of the parameters. In the case of dynamic models, the output is a function of time and therefore its sensitivity is a time dependent function. If the output is a differentiable function of the parameters, the sensitivity at one time instance can be computed from its partial derivatives with respect to the parameters. The time course of these partial derivatives describes how the sensitivity varies in time. When the model is not uniquely identifiable, or if the solution of the parameter identification problem is known only approximately, we may have not one, but a distribution of possible parameter values. This is always the case when the parameter identification problem is solved in a statistical framework. In that setting, the proper way to perform sensitivity analysis is to not rely on the values of the sensitivity functions corresponding to a single model, but to consider the distributed nature of the sensitivity functions, inherited from the distribution of the vector of the model parameters. In this paper we propose a methodology for analyzing the sensitivity of dynamic metabolic models which takes into account the variability of the sensitivity over time and across a sample. More specifically, we draw a representative sample from the posterior density of the vector of model parameters, viewed as a random variable. To interpret the output of this doubly varying sensitivity analysis, we propose
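
    The "doubly varying" idea (sensitivity varying over time and across a parameter sample) can be sketched on a toy one-compartment model whose sensitivity function is known in closed form; the Gaussian sample below is an invented stand-in for a real posterior sample:

```python
import numpy as np

# Toy model y(t) = exp(-k t); its sensitivity to k is dy/dk = -t * exp(-k t).
t = np.linspace(0.0, 5.0, 101)
rng = np.random.default_rng(1)
k_sample = rng.normal(1.0, 0.2, size=1000)     # stand-in posterior sample of k

sens = np.array([-ti * np.exp(-k_sample * ti) for ti in t]).T   # (sample, time)
lo, hi = np.percentile(sens, [5, 95], axis=0)  # pointwise 90% sensitivity band
peak_t = t[np.argmin(np.median(sens, axis=0))]
print(round(float(peak_t), 2))                 # largest |sensitivity| near t = 1/k
```

    Instead of one sensitivity curve there is a band at every time point, and summaries such as the band width or the time of peak sensitivity become distributed quantities.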

  13. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to the global air quality. The GEO-CAPE OSSE team at Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  14. Sensitivity of Surface Air Quality and Global Mortality to Global, Regional, and Sectoral Black Carbon Emission Reductions

    NASA Astrophysics Data System (ADS)

    Anenberg, S.; Talgo, K.; Dolwick, P.; Jang, C.; Arunachalam, S.; West, J.

    2010-12-01

    Black carbon (BC), a component of fine particulate matter (PM2.5) released during incomplete combustion, is associated with atmospheric warming and deleterious health impacts, including premature cardiopulmonary and lung cancer mortality. A growing body of literature suggests that controlling emissions may therefore have dual benefits for climate and health. Several studies have focused on quantifying the potential impacts of reducing BC emissions from various world regions and economic sectors on radiative forcing. However, the impacts of these reductions on human health have been less well studied. Here, we use a global chemical transport model (MOZART-4) and a health impact function to quantify the surface air quality and human health benefits of controlling BC emissions. We simulate a base case and several emission control scenarios, where anthropogenic BC emissions are reduced by half globally, individually in each of eight world regions, and individually from the residential, industrial, and transportation sectors. We also simulate a global 50% reduction of both BC and organic carbon (OC) together, since they are co-emitted and both are likely to be impacted by actual control measures. Meteorology and biomass burning emissions are for the year 2002 with anthropogenic BC and OC emissions for 2000 from the IPCC AR5 inventory. Model performance is evaluated by comparing to global surface measurements of PM2.5 components. Avoided premature mortalities are calculated using the change in PM2.5 concentration between the base case and emission control scenarios and a concentration-response factor for chronic mortality from the epidemiology literature.
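
    The "health impact function" in this literature is usually the log-linear concentration-response form; a sketch with illustrative numbers (the beta below, ln(1.06)/10 per ug m-3, corresponds to the commonly cited 6% mortality-risk increase per 10 ug m-3 of chronic PM2.5 exposure; it is not necessarily the coefficient this study used):

```python
import math

def avoided_deaths(y0, beta, d_pm, pop):
    """Log-linear concentration-response health impact function.
    y0: baseline mortality rate, beta: CR coefficient per ug/m3,
    d_pm: PM2.5 reduction (ug/m3), pop: exposed adult population."""
    return y0 * (1.0 - math.exp(-beta * d_pm)) * pop

# Illustrative: 1 ug/m3 PM2.5 reduction over 1 million adults
print(round(avoided_deaths(0.01, 0.0058, 1.0, 1e6)))
```

    Summing this quantity over model grid cells, with the simulated base-minus-control PM2.5 change as d_pm, yields the avoided-mortality totals the abstract describes.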

  15. Sensitivity of a global coupled ocean-sea ice model to the parameterization of vertical mixing

    NASA Astrophysics Data System (ADS)

    Goosse, H.; Deleersnijder, E.; Fichefet, T.; England, M. H.

    1999-06-01

    Three numerical experiments have been carried out with a global coupled ice-ocean model to investigate its sensitivity to the treatment of vertical mixing in the upper ocean. In the first experiment, a widely used fixed profile of vertical diffusivity and viscosity is imposed, with large values in the upper 50 m to crudely represent wind-driven mixing. In the second experiment, the eddy coefficients are functions of the Richardson number, and, in the third case, a relatively sophisticated parameterization, based on the turbulence closure scheme of Mellor and Yamada version 2.5, is introduced. We monitor the way the different mixing schemes affect the simulated ocean ventilation, water mass properties, and sea ice distributions. CFC uptake is also diagnosed in the model experiments. The simulation of the mixed layer depth is improved in the experiment which includes the sophisticated turbulence closure scheme. This results in a good representation of the upper ocean thermohaline structure and in heat exchange with the atmosphere within the range of current estimates. However, the error in heat flux in the experiment with simple fixed vertical mixing coefficients can be as high as 50 W m-2 in zonal mean during summer. Using CFC tracers allows us to demonstrate that the ventilation of the deep ocean is not significantly influenced by the parameterization of vertical mixing in the upper ocean. The only exception is the Southern Ocean. There, the ventilation is too strong in all three experiments. However, modifications of the vertical diffusivity and, surprisingly, the vertical viscosity significantly affect the stability of the water column in this region through their influence on upper ocean salinity, resulting in a more realistic Southern Ocean circulation. The turbulence scheme also results in an improved simulation of Antarctic sea ice coverage. This is due to a better simulation of the mixed layer depth and thus of heat exchanges between ice and ocean. The
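
    Richardson-number-dependent eddy coefficients of the kind used in the second experiment are commonly written in the Pacanowski and Philander (1981) form; the coefficient values below are the commonly quoted ones and are assumed here, not taken from this model:

```python
import numpy as np

def pp81(Ri, nu0=0.01, nu_b=1e-4, kappa_b=1e-5, alpha=5.0, n=2):
    """Richardson-number-dependent vertical mixing (Pacanowski-Philander form):
    viscosity and diffusivity collapse as stratification (Ri) increases."""
    Ri = np.maximum(Ri, 0.0)
    nu = nu0 / (1.0 + alpha * Ri) ** n + nu_b        # vertical viscosity, m2/s
    kappa = nu / (1.0 + alpha * Ri) + kappa_b        # vertical diffusivity
    return nu, kappa

nu_shear, _ = pp81(0.0)   # strong shear: full mixing (~1e-2 m2/s)
nu_strat, _ = pp81(1.0)   # strong stratification: mixing nearly shut down
print(nu_shear > 10 * nu_strat)
```

    The contrast between the two regimes is what a fixed-profile scheme cannot capture, which is one reason the fixed-coefficient experiment shows the large summer heat-flux errors quoted above.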

  16. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria in a specific medium (i.e. Microtox(®)) or on low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time=10 min) while minimizing biomass interference. Dual wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
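
    EC50 values like those quoted are obtained by fitting a dose-response curve to the inhibition of the ferricyanide-reduction rate; a sketch with invented inhibition data (a Hill model, which is one common choice, not necessarily the fitting function used in the paper):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: fraction of the reduction rate lost at each toxicant dose
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])       # toxicant, mg/L
inhib = np.array([0.04, 0.12, 0.33, 0.62, 0.85, 0.96])  # relative inhibition

def hill(c, ec50, n):
    """Hill dose-response: inhibition rises from 0 to 1, half-maximal at ec50."""
    return c ** n / (ec50 ** n + c ** n)

(ec50, n), _ = curve_fit(hill, conc, inhib, p0=(1.0, 1.0))
print(round(float(ec50), 1), "mg/L")
```

    The same fit applied to rate-inhibition data from a 60 min assay is how the short- and long-assay EC50 values in the abstract are compared.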

  17. Recurrence quantification analysis of global stock markets

    NASA Astrophysics Data System (ADS)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
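
    The RQA measures involved (recurrence rate, and determinism as the fraction of recurrence points on diagonal lines) can be sketched for a scalar series; the signals below are toy stand-ins for stock-index returns, and embedding is omitted for brevity:

```python
import numpy as np

def rqa_measures(x, eps=0.1, lmin=2):
    """Recurrence rate and determinism of a scalar series (no embedding)."""
    R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
    rr = R.mean()
    n = len(x)
    diag_pts = 0
    for k in range(1, n):                       # scan off-diagonals only
        line = np.diagonal(R, k)
        edges = np.diff(np.concatenate(([0], line, [0])))
        runs = np.diff(np.flatnonzero(edges))[::2]   # lengths of 1-runs
        diag_pts += 2 * runs[runs >= lmin].sum()     # count both triangles
    det = diag_pts / max(R.sum() - n, 1)        # exclude the main diagonal
    return rr, det

t = np.linspace(0, 8 * np.pi, 400)
rr_sin, det_sin = rqa_measures(np.sin(t))
rr_rand, det_rand = rqa_measures(np.random.default_rng(0).uniform(-1, 1, 400))
print(det_sin > det_rand)   # deterministic signal: far higher determinism
```

    Running such measures in sliding windows over an index series is the procedure used above to detect the temporary drop in diagonal/vertical recurrence structure during crises.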

  18. Global/local finite element analysis of composite materials

    NASA Technical Reports Server (NTRS)

    Griffin, O. Hayden, Jr.; Vidussoni, M. A.

    1988-01-01

    The motivation for performing global/local finite element analysis in composite materials is described. An example of such an analysis of a composite plate with a central circular hole is presented. Deformed finite element grids and interlaminar normal stress distributions are presented to aid understanding of the plate response. The interlaminar normal stress distribution at the plate edge is shown to be essentially unaffected by the hole, although the transverse displacements of the edge differ slightly from those of an analysis of a similar plate with no hole.

  19. Design and analysis of numerical experiments. [applicable to fully nonlinear, global, equivalent-barotropic model

    NASA Technical Reports Server (NTRS)

    Bowman, Kenneth P.; Sacks, Jerome; Chang, Yue-Fang

    1993-01-01

    Methods for the design and analysis of numerical experiments that are especially useful and efficient in multidimensional parameter spaces are presented. The analysis method, which is similar to kriging in the spatial analysis literature, fits a statistical model to the output of the numerical model. The method is applied to a fully nonlinear, global, equivalent-barotropic dynamical model. The statistical model also provides estimates for the uncertainty of predicted numerical model output, which can provide guidance on where in the parameter space to conduct further experiments, if necessary. The method can provide significant improvements in the efficiency with which numerical sensitivity experiments are conducted.
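
    The kriging-style analysis can be sketched as a Gaussian-process emulator of a cheap stand-in model (the kernel length-scale, design, and the sine "model" are invented for illustration): fit the GP to a few model runs, predict elsewhere, and use the predictive uncertainty to decide where further runs are needed.

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential (RBF) correlation between 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

model = lambda p: np.sin(2 * np.pi * p)   # stand-in for an expensive model
X = np.linspace(0, 1, 8)                  # design points (the model runs)
y = model(X)

K = rbf(X, X) + 1e-8 * np.eye(len(X))     # jitter for numerical stability
Xs = np.linspace(0, 1, 101)
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)                               # emulator mean
var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T)) # predictive var
# Predictive sd is ~0 at design points and largest between them, which is
# exactly the "where to run the model next" guidance described above.
print(round(float(np.abs(mean - model(Xs)).max()), 3))
```

    With eight runs the emulator already reproduces the stand-in model closely; the same machinery scales to the multidimensional parameter spaces of the dynamical model in the paper.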

  20. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software based on desired type I error, power and effect size. Approaches on how to use the tables are also discussed. PMID:27891446
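
    Tables of this kind are generated from a standard precision-based formula (the Buderer-type formula sketched below; published variants differ in where they round, so entries may differ by a few subjects from any particular table):

```python
import math
from scipy.stats import norm

def n_for_sensitivity(sn, d, prev, alpha=0.05):
    """Minimum total sample size so a test's sensitivity sn is estimated
    to within +/- d, given disease prevalence prev (Buderer-type formula)."""
    z = norm.ppf(1 - alpha / 2)                     # 1.96 for alpha = 0.05
    n_diseased = z ** 2 * sn * (1 - sn) / d ** 2    # diseased subjects needed
    return math.ceil(n_diseased / prev)             # scale up by prevalence

# e.g. expected sensitivity 0.90, precision 0.05, prevalence 0.20
print(n_for_sensitivity(0.90, 0.05, 0.20))
```

    The specificity version is identical with (1 - prev) in place of prev, since specificity is estimated from the non-diseased subjects.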

  1. Automation of primal and sensitivity analysis of transient coupled problems

    NASA Astrophysics Data System (ADS)

    Korelc, Jože

    2009-10-01

    The paper describes a hybrid symbolic-numeric approach to automation of primal and sensitivity analysis of computational models formulated and solved by finite element method. The necessary apparatus for the automation of steady-state, steady-state coupled, transient and transient coupled problems is introduced as combination of a symbolic system, an automatic differentiation (AD) technique and an automatic code generation. For this purpose the paper extends the classical formulation of AD by additional operators necessary for a high abstract description of primal and sensitivity analysis of the typical computational models. An appropriate abstract description for the fully implicit primal and sensitivity analysis of hyperelastic and elasto-plastic problems and a symbolic input for the generation of necessary user subroutines for the two-dimensional, hyperelastic finite element are presented at the end.

  2. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard model (JCA) is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of the sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
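
    Iman's transform induces a target rank correlation on samples with arbitrary marginals; a closely related sketch using a Gaussian copula (the 0.7 target correlation and uniform marginals are invented, and this is a stand-in for Iman's rank-based construction, not a reproduction of it):

```python
import numpy as np
from scipy.stats import norm

# Gaussian-copula sampling: draw correlated Gaussians, map to correlated
# uniforms; any marginal can then be imposed via its inverse CDF.
C = np.array([[1.0, 0.7], [0.7, 1.0]])      # target correlation matrix
L = np.linalg.cholesky(C)
rng = np.random.default_rng(0)
z = rng.standard_normal((10000, 2)) @ L.T   # correlated Gaussian pairs
u = norm.cdf(z)                             # correlated uniform inputs
rho_u = np.corrcoef(u.T)[0, 1]
print(round(float(rho_u), 2))               # close to, but below, 0.7
```

    Feeding such correlated input designs through the JCA model instead of independent ones is what separates FASTC-style results from a standard FAST analysis.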

  3. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  4. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds some ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on some example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
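
    The ``Butterfly Effect'' obstacle is easy to reproduce on the Lorenz attractor (classic parameter values): two trajectories that differ by 1e-8 initially end up O(1) apart, which is why naive tangent or adjoint sensitivities of long-time averages blow up and shadowing-based reformulations are needed.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate the nominal and a minutely perturbed initial condition
a = solve_ivp(lorenz, (0, 25), [1.0, 1.0, 20.0], rtol=1e-9, atol=1e-9)
b = solve_ivp(lorenz, (0, 25), [1.0 + 1e-8, 1.0, 20.0], rtol=1e-9, atol=1e-9)

gap = float(np.abs(a.y[:, -1] - b.y[:, -1]).max())
print(gap > 1.0)   # the 1e-8 perturbation has grown to order one
```

    A shadow trajectory, by contrast, is chosen so that the perturbed and nominal orbits stay uniformly close, restoring a well-posed sensitivity of the time average.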

  5. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems whose multiple, often conflicting objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
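
    The Nash-Sutcliffe efficiency quoted above (0.75 calibration, 0.70 validation) is a standard goodness-of-fit score; its definition is short enough to state exactly (the five-point series is an invented example):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
    1.0 is a perfect fit; 0.0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nse(obs, obs))                                  # perfect fit -> 1.0
print(round(nse(obs, np.full(5, obs.mean())), 1))     # mean-only model -> 0.0
```

    Values around 0.7-0.75, as reported here, are conventionally read as good model performance for daily discharge.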

  6. From screening to quantitative sensitivity analysis. A unified approach

    NASA Astrophysics Data System (ADS)

    Campolongo, Francesca; Saltelli, Andrea; Cariboni, Jessica

    2011-04-01

    The present work is a sequel to a recent one published in this journal where the superiority of 'radial design' to compute the 'total sensitivity index' was ascertained. Both concepts belong to sensitivity analysis of model output. A radial design is one whereby, starting from a random point in the hyperspace of the input factors, one step in turn is taken for each factor. The procedure is iterated a number of times with a different starting random point so as to collect a sample of elementary shifts for each factor. The total sensitivity index is a powerful sensitivity measure which can be estimated based on such a sample. Given the similarity between the total sensitivity index and a screening test known as the method of the elementary effects (or method of Morris), we test the radial design on this method. Both methods are best practices: the total sensitivity index in the class of the quantitative measures and the elementary effects in that of the screening methods. We find that the radial design is indeed superior even for the computation of the elementary effects method. This opens the door to a sensitivity analysis strategy whereby the analyst can start with a small number of points (screening-wise) and then - depending on the results - possibly increase the number of points until a fully quantitative measure can be computed. Also of interest to practitioners is that a radial design is nothing else than an iterated 'One factor At a Time' (OAT) approach. OAT is a radial design of size one. While OAT is not a good practice, modelers in all domains keep using it for sensitivity analysis for reasons discussed elsewhere (Saltelli and Annoni, 2010) [23]. With the present approach modelers are offered a straightforward and economic upgrade of their OAT which maintains OAT's appeal of having just one factor moved at each step.
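
    The radial design described above is easy to sketch: from each random base point, step one factor at a time and record the elementary effect. The test function, step size, and number of base points below are invented for illustration (a fixed step is used rather than the quasi-random step sampling of the paper):

```python
import numpy as np

def radial_elementary_effects(f, k, r, delta=0.1, seed=0):
    """Morris-style elementary effects from a radial design:
    r random base points, one step per factor from each base point."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for i in range(r):
        base = rng.uniform(0.0, 1.0 - delta, size=k)
        f0 = f(base)                       # one nominal run per base point
        for j in range(k):
            stepped = base.copy()
            stepped[j] += delta            # move just factor j (the OAT step)
            ee[i, j] = (f(stepped) - f0) / delta
    return np.abs(ee).mean(axis=0)         # mu*, the screening measure

g = lambda x: x[0] + 5 * x[1] ** 2 + 0.01 * x[2]
mu_star = radial_elementary_effects(g, k=3, r=50)
print(mu_star.round(2))                    # x1 dominant, x2 negligible
```

    Each base point costs k + 1 model runs; keeping the same runs and increasing r is exactly the screening-to-quantitative upgrade path the abstract proposes.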

  7. Global carbon monoxide cycle: Modeling and data analysis

    NASA Astrophysics Data System (ADS)

    Arellano, Avelino F., Jr.

    The overarching goal of this dissertation is to develop robust, spatially and temporally resolved CO sources, using global chemical transport modeling, CO measurements from the Climate Monitoring and Diagnostic Laboratory (CMDL) and the Measurement of Pollution In The Troposphere (MOPITT) instrument, under the framework of Bayesian synthesis inversion. To rigorously quantify the CO sources, I conducted five sets of inverse analyses, with each set investigating specific methodological and scientific issues. The first two inverse analyses separately explored two different CO observations to estimate CO sources by region and sector. Under a range of scenarios relating to inverse methodology and data quality issues, top-down estimates using CMDL CO surface and MOPITT CO remote-sensed measurements show consistent results, particularly a significantly larger fossil fuel/biofuel (FFBF) emission in East Asia than in present bottom-up estimates. The robustness of this estimate is strongly supported by forward and inverse modeling studies in the region, particularly from the TRansport and Chemical Evolution over the Pacific (TRACE-P) campaign. The use of high-resolution measurements for the first time in CO inversion also draws attention to a methodological issue: the range of estimates across the scenarios is larger than the posterior uncertainties, suggesting that estimate uncertainties may be underestimated. My analyses highlight the utility of the top-down approach to provide additional constraints on present global estimates by also pointing to other discrepancies, including apparent underestimation of FFBF from Africa/Latin America and biomass burning (BIOM) sources in Africa, southeast Asia and north-Latin America, indicating inconsistencies in our current understanding of fuel use and land-use patterns in these regions. Inverse analysis using MOPITT is extended to determine the extent of MOPITT information and estimate monthly regional CO sources. A major finding, which is consistent with other

  8. On global energy scenario, dye-sensitized solar cells and the promise of nanotechnology.

    PubMed

    Reddy, K Govardhan; Deepak, T G; Anjusree, G S; Thomas, Sara; Vadukumpully, Sajini; Subramanian, K R V; Nair, Shantikumar V; Nair, A Sreekumaran

    2014-04-21

    One of the major problems humanity must face in the next 50 years is the energy crisis. The rising population, rapidly changing lifestyles, heavy industrialization and the changing landscape of cities have increased energy demands enormously. Present worldwide electricity consumption corresponds to 12 TW and is expected to reach 24 TW by 2050, leaving a challenging deficit of 12 TW. Fossil fuels cannot meet this increase in demand effectively, as they are non-renewable and limited; they also cause significant environmental hazards, such as global warming and the associated climatic issues. Hence, there is an urgent need to adopt renewable sources of energy, which are eco-friendly and inexhaustible. Of the various renewable sources available, such as wind, tidal, geothermal, biomass and solar, solar is the most dependable option. Solar energy is freely and abundantly available, and once installed, the maintenance cost is very low. It is eco-friendly, fitting safely into our society without disturbance. Producing electricity from the Sun, however, requires the installation of solar panels, which incurs a large initial cost and requires large areas of land. This is where nanotechnology comes into the picture, serving to raise efficiency to higher levels and thus bring down the overall cost of energy production. Emerging low-cost solar cell technologies, e.g. thin-film technologies and dye-sensitized solar cells (DSCs), also help to replace the use of silicon, which is expensive; nanotechnological approaches can again be applied in these solar cells to achieve higher efficiencies. This paper deals with the various available solar cells, identifying DSCs as the most appropriate, and treats in detail the nanotechnological approaches that help to improve their performance. Additionally, the

  9. Connectivity diagnostics in the Mediterranean obtained from Lagrangian Flow Networks; global patterns, sensitivity and robustness

    NASA Astrophysics Data System (ADS)

    Monroy, Pedro; Rossi, Vincent; Ser-Giacomi, Enrico; López, Cristóbal; Hernández-García, Emilio

    2017-04-01

    A Lagrangian Flow Network (LFN) is a modeling framework in which geographical sub-areas of the ocean are represented as nodes of a network, interconnected by links representing the transport of water, substances or propagules (eggs and larvae) by currents. Here we compute, for the surface of the whole Mediterranean basin, four connectivity metrics derived from the LFN that measure retention and exchange processes, thus providing a systematic characterization of propagule dispersal driven by the ocean circulation. We then assess the sensitivity and robustness of the results with respect to the most relevant parameters: the density of released particles, the node size (spatial scale of discretization), the Pelagic Larval Duration (PLD) and the modality of spawning. We find a threshold for the number of particles per node that guarantees reliable values for most of the metrics examined, independently of node size; for our setup, this threshold is 100 particles per node. We also find that the size of network nodes has a non-trivial influence on the spatial variability of both exchange and retention metrics. Although the spatio-temporal fluctuations of the circulation affect larval transport in a complex and unpredictable manner, our analyses show how specific biological parametrizations impact the robustness of connectivity diagnostics. Connectivity estimates for long PLDs are more robust against biological uncertainties (PLD and spawning date) than those for short PLDs. Furthermore, our model suggests that for mass-spawners that release propagules over short periods (≃ 2 to 10 days), daily release must be simulated to properly capture connectivity fluctuations. In contrast, average connectivity estimates for species that spawn repeatedly over longer durations (a few weeks to a few months) remain robust even using longer release periodicities (5 to 10 days). Our results give a global view of the surface connectivity of the Mediterranean Sea and have implications for the design of
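The retention and exchange metrics this record describes can be illustrated with a minimal sketch: particles are released per node, their start/end nodes define weighted network links, and retention is the fraction of particles that end in their source node. All names and the synthetic drift model below are illustrative assumptions, not the authors' setup.

```python
import random
from collections import Counter

def lfn_metrics(start_nodes, end_nodes):
    """Build a toy Lagrangian Flow Network from particle start/end nodes.

    Returns per-node retention (fraction of particles ending in their
    source node) and out-degree (number of distinct destination nodes).
    """
    links = Counter(zip(start_nodes, end_nodes))   # link weight = particle count
    released = Counter(start_nodes)                # particles released per node
    retention = {n: links[(n, n)] / released[n] for n in released}
    out_degree = {n: len({d for (s, d) in links if s == n}) for n in released}
    return retention, out_degree

# Toy release: 3 nodes, 100 particles per node, synthetic advection in which
# particles mostly stay in place but sometimes drift to the next node.
random.seed(0)
starts, ends = [], []
for node in range(3):
    for _ in range(100):
        starts.append(node)
        ends.append(node if random.random() < 0.7 else (node + 1) % 3)

retention, out_degree = lfn_metrics(starts, ends)
```

The particles-per-node threshold discussed in the abstract corresponds to checking how stable these metrics remain as the 100 particles per node are reduced.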

  10. Drought-Net: A global network to assess terrestrial ecosystem sensitivity to drought

    NASA Astrophysics Data System (ADS)

    Smith, Melinda; Sala, Osvaldo; Phillips, Richard

    2015-04-01

    All ecosystems will be impacted to some extent by climate change, with forecasts for more frequent and severe drought likely to have the greatest impact on terrestrial ecosystems. Terrestrial ecosystems are known to vary dramatically in their responses to drought. However, the factors that make some ecosystems respond more or less than others remain unknown, yet such understanding is critical for predicting drought impacts at regional and continental scales. To effectively forecast terrestrial ecosystem responses to drought, ecologists must assess the responses of a range of different ecosystems to drought, and then improve existing models by incorporating the factors that cause such variation in response. Traditional site-based research cannot provide this knowledge because experiments conducted at individual sites are often not directly comparable due to differences in the methodologies employed. Coordinated experimental networks, with identical protocols and comparable measurements, are ideally suited for comparative studies at regional to global scales. The US National Science Foundation-funded Drought-Net Research Coordination Network (www.drought-net.org) will advance understanding of the determinants of terrestrial ecosystem responses to drought by bringing together an international group of scientists to conduct two key activities over the next five years: 1) planning and coordinating new research using standardized measurements to leverage the value of existing drought experiments across the globe (Enhancing Existing Experiments, EEE), and 2) finalizing the design and facilitating the establishment of a new international network of coordinated drought experiments (the International Drought Experiment, IDE). The primary goals of these activities are to assess: (1) patterns of differential terrestrial ecosystem sensitivity to drought and (2) potential mechanisms underlying those patterns.

  11. Breastfeeding policy: a globally comparative analysis

    PubMed Central

    Raub, Amy; Earle, Alison

    2013-01-01

    Abstract Objective To explore the extent to which national policies guaranteeing breastfeeding breaks to working women may facilitate breastfeeding. Methods An analysis was conducted of the number of countries that guarantee breastfeeding breaks, the daily number of hours guaranteed, and the duration of guarantees. To obtain current, detailed information on national policies, original legislation as well as secondary sources on 182 of the 193 Member States of the United Nations were examined. Regression analyses were conducted to test the association between national policy and rates of exclusive breastfeeding while controlling for national income level, level of urbanization, female percentage of the labour force and female literacy rate. Findings Breastfeeding breaks with pay are guaranteed in 130 countries (71%) and unpaid breaks are guaranteed in seven (4%). No policy on breastfeeding breaks exists in 45 countries (25%). In multivariate models, the guarantee of paid breastfeeding breaks for at least 6 months was associated with an increase of 8.86 percentage points in the rate of exclusive breastfeeding (P < 0.05). Conclusion A greater percentage of women practise exclusive breastfeeding in countries where laws guarantee breastfeeding breaks at work. If these findings are confirmed in longitudinal studies, health outcomes could be improved by passing legislation on breastfeeding breaks in countries that do not yet ensure the right to breastfeed. PMID:24052676

  12. Annual flood sensitivities to El Niño-Southern Oscillation at the global scale

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; Eisner, S.; Flörke, M.; Dettinger, M. D.; Kummu, M.

    2014-01-01

    Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño-Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water - a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958-2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationship between ENSO and annual floods has been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.

  13. Annual flood sensitivities to El Niño-Southern Oscillation at the global scale

    USGS Publications Warehouse

    Ward, Philip J.; Eisner, S.; Flörke, M.; Dettinger, Michael D.; Kummu, M.

    2013-01-01

    Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño–Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water – a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958–2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationship between ENSO and annual floods has been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.
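The core statistical step in these two records, relating a series of annual peak discharges to an ENSO index, can be sketched with a rank (Spearman) correlation, a common choice for skewed flood data. The synthetic discharge record below is an illustrative assumption, not WaterGAP output.

```python
import random
from statistics import mean

def rank(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        r = (i + j) / 2 + 1          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = r
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Synthetic 43-year record (cf. 1958-2000): annual peak discharge that
# declines when the ENSO index is positive (an El Niño-suppressed regime).
random.seed(1)
enso = [random.gauss(0, 1) for _ in range(43)]
peaks = [1000 - 150 * e + random.gauss(0, 80) for e in enso]
rho = spearman(enso, peaks)
```

A significantly negative `rho` would mark a basin where floods intensify with La Niña, the more common pattern reported in the abstract; the non-stationarity finding corresponds to computing `rho` in moving windows.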

  14. Investigating the Role of Diabatic Heating in Global Atmospheric Circulation and Climate Sensitivity: An Energetics Approach

    NASA Astrophysics Data System (ADS)

    Romanski, J.; Rossow, W. B.

    2009-12-01

    The generation of zonal and eddy available potential energy (Gz and Ge) as envisioned by Lorenz (1955) is computed on a global, daily, synoptic-scale basis. Using global, mostly satellite-derived datasets for the diabatic heating components and the temperature enables us to obtain Gz and especially Ge with greater accuracy and at higher temporal and spatial resolution than previously possible. In particular, we are able to consider the contribution of each diabatic heating component separately and in combination. We use this information to determine how various processes contribute to the energy available for the general and eddy circulations. Contributions to the global mean daily mean Gz and Ge are computed at a horizontal resolution of 2.5 degrees for the lower troposphere (surface to 680 mb), middle troposphere (680-440 mb), upper troposphere (440-100 mb) and stratosphere (100 mb to TOA) for 1997 through 2000. Comparisons to earlier estimates of the generation terms by Lorenz (1967) and Peixoto and Oort (1992) are made. The seasonal and spatial variability of the total generation and of the individual contributions of heating by radiative flux convergence, latent heating and sensible heat flux from the surface are discussed. The generation of zonal and eddy potential energy (Gz and Ge) as envisioned by Lorenz (1955) is also computed for seven climate models from the World Climate Research Programme's (WCRP's) Coupled Model Intercomparison Project phase 3 (CMIP3) multi-model dataset (Meehl et al 2007). Gz and Ge are computed directly from the temperature and diabatic heating fields of the current climate and a doubled-CO2 climate. The seasonal and spatial variability of the total generation and of the individual contributions of heating by radiative flux convergence, latent heating and sensible heat flux from the surface of each model are compared to one another, and evaluated with respect to the same quantities computed from observations. In contrast to a recent

  15. SEDPHAT – a platform for global ITC analysis and global multi-method analysis of molecular interactions

    PubMed Central

    Zhao, Huaying; Piszczek, Grzegorz; Schuck, Peter

    2014-01-01

    Isothermal titration calorimetry experiments can provide significantly more detailed information about molecular interactions when combined in global analysis. For example, global analysis can improve the precision of binding affinity and enthalpy, and of possible linkage parameters, even for simple bimolecular interactions, and greatly facilitate the study of multi-site and multi-component systems with competition or cooperativity. A pre-requisite for global analysis is the departure from the traditional binding model, including an ‘n’-value describing unphysical, non-integral numbers of sites. Instead, concentration correction factors can be introduced to account for either errors in the concentration determination or for the presence of inactive fractions of material. SEDPHAT is a computer program that embeds these ideas and provides a graphical user interface for the seamless combination of biophysical experiments to be globally modeled with a large number of different binding models. It offers statistical tools for the rigorous determination of parameter errors, correlations, as well as advanced statistical functions for global ITC (gITC) and global multi-method analysis (GMMA). SEDPHAT will also take full advantage of error bars of individual titration data points determined with the unbiased integration software NITPIC. The present communication reviews principles and strategies of global analysis for ITC and its extension to GMMA in SEDPHAT. We will also introduce a new graphical tool for aiding experimental design by surveying the concentration space and generating simulated data sets, which can be subsequently statistically examined for their information content. This procedure can replace the ‘c’-value as an experimental design parameter, which ceases to be helpful for multi-site systems and in the context of gITC. PMID:25477226

  16. SEDPHAT--a platform for global ITC analysis and global multi-method analysis of molecular interactions.

    PubMed

    Zhao, Huaying; Piszczek, Grzegorz; Schuck, Peter

    2015-04-01

    Isothermal titration calorimetry experiments can provide significantly more detailed information about molecular interactions when combined in global analysis. For example, global analysis can improve the precision of binding affinity and enthalpy, and of possible linkage parameters, even for simple bimolecular interactions, and greatly facilitate the study of multi-site and multi-component systems with competition or cooperativity. A pre-requisite for global analysis is the departure from the traditional binding model, including an 'n'-value describing unphysical, non-integral numbers of sites. Instead, concentration correction factors can be introduced to account for either errors in the concentration determination or for the presence of inactive fractions of material. SEDPHAT is a computer program that embeds these ideas and provides a graphical user interface for the seamless combination of biophysical experiments to be globally modeled with a large number of different binding models. It offers statistical tools for the rigorous determination of parameter errors, correlations, as well as advanced statistical functions for global ITC (gITC) and global multi-method analysis (GMMA). SEDPHAT will also take full advantage of error bars of individual titration data points determined with the unbiased integration software NITPIC. The present communication reviews principles and strategies of global analysis for ITC and its extension to GMMA in SEDPHAT. We will also introduce a new graphical tool for aiding experimental design by surveying the concentration space and generating simulated data sets, which can be subsequently statistically examined for their information content. This procedure can replace the 'c'-value as an experimental design parameter, which ceases to be helpful for multi-site systems and in the context of gITC.
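The central idea in these two SEDPHAT records, fitting several experiments jointly with shared physical parameters instead of fitting each in isolation, can be sketched for the simplest case: two noisy 1:1 binding isotherms that share one dissociation constant, fitted by a global grid search. The model, data, and fitting procedure are illustrative assumptions, not SEDPHAT's algorithms.

```python
import random

def frac_bound(x, kd):
    """Fraction of sites occupied for a 1:1 binding isotherm."""
    return x / (kd + x)

def sse(data, kd):
    """Sum of squared residuals of one dataset against the model."""
    return sum((f - frac_bound(x, kd)) ** 2 for x, f in data)

def fit_kd(datasets, kds):
    """Global fit: one Kd minimizing the misfit summed over all datasets."""
    return min(kds, key=lambda kd: sum(sse(d, kd) for d in datasets))

random.seed(2)
true_kd = 5.0
xs = [0.5, 1, 2, 4, 8, 16, 32]   # ligand concentrations (arbitrary units)

# Two experiments sharing the same Kd but with independent noise,
# standing in for two titrations combined in a global analysis.
exp1 = [(x, frac_bound(x, true_kd) + random.gauss(0, 0.02)) for x in xs]
exp2 = [(x, frac_bound(x, true_kd) + random.gauss(0, 0.02)) for x in xs]

grid = [k / 10 for k in range(10, 101)]   # candidate Kd from 1.0 to 10.0
kd_global = fit_kd([exp1, exp2], grid)
```

Pooling both experiments doubles the data constraining the shared `Kd`, which is the precision gain the abstract attributes to global analysis; SEDPHAT generalizes this to many models, linkage parameters, and multi-method (GMMA) datasets.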

  17. Sensitivity analysis of dynamic biological systems with time-delays

    PubMed Central

    2010-01-01

    Background Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solution of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either analytically or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step-size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with little user intervention. Conclusions Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex

  18. Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures

    DTIC Science & Technology

    2005-03-01

    Sensitivity with respect to the nominal alloy composition at the center of the weld surface (Point 6 of Figure 7). [Remainder of page is unrecoverable figure residue.] Final Report: Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures. Office of Naval Research, 800 North Quincy Street, Arlington. Report date 3/31/05. Grant number N000

  19. Preliminary sensitivity analysis of the Devonian shale in Ohio

    SciTech Connect

    Covatch, G.L.

    1985-06-01

    A preliminary sensitivity analysis of gas reserves in Devonian shale in Ohio was made on the six partitioned areas, based on a payout time of 3 years. Data sets were obtained from Lewin and Associates for the six partitioned areas in Ohio and used as a base case for the METC sensitivity analysis. A total of five different well stimulation techniques were evaluated in both the METC and Lewin studies. The five techniques evaluated were borehole shooting, a small radial stimulation, a large radial stimulation, a small vertical fracture, and a large vertical fracture.

  20. Stable locality sensitive discriminant analysis for image recognition.

    PubMed

    Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang

    2014-06-01

    Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in an unstable representation of the intra-class geometrical structure and suboptimal performance of the algorithm. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it in the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.

  1. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  2. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  3. Parameter sensitivity analysis for different complexity land surface models using multicriteria methods

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Hogue, T. S.; Sorooshian, S.; Gupta, H. V.; Shuttleworth, W. J.

    2006-10-01

    A multicriteria algorithm, the MultiObjective Generalized Sensitivity Analysis (MOGSA), was used to investigate the parameter sensitivity of five different land surface models with increasing levels of complexity in the physical representation of the vegetation (BUCKET, CHASM, BATS 1, Noah, and BATS 2) at five different sites representing crop land/pasture, grassland, rain forest, cropland, and semidesert areas. The methodology allows for the inclusion of parameter interaction and does not require assumptions of independence between parameters, while at the same time allowing for the ranking of several single-criterion and a global multicriteria sensitivity indices. The analysis required on the order of 50 thousand model runs. The results confirm that parameters with similar "physical meaning" across different model structures behave in different ways depending on the model and the locations. It is also shown that after a certain level an increase in model structure complexity does not necessarily lead to better parameter identifiability, i.e., higher sensitivity, and that a certain level of overparameterization is observed. For the case of the BATS 1 and BATS 2 models, with essentially the same model structure but a more sophisticated vegetation model, paradoxically, the effect on parameter sensitivity is mainly reflected in the sensitivity of the soil-related parameters.

  4. Confidence and sensitivity study of the OAFlux multisensor synthesis of the global ocean surface vector wind from 1987 onward

    NASA Astrophysics Data System (ADS)

    Yu, Lisan; Jin, Xiangze

    2014-10-01

    This study presented an uncertainty assessment of the high-resolution global analysis of daily-mean ocean-surface vector winds (1987 onward) by the Objectively Analyzed air-sea Fluxes (OAFlux) project. The time series was synthesized from multiple satellite sensors using a variational approach to find a best fit to input data in a weighted least-squares cost function. The variational framework requires the a priori specification of the weights, or equivalently, the error covariances of the input data, which are seldom known. Two key issues were investigated. The first was the specification of the weights for the OAFlux synthesis. This was achieved by designing a set of weight-varying experiments and applying the criterion that the chosen weights should make the best fit of the cost function optimal with regard to both the input satellite observations and independent wind time series measurements at 126 buoy locations. The weights thus determined represent an approximation to the error covariances, which inevitably contain a degree of uncertainty. Hence, the second issue addressed the sensitivity of the OAFlux synthesis to uncertainty in the weight assignments. Weight perturbation experiments were conducted and ensemble statistics were used to estimate the sensitivity. The study showed that the leading sources of uncertainty for the weight selection are high winds (>15 m s-1) and heavy rain, which are the conditions that cause divergence in wind retrievals from different sensors. Future technical advances in wind retrieval algorithms would be key to further improvement of the multisensor synthesis in events of severe storms.

  5. A revised view on the sensitivity of global freshwater availability to changes in precipitation, potential evaporation, and other factors

    NASA Astrophysics Data System (ADS)

    Berghuijs, Wouter; Woods, Ross; van Emmerik, Tim; Larsen, Josh

    2017-04-01

    Precipitation (P) and potential evaporation (Ep) are commonly studied drivers of changing freshwater availability, as aridity (Ep/P) explains 90% of the spatial differences in mean runoff across the globe. However, it is unclear whether changes in aridity over time are also the most important cause of temporal changes in mean runoff, and how this varies across regions. Here, we resolve shortcomings of previous Budyko-based global assessments of the relative role of aridity in changes in water availability. We argue that previous assessments do not properly account for precipitation effects; to resolve this issue, the effects of changes in Ep and P need to be considered separately. We present a new global assessment of the elasticity of runoff to changes in precipitation, potential evaporation, and other factors. The global pattern suggests that for 83% of the land surface, runoff is most sensitive to precipitation changes, while other factors dominate for the remaining 17%. Runoff elasticity to changes in potential evaporation is always lower than elasticity to precipitation, and in many arid regions this difference can reach an order of magnitude. Although surface water resources in dryland regions are highly sensitive to precipitation changes, their sensitivity to changes in other factors (e.g. changing climatic variability, CO2-vegetation feedbacks and anthropogenic modifications to the landscape) is often far higher. Nonetheless, at the global scale we find that precipitation changes have the greatest impact on water availability, which contrasts markedly with recent assessments.
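The runoff elasticities this record compares can be sketched in the Budyko framework the abstract invokes. The sketch below assumes one specific Budyko curve (the Turc-Pike form), which is my choice for illustration, not necessarily the authors', and evaluates the elasticities numerically for a humid and an arid basin.

```python
def runoff(p, ep):
    """Mean annual runoff R = P - E from the Turc-Pike Budyko curve,
    where E = P / (1 + (P/Ep)^2)^(1/2)."""
    evap = p / (1 + (p / ep) ** 2) ** 0.5
    return p - evap

def elasticity(p, ep, which, h=1e-4):
    """Elasticity of runoff to P or Ep: (dR/dX) * (X/R), central difference."""
    if which == "P":
        dr = (runoff(p * (1 + h), ep) - runoff(p * (1 - h), ep)) / (2 * h * p)
        return dr * p / runoff(p, ep)
    dr = (runoff(p, ep * (1 + h)) - runoff(p, ep * (1 - h))) / (2 * h * ep)
    return dr * ep / runoff(p, ep)

# Humid basin (Ep/P = 0.5) vs arid basin (Ep/P = 2.0), P in mm/yr
e_p_humid = elasticity(1000, 500, "P")
e_ep_humid = elasticity(1000, 500, "Ep")
e_p_arid = elasticity(1000, 2000, "P")
e_ep_arid = elasticity(1000, 2000, "Ep")
```

Because this runoff function is homogeneous of degree one in (P, Ep), the two elasticities sum to one, so the Ep elasticity is negative and always smaller in magnitude than the P elasticity, consistent with the abstract's finding; the gap widens sharply in arid basins.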

  6. A global optimization approach to multi-polarity sentiment analysis.

    PubMed

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. 

  7. Global multi-level analysis of the 'scientific food web'.

    PubMed

    Mazloumian, Amin; Helbing, Dirk; Lozano, Sergi; Light, Robert P; Börner, Katy

    2013-01-01

    We introduce a network-based index analyzing excess scientific production and consumption to perform a comprehensive global analysis of scholarly knowledge production and diffusion at the level of continents, countries, and cities. Compared to measures of scientific production and consumption such as the number of publications or citation rates, our network-based citation analysis offers a more differentiated picture of the 'ecosystem of science'. Quantifying knowledge flows between 2000 and 2009, we identify global sources and sinks of knowledge production. Our knowledge flow index reveals where ideas are born and consumed, thereby defining a global 'scientific food web'. While Asia is quickly catching up in terms of publications and citation rates, we find that its dependence on knowledge consumption has further increased.
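
    The paper's index is more differentiated than raw counts, but its basic bookkeeping — a region is a net source when its papers are cited elsewhere more than it cites others — can be illustrated with made-up region-level citation counts (the numbers below are hypothetical, not the paper's data):

```python
# Hypothetical citation counts: cites[a][b] = papers from region a citing region b.
cites = {
    "Asia":      {"Asia": 0,  "Europe": 120, "N.America": 150},
    "Europe":    {"Asia": 40, "Europe": 0,   "N.America": 110},
    "N.America": {"Asia": 30, "Europe": 90,  "N.America": 0},
}

def net_flow(cites):
    """Net knowledge flow per region: citations received (its knowledge consumed
    elsewhere) minus citations given (knowledge it consumes).
    Positive = net source of knowledge; negative = net sink."""
    regions = list(cites)
    received = {r: sum(cites[a][r] for a in regions if a != r) for r in regions}
    given = {r: sum(cites[r][b] for b in regions if b != r) for r in regions}
    return {r: received[r] - given[r] for r in regions}

flows = net_flow(cites)
```

    Note that the net flows always sum to zero: every citation one region gives is a citation another region receives.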

  8. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurement uncertainties is shown. Assessing the uncertainties, sensitivities, and correlations of the parameters to be measured at an early stage guides the user in optimizing the OCD measurement strategy. Real examples are discussed, revealing common pitfalls such as hidden correlations, and simulation results are compared with real measurements. Special emphasis is given to two different cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), and 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis results, the right data set and measurement mode (NI-OCD, SE-OCD, or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  9. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
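
    The paper's method is adaptive; the sketch below shows only the non-adaptive core of importance sampling for a small failure probability, using a fixed sampling density centred near the failure region of a toy limit state (the limit state, shift, and sample size are illustrative assumptions, not the paper's scheme):

```python
import math
import random

def failure_prob_is(g, mu_shift, n=200_000, seed=1):
    """Importance-sampling estimate of P(g(X) < 0) for standard normal X.
    Samples are drawn from N(mu_shift, 1), centred near the failure region,
    and reweighted by the likelihood ratio phi(x) / phi(x - mu_shift)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(mu_shift, 1.0)
        if g(x) < 0:
            # Likelihood ratio of the nominal density to the sampling density
            total += math.exp(0.5 * mu_shift**2 - mu_shift * x)
    return total / n

# Limit state g(x) = 3 - x fails for x > 3; exact P = 1 - Phi(3) ~ 1.35e-3.
p_hat = failure_prob_is(lambda x: 3.0 - x, mu_shift=3.0)
```

    Crude Monte Carlo would hit the failure region only about once per 740 samples here; shifting the sampling density makes nearly half the samples land there, which is the variance reduction the adaptive scheme pushes further.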

  10. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
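
    The paper's model and parameter values are not reproduced here, but the standard tool for such rankings, the normalized forward sensitivity index S_p = (p / R0) * (dR0/dp), can be sketched with a finite difference. The reproduction number and parameter values below are generic SIR-style stand-ins, not the paper's five-compartment model:

```python
def sensitivity_index(R0, params, name, h=1e-6):
    """Normalized forward sensitivity index S_p = (p / R0) * dR0/dp,
    approximated with a central finite difference in the parameter `name`."""
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    down = dict(params, **{name: p * (1 - h)})
    dR0_dp = (R0(up) - R0(down)) / (2 * p * h)
    return (p / R0(params)) * dR0_dp

# Illustrative reproduction number (not the paper's): recruitment Lambda,
# contact rate beta, natural death rate mu, recovery rate gamma.
def R0(p):
    return p["beta"] * p["Lambda"] / (p["mu"] * (p["mu"] + p["gamma"]))

params = {"beta": 0.3, "Lambda": 0.05, "mu": 0.02, "gamma": 0.1}
S = {k: sensitivity_index(R0, params, k) for k in params}
```

    A value of S_p = 1 means a 1% increase in p raises R0 by 1%; negative indices (here gamma and mu) identify parameters whose increase reduces transmission, which is how a sensitivity analysis points at control targets.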

  12. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based; that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of the data space. It extends SA to a deeper level that lies in the topology of data. PMID:26368929

  13. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
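
    A minimal sketch of the parsing idea: scan any input file for fields annotated in the natural `value +/- tolerance` form, independent of the file's structure. The regex and the sample field names are illustrative assumptions, not LAURA/HARA/FIAT specifics:

```python
import re

# Matches a numeric field annotated with a tolerance, e.g. "5.25 +/- 0.01",
# regardless of the surrounding input-file format.
TOLERANCE = re.compile(
    r"(?P<value>[-+]?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?)"
    r"\s*\+/-\s*"
    r"(?P<tol>\d+(?:\.\d+)?(?:[eE][-+]?\d+)?)"
)

def extract_tolerances(text):
    """Return (value, tolerance) pairs for every annotated field in an input file."""
    return [(float(m["value"]), float(m["tol"])) for m in TOLERANCE.finditer(text)]

sample = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89 +/- 0.05\n"
pairs = extract_tolerances(sample)
```

    Because the tolerance annotation rides along inside existing fields, the same scanner works on any code's input deck; a Monte Carlo driver would then perturb each annotated value within its tolerance and rerun the simulation.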

  14. Global land cover mapping: a review and uncertainty analysis

    USGS Publications Warehouse

    Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu

    2014-01-01

    Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times, including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered by considerable uncertainty and inconsistency. A thorough review of these global land cover projects, including an evaluation of the sources of error and uncertainty, is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed-type classes, for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided several important lessons for future global mapping projects, including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.

  15. Ecological network analysis on global virtual water trade.

    PubMed

    Yang, Zhifeng; Mao, Xufeng; Zhao, Xu; Chen, Bin

    2012-02-07

    Global water interdependencies are likely to increase with growing virtual water trade. To address the indirect effects of water trade through global economic circulation, we use ecological network analysis (ENA) to shed light on the complicated system interactions. A global model of virtual water flows in agriculture and livestock production trade in 1995-1999 is also built as the basis for the network analysis. Control analysis is used to identify the quantitative control or dependency relations. The utility analysis provides further indicators for describing the mutual relationship between two regions/countries by imitating the interactions in an ecosystem, and distinguishes the beneficiaries and the contributors of the virtual water trade system. Results show that control and utility relations can depict the mutual relations in the trade system well, and that directly observable relations differ from integral ones once indirect interactions are considered. This paper offers a new way to depict the interrelations between trade components and can serve as a meaningful start as we continue to use ENA to provide more valuable implications for freshwater study on a global scale.
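
    In one common ENA formulation (following Fath and Patten), direct utilities d_ij = (f_ij - f_ji) / T_i are integrated over all indirect pathways as U = (I - D)^-1. A two-region toy sketch with hypothetical virtual water flows (not the paper's data, and only the 2x2 case for which the inverse has a closed form):

```python
def utility_matrices(F, T):
    """Direct utility D with d_ij = (f_ij - f_ji) / T_i, and integral utility
    U = (I - D)^-1, computed for the 2x2 case via the closed-form inverse."""
    n = 2
    D = [[(F[i][j] - F[j][i]) / T[i] for j in range(n)] for i in range(n)]
    # Invert (I - D) directly: for [[a, b], [c, d]], inverse is [[d, -b], [-c, a]] / det.
    a, b = 1 - D[0][0], -D[0][1]
    c, d = -D[1][0], 1 - D[1][1]
    det = a * d - b * c
    U = [[d / det, -b / det], [-c / det, a / det]]
    return D, U

# Hypothetical virtual water flows (km^3/yr): F[i][j] = flow into region i from region j.
F = [[0.0, 30.0],
     [10.0, 0.0]]
T = [100.0, 80.0]  # total throughflow of each region
D, U = utility_matrices(F, T)
```

    The signs of the integral utilities identify roles: here U[0][1] > 0 and U[1][0] < 0, marking region 0 as the net beneficiary of the trade relation and region 1 as the net contributor, which is how the paper's utility analysis distinguishes the two.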

  16. Global Analysis of Horizontal Gene Transfer in Fusarium verticillioides

    USDA-ARS?s Scientific Manuscript database

    The co-occurrence of microbes within plants and other specialized niches may facilitate horizontal gene transfer (HGT) affecting host-pathogen interactions. We recently identified fungal-to-fungal HGTs involving metabolic gene clusters. For a global analysis of HGTs in the maize pathogen Fusarium ve...

  17. Globalization and International Student Mobility: A Network Analysis

    ERIC Educational Resources Information Center

    Shields, Robin

    2013-01-01

    This article analyzes changes to the network of international student mobility in higher education over a 10-year period (1999-2008). International student flows have increased rapidly, exceeding 3 million in 2009, and extensive data on mobility provide unique insight into global educational processes. The analysis is informed by three theoretical…

  18. Global/local finite element analysis for textile composites

    NASA Technical Reports Server (NTRS)

    Woo, Kyeongsik; Whitcomb, John

    1993-01-01

    Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy.

  19. Global/local finite element analysis for textile composites

    SciTech Connect

    Woo, K.; Whitcomb, J.

    1993-01-01

    Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy. 9 refs.
