Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how variations in the inputs of a mathematical model influence the variability of its output. In this paper, we review the principles of global and local sensitivity analyses of a complex black-box system. A simulated application case is given at the end of the paper to compare the two approaches.…
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. Little guidance is available for these two steps in environmental modelling, however. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
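The bootstrap idea behind the convergence criteria above can be sketched in a few lines. This is a hypothetical illustration, not the study's code: a crude squared-correlation measure stands in for a real sensitivity index, and ranking convergence is summarized by how often the modal ranking recurs across bootstrap resamples.

```python
import numpy as np

def bootstrap_ranking_convergence(x, y, n_boot=200, seed=0):
    """Resample an input/output sample with replacement, recompute a crude
    sensitivity measure each time, and report the most frequent parameter
    ranking together with the fraction of resamples that agree with it.

    x : (n_samples, n_params) input sample; y : (n_samples,) model output.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    rankings = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # bootstrap resample with replacement
        # stand-in sensitivity measure: squared correlation with the output
        s = np.array([np.corrcoef(x[idx, j], y[idx])[0, 1] ** 2
                      for j in range(x.shape[1])])
        rankings.append(tuple(np.argsort(-s)))  # most to least influential
    modal = max(set(rankings), key=rankings.count)
    agreement = rankings.count(modal) / n_boot
    return modal, agreement
```

A high agreement fraction suggests the ranking has converged at the current sample size; a low one suggests more model runs are needed before screening.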
Towards More Efficient and Effective Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2014-05-01
Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of one or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response, and may lead to different, sometimes even conflicting, conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
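For readers unfamiliar with the Sobol-type analysis-of-variance approach mentioned above, a minimal Monte Carlo implementation is sketched below. It uses the standard Saltelli (first-order) and Jansen (total-order) estimators on the unit hypercube; it is not the new method the presentation proposes.

```python
import numpy as np

def sobol_indices(f, d, n=2**14, seed=0):
    """Monte Carlo estimate of first-order (S_i) and total-order (ST_i)
    Sobol' indices of f on [0,1]^d, via two independent sample matrices A, B
    and the 'pick-and-freeze' matrices AB_i (column i of A swapped with B)."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # A with column i taken from B
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var        # Saltelli estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # Jansen estimator
    return S, ST
```

On the classic Ishigami test function (a = 7, b = 0.1) the analytical values are S1 ≈ 0.314, S2 ≈ 0.442, S3 = 0, with ST3 ≈ 0.244 arising purely from the x1–x3 interaction, which this estimator recovers to within Monte Carlo error.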
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
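The sample entropy statistic central to the work above has a compact definition that can be sketched directly. This is a generic SampEn implementation, not DASim's; the tolerance r is typically taken as about 0.2 times the series' standard deviation.

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series: -ln(A/B), where B counts
    pairs of length-m templates within Chebyshev distance r, and A counts the
    same template pairs extended to length m+1. Lower values indicate a more
    regular (more predictable) series."""
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count(mm):
        # use n - m templates for both lengths so the counts are comparable
        templ = np.array([x[i:i + mm] for i in range(n - m)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)
        return c

    B, A = count(m), count(m + 1)
    return -np.log(A / B)
```

A regular signal (e.g. a sampled sine wave) yields a SampEn near zero, while white noise yields a much larger value, which is the property the tuning procedure exploits.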
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming a pressing need but, as of today, no assessments of data sensitivity to model parameters performed at the global scale are known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc-minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are applied to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
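The sensitivity index described above, the ratio of the relative output change to the relative input change, is simple to state in code. This is a generic one-at-a-time sketch; the model and parameter names below are placeholders, not those of the VWC study.

```python
def relative_sensitivity(model, params, name, delta=0.01):
    """One-at-a-time sensitivity index: the relative change of the output
    divided by the relative change of one input, S = (dOut/Out) / (dP/P),
    for a small perturbation delta (here 1%) around the reference value."""
    base = model(params)
    perturbed = dict(params, **{name: params[name] * (1 + delta)})
    return ((model(perturbed) - base) / base) / delta
```

For a ratio model such as VWC = ET / yield, this index is +1 for the numerator and approximately -1 for the denominator, matching the direct/inverse distinction made in the abstract.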
Global sensitivity analysis of the radiative transfer model
NASA Astrophysics Data System (ADS)
Neelam, Maheshwari; Mohanty, Binayak P.
2015-04-01
With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions: Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more non-linearly in SMEX02 and linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water- and energy-rich environments.
Global sensitivity analysis of the Indian monsoon during the Pleistocene
NASA Astrophysics Data System (ADS)
Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.
2015-01-01
The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004), implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of all four variables: continental temperature in phase with June to July insolation (with high glaciation favouring a late-phase response), sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation
Global sensitivity analysis of Indian Monsoon during the Pleistocene
NASA Astrophysics Data System (ADS)
Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.
2014-04-01
The sensitivity of the Indian Monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004), implemented following a three-step strategy: (1) develop an experiment plan, designed to efficiently sample a 5-dimensional input space spanning Pleistocene astronomical configurations (3 parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) develop, calibrate and validate an emulator of HadCM3, in order to estimate the response of the Indian Monsoon over the full input space spanned by the experiment design; and (3) estimate and interpret sensitivity diagnostics, including sensitivity measures, in order to synthesize the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. Specifically, we focus on four variables: summer (JJAS) temperature and precipitation over North India, and JJAS sea-surface temperature and mixed-layer depth over the north-western side of the Indian Ocean. It is shown that precession controls the response of all four variables: continental temperature in phase with June to July insolation (with high glaciation favouring a late-phase response), sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on
Simulation of the global contrail radiative forcing: A sensitivity analysis
NASA Astrophysics Data System (ADS)
Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.
2012-12-01
The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The up-to-date Community Atmosphere Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.
Robust global sensitivity analysis of a river management model
NASA Astrophysics Data System (ADS)
Peeters, L. J. M.; Podger, G. M.; Smith, T.; Pickett, T.; Bark, R.; Cuddy, S. M.
2014-03-01
The simulation of routing and distribution of water through a regulated river system with a river management model quickly results in complex and non-linear model behaviour. A robust sensitivity analysis increases the transparency of the model and provides both the modeller and the system manager with better understanding and insight into how the model simulates reality and management operations. In this study, a robust, density-based sensitivity analysis, developed by Plischke et al. (2013), is applied to an eWater Source river management model. The sensitivity analysis is extended to account not only for main but also for interaction effects, and is able to identify major linear effects as well as subtle minor and non-linear effects. The case study is an idealised river management model representing typical conditions of the Southern Murray-Darling Basin in Australia, for which the sensitivity of a variety of model outcomes to variations in the driving forces (inflow to the system, rainfall and potential evapotranspiration) is examined. The model outcomes are most sensitive to the inflow to the system, but the sensitivity analysis also identified minor effects of potential evapotranspiration as well as non-linear interaction effects between inflow and potential evapotranspiration.
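The density-based measure referenced above can be illustrated with a given-data sketch in the spirit of Plischke et al. (2013): partition an input into equal-count bins and compare each conditional output distribution with the unconditional one. This is a simplified histogram version for intuition only; the bin counts are arbitrary choices, not values from the study.

```python
import numpy as np

def delta_index(xi, y, n_bins=20, n_ybins=30):
    """Given-data estimate of a density-based (moment-independent) sensitivity
    measure: the probability-weighted average of half the L1 distance between
    each conditional output histogram p(y | xi in bin k) and the unconditional
    output histogram p(y). Near 0 for an uninfluential input."""
    xi, y = np.asarray(xi, float), np.asarray(y, float)
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    yedges = np.quantile(y, np.linspace(0, 1, n_ybins + 1))
    yedges[0] -= 1e-9
    yedges[-1] += 1e-9                      # make end bins inclusive
    p_y = np.histogram(y, bins=yedges)[0] / len(y)
    idx = np.clip(np.searchsorted(edges, xi, side='right') - 1, 0, n_bins - 1)
    d = 0.0
    for k in range(n_bins):
        mask = idx == k
        if not mask.any():
            continue
        p_c = np.histogram(y[mask], bins=yedges)[0] / mask.sum()
        d += mask.mean() * 0.5 * np.abs(p_c - p_y).sum()
    return d
```

Because it compares whole distributions rather than variances, this kind of measure picks up the subtle non-linear effects the abstract mentions, at the cost of a small positive bias for finite samples.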
Economic impact analysis for global warming: Sensitivity analysis for cost and benefit estimates
Ierland, E.C. van; Derksen, L.
1994-12-31
Proper policies for the prevention or mitigation of the effects of global warming require profound analysis of the costs and benefits of alternative policy strategies. Given the uncertainty about the scientific aspects of the process of global warming, in this paper a sensitivity analysis for the impact of various estimates of costs and benefits of greenhouse gas reduction strategies is carried out to analyze the potential social and economic impacts of climate change.
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are too computationally demanding to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
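The variogram concept behind VARS can be illustrated with a minimal directional variogram estimator. This sketches the underlying idea only, not the VARS algorithm itself (which uses structured star-based sampling and integrates the variogram across scales).

```python
import numpy as np

def directional_variogram(f, d, i, lags, n=2000, seed=0):
    """Monte Carlo estimate of the directional variogram of f on [0,1]^d
    along coordinate i: gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2].
    A variogram that rises steeply with the lag h indicates strong
    sensitivity to parameter i at that scale."""
    rng = np.random.default_rng(seed)
    out = []
    for h in lags:
        x = rng.random((n, d)) * (1 - h)   # keep both x and x + h*e_i in [0,1]
        xh = x.copy()
        xh[:, i] += h
        out.append(0.5 * np.mean((f(xh) - f(x)) ** 2))
    return np.array(out)
```

For a linear response 10*x0 + x1, the variogram along x0 at lag h is exactly 0.5*(10h)^2, one hundred times the value along x1, which is the scale-dependent ordering of importance that VARS generalizes.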
Global sensitivity analysis of analytical vibroacoustic transmission models
NASA Astrophysics Data System (ADS)
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2016-04-01
Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media, but at early stages of the design such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations: the method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability on their values, and offers a way to synthesise the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends are found to agree with physical expectations, so design rules can be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
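The core mechanics of FAST, driving each input along a periodic search curve and reading sensitivities off the Fourier spectrum of the output, can be sketched as follows. This is a heavily simplified per-parameter version for intuition; real FAST/eFAST implementations use carefully designed interference-free frequency sets and resampled curves.

```python
import numpy as np

def fast_first_order(f, d, i, omega=64, n=10001, seed=0):
    """Simplified FAST estimate of the first-order index of input i on
    [0,1]^d: drive input i around the search curve at a high frequency omega
    and the remaining inputs at low frequencies, then attribute the spectral
    power at omega and its first few harmonics to input i."""
    rng = np.random.default_rng(seed)
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    freqs = np.empty(d)
    freqs[np.arange(d) != i] = 1 + np.arange(d - 1) % max(1, omega // 8)
    freqs[i] = omega
    phase = 2 * np.pi * rng.random(d)
    # search curve: each input is a triangle wave in s, uniform on [0,1]
    x = 0.5 + np.arcsin(np.sin(freqs[None, :] * s[:, None] + phase[None, :])) / np.pi
    y = f(x)
    coef = np.fft.rfft(y - y.mean()) / n
    power = 2 * np.abs(coef[1:]) ** 2          # power at frequencies 1, 2, 3, ...
    harmonics = [omega * m - 1 for m in range(1, 5) if omega * m - 1 < len(power)]
    return power[harmonics].sum() / power.sum()
```

For the additive function 4*x0 + x1 + x2 the true first-order indices are 16/18 and 1/18, and the spectral estimate lands close to both, illustrating how a single model sweep along the curve yields a variance-based index.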
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. PMID:25459861
Global in Time Analysis and Sensitivity Analysis for the Reduced NS-α Model of Incompressible Flow
NASA Astrophysics Data System (ADS)
Rebholz, Leo; Zerfas, Camille; Zhao, Kun
2016-09-01
We provide a detailed global-in-time analysis, together with a sensitivity analysis and testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global-in-time case by proving it is globally well-posed, and also prove some new results for its long-time treatment of energy. We also derive a PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.
NASA Astrophysics Data System (ADS)
Dai, Heng; Ye, Ming
2015-09-01
Sensitivity analysis is a vital tool in hydrological modeling to identify influential parameters for inverse modeling and uncertainty analysis, and variance-based global sensitivity analysis has gained popularity. However, the conventional global sensitivity indices are defined with consideration of only parametric uncertainty. Based on a hierarchical structure of parameter, model, and scenario uncertainties, and on recently developed techniques of model- and scenario-averaging, this study derives new global sensitivity indices for multiple models and multiple scenarios. To reduce the computational cost of variance-based global sensitivity analysis, a sparse grid collocation method is used to evaluate the mean and variance terms involved. In a simple synthetic case of groundwater flow and reactive transport, it is demonstrated that the global sensitivity indices vary substantially between the four models and three scenarios. Not considering the model and scenario uncertainties might result in biased identification of important model parameters. This problem is resolved by using the new indices defined for multiple models and/or multiple scenarios; this is particularly true when the sensitivity indices and model/scenario probabilities vary substantially. The sparse grid collocation method dramatically reduces the computational cost in comparison with the popular quasi-random sampling method. The new framework of global sensitivity analysis is mathematically general, and can be applied to a wide range of hydrologic and environmental problems.
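The averaging step can be illustrated in miniature. This is a simplified sketch, a probability-weighted average of per-model sensitivity indices; the indices derived in the study come from a full hierarchical variance decomposition, of which this shows only the combining idea.

```python
import numpy as np

def model_averaged_sensitivity(indices, model_probs):
    """Combine per-model sensitivity indices under model uncertainty by
    weighting each model's indices with its (e.g. posterior) probability.

    indices     : (n_models, n_params) array of sensitivity indices
    model_probs : (n_models,) model probabilities (normalized internally)
    """
    indices = np.asarray(indices, float)
    w = np.asarray(model_probs, float)
    w = w / w.sum()                    # guard against unnormalized weights
    return w @ indices
```

When two models disagree strongly about which parameter dominates, the averaged indices shift toward the more probable model, which is exactly the situation where, per the abstract, single-model indices give a biased parameter ranking.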
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
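The "main effects" idea above, estimating Var(E[Y|X_i])/Var(Y) nonparametrically from a given sample, can be sketched with binned conditional means. This is a crude stand-in for the generalized additive models used in the study, shown only to make the quantity concrete.

```python
import numpy as np

def main_effect_index(xi, y, n_bins=25):
    """Given-data estimate of a first-order ('main effect') sensitivity
    index, S_i = Var(E[Y | X_i]) / Var(Y): bin X_i into equal-count bins,
    take the conditional mean of Y in each bin, and measure the weighted
    variance of those means relative to the total variance."""
    xi, y = np.asarray(xi, float), np.asarray(y, float)
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, xi, side='right') - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == k].mean() for k in range(n_bins)])
    weights = np.array([(idx == k).mean() for k in range(n_bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()
```

In a full analysis such main-effect estimates are compared against total-order Sobol' indices: a large gap between the two flags the parameter interactions that the abstract reports among the snow and melt-pond parameters.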
A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology
Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.
2008-01-01
Accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system since, by default, they hold all other parameters fixed at baseline values. Using the techniques described within, we demonstrate how a multi-dimensional parameter space can be studied globally so all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, both in deterministic and stochastic settings, and propose novel techniques to handle problems encountered during this type of analysis. PMID:18572196
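A sampling-and-index combination often used in this systems-biology setting is Latin hypercube sampling paired with partial rank correlation coefficients (an assumption on our part; the abstract does not name the two index types it compares). A self-contained sketch:

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """Latin hypercube sample on [0,1]^d: each dimension is split into n
    equal strata, each stratum receives exactly one point, and the strata
    are paired randomly across dimensions."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.random((n, d))) / n

def _rank(v):
    """0-based ranks of a 1-D array (ties broken by order)."""
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def prcc(x, y):
    """Partial rank correlation coefficient of each input with the output:
    rank-transform all variables, regress the other inputs out of both X_i
    and Y, and correlate the residuals. Sign gives direction of influence."""
    n, d = x.shape
    rx = np.column_stack([_rank(x[:, j]) for j in range(d)])
    ry = _rank(np.asarray(y))
    out = np.empty(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(rx, i, axis=1)])
        bx = np.linalg.lstsq(others, rx[:, i], rcond=None)[0]
        by = np.linalg.lstsq(others, ry, rcond=None)[0]
        out[i] = np.corrcoef(rx[:, i] - others @ bx, ry - others @ by)[0, 1]
    return out
```

PRCC assumes a monotonic input–output relationship; for strongly non-monotonic models a variance-based index (e.g. eFAST or Sobol') is the usual complement.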
Quantitative global sensitivity analysis of the RZWQM to warrant a robust and effective calibration
NASA Astrophysics Data System (ADS)
Esmaeili, Sara; Thomson, Neil R.; Tolson, Bryan A.; Zebarth, Bernie J.; Kuchta, Shawn H.; Neilsen, Denise
2014-04-01
Sensitivity analysis is a useful tool to identify key model parameters and to quantify simulation errors resulting from parameter uncertainty. The Root Zone Water Quality Model (RZWQM) has been subjected to various sensitivity analyses; however, most of these efforts implemented a local sensitivity analysis method, neglected the nonlinear response, and did not examine the dependency among parameters. In this study we employed a comprehensive global sensitivity analysis to quantify the contribution of 70 model input parameters (35 hydrological parameters and 35 nitrogen cycle parameters) to the uncertainty of key RZWQM outputs relevant to raspberry row crops in Abbotsford, BC, Canada. Specifically, 9 model outputs that capture various vertical-spatial and temporal domains were investigated. A rank transformation method was used to account for the nonlinear behavior of the model. The variance of the model outputs was decomposed into correlated and uncorrelated partial variances to provide insight into parameter dependency and interaction. The results showed that, in general, the field capacity (soil water content at -33 kPa) in the upper 30 cm of the soil horizon had the greatest contribution (>30%) to the estimated uncertainty in water flux and evapotranspiration. The most influential parameters affecting the simulation of soil nitrate content, mineralization, denitrification, nitrate leaching, and plant nitrogen uptake were the transient coefficient of the fast-to-intermediate humus pool, the carbon-to-nitrogen ratio of the fast humus pool, the organic matter decay rate in the fast humus pool, and the field capacity. The correlated contribution to the model output uncertainty was <10% for the set of parameters investigated. The findings from this effort were utilized in two calibration case studies to demonstrate the utility of this global sensitivity analysis to reduce the risk of over-parameterization, and to identify the vertical location of
Development and sensitivity analysis of a global drinking water quality index.
Rickwood, C J; Carr, G M
2009-09-01
The UNEP GEMS/Water Programme is the leading international agency responsible for the development of water quality indicators and maintains the only global database of water quality for inland waters (GEMStat). The protection of source water quality for domestic use (drinking water, abstraction, etc.) was identified by GEMS/Water as a priority for assessment. A composite index was developed to assess source water quality across a range of inland water types, globally, and over time. The approach for development was three-fold: (1) select guidelines from the World Health Organisation that are appropriate for assessing global water quality for human health, (2) select variables from GEMStat that have an appropriate guideline and reasonable global coverage, and (3) determine, on an annual basis, an overall index rating for each station using the water quality index equation endorsed by the Canadian Council of Ministers of the Environment. The index allowed measurement of the frequency and extent to which variables exceeded their respective WHO guidelines at each individual monitoring station included within GEMStat, allowing both spatial and temporal assessment of global water quality. Development of the index was followed by preliminary sensitivity analysis and verification of the index against real water quality data.
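The CCME water quality index equation referenced above combines three factors: scope (F1, the fraction of variables that fail their guideline), frequency (F2, the fraction of individual tests that fail), and amplitude (F3, derived from the normalized sum of excursions beyond the guideline), with the index scaled to 0-100. A minimal sketch, assuming all guidelines are upper limits (as for most WHO health-based maxima):

```python
import math

def ccme_wqi(tests, guidelines):
    """CCME Water Quality Index for one station-year.

    tests: dict variable -> list of measured values
    guidelines: dict variable -> maximum acceptable value (assumed upper limits)
    """
    n_vars = len(tests)
    n_tests = sum(len(v) for v in tests.values())
    failed_vars = failed_tests = 0
    excursions = []
    for var, values in tests.items():
        limit = guidelines[var]
        fails = [v for v in values if v > limit]
        if fails:
            failed_vars += 1
        failed_tests += len(fails)
        excursions += [v / limit - 1.0 for v in fails]

    f1 = 100.0 * failed_vars / n_vars      # scope
    f2 = 100.0 * failed_tests / n_tests    # frequency
    nse = sum(excursions) / n_tests        # normalized sum of excursions
    f3 = nse / (0.01 * nse + 0.01)         # amplitude
    # 1.732 ~ sqrt(3) rescales the 3-factor vector length to 0-100.
    return 100.0 - math.sqrt(f1**2 + f2**2 + f3**2) / 1.732

# All tests within guidelines -> index of 100.
print(ccme_wqi({"nitrate": [5.0, 8.0]}, {"nitrate": 50.0}))
```

Any exceedance lowers the index through all three factors at once, which is what makes the index sensitive to both how often and by how much guidelines are violated.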
SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, Variance-Based Sensitivity Analysis (Sobol'), and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another useful feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/ Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.
Toward a more robust variance-based global sensitivity analysis of model outputs
Tong, C
2007-10-15
Global sensitivity analysis (GSA) measures the variation of a model output as a function of the variations of the model inputs given their ranges. In this paper we consider variance-based GSA methods that do not rely on assumptions about the model structure such as linearity or monotonicity. These variance-based methods decompose the output variance into terms of increasing dimensionality called 'sensitivity indices', first introduced by Sobol' [25]. Sobol' developed a method of estimating these sensitivity indices using Monte Carlo simulations. McKay [13] proposed an efficient method using replicated Latin hypercube sampling to compute the 'correlation ratios' or 'main effects', which have been shown to be equivalent to Sobol's first-order sensitivity indices. Practical issues with using these variance estimators are how to choose adequate sample sizes and how to assess the accuracy of the results. This paper proposes a modified McKay main effect method featuring an adaptive procedure for accuracy assessment and improvement. We also extend our adaptive technique to the computation of second-order sensitivity indices. Details of the proposed adaptive procedure as well as numerical results are included in this paper.
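A minimal sketch of a Monte Carlo estimator for the first-order Sobol' indices discussed above, applied to the Ishigami function, a standard benchmark with known indices (S1 ~ 0.31, S2 ~ 0.44, S3 = 0), used here as a stand-in for a real model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ishigami benchmark: nonlinear, non-monotonic, with known Sobol' indices.
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

n, d = 8192, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# First-order index S_i: replace column i of A with B's column and
# estimate Cov(f(B), f(AB_i)) / Var(f)  (a Saltelli-style estimator).
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(np.mean(fB * (ishigami(ABi) - fA)) / var)

print([round(s, 2) for s in S])
```

Each additional index costs one extra batch of n model runs, which is exactly why sample-size choice and accuracy assessment, the practical issues the paper addresses, matter for expensive models.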
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2015-04-01
Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex, with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based on different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem; (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficient we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
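The Sobol'-sequence sampling step described above can be illustrated with scipy's quasi-Monte Carlo module. The dimensionality and parameter bounds below are illustrative placeholders, not CICE's actual 39 parameters:

```python
import numpy as np
from scipy.stats import qmc

# Sobol' sequence over a (here 3-D, for illustration) parameter space;
# the study itself sampled a 39-dimensional space the same way.
sampler = qmc.Sobol(d=3, scramble=True, seed=1)
unit = sampler.random_base2(m=8)        # 2^8 = 256 low-discrepancy points in [0,1)^3

# Scale the unit hypercube to physical parameter ranges (hypothetical bounds).
lower = np.array([0.1, 0.03, 100.0])
upper = np.array([0.5, 0.30, 500.0])
params = qmc.scale(unit, lower, upper)

print(params.shape)   # (256, 3)
```

Compared with independent random sampling, the low-discrepancy design covers the hypercube more evenly, which is what makes the subsequent emulator fitting and Sobol' index estimation efficient in high dimensions.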
Multi-objective global sensitivity analysis of the WRF model parameters
NASA Astrophysics Data System (ADS)
Quan, Jiping; Di, Zhenhua; Duan, Qingyun; Gong, Wei; Wang, Chen
2015-04-01
Tuning model parameters to match model simulations with observations can be an effective way to enhance the performance of numerical weather prediction (NWP) models such as the Weather Research and Forecasting (WRF) model. However, this is a very complicated process, as a typical NWP model involves many model parameters and many output variables. One must take a multi-objective approach to ensure that all of the major simulated model outputs are satisfactory. This talk presents the results of an investigation of multi-objective parameter sensitivity analysis of the WRF model for different model outputs, including conventional surface meteorological variables such as precipitation, surface temperature, humidity and wind speed, as well as atmospheric variables such as total precipitable water, cloud cover, boundary layer height and outgoing longwave radiation at the top of the atmosphere. The goal of this study is to identify the most important parameters affecting the predictive skill of short-range meteorological forecasts by the WRF model. The study was performed over the Greater Beijing Region of China. A total of 23 adjustable parameters from seven different physical parameterization schemes were considered. Using a multi-objective global sensitivity analysis method, we examined the WRF model parameter sensitivities for 5-day simulations of the aforementioned model outputs. The results show that parameter sensitivities vary with different model outputs, but three to four of the parameters influence all of the model outputs considered. The sensitivity results from this research can serve as the basis for future parameter optimization of the WRF model.
NASA Astrophysics Data System (ADS)
Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.
2015-03-01
Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion of the peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical, Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application, the convergence analysis shows more stable sensitivity coefficients for the Extended FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be
A Protocol for the Global Sensitivity Analysis of Impact Assessment Models in Life Cycle Assessment.
Cucurachi, S; Borgonovo, E; Heijungs, R
2016-02-01
The life cycle assessment (LCA) framework has established itself as the leading tool for the assessment of the environmental impact of products. Several works have established the need to integrate the LCA and risk analysis methodologies, due to their several common aspects. One way to reach such integration is to guarantee that uncertainties in LCA modeling are carefully treated. It has been claimed that more attention should be paid to quantifying the uncertainties present in the various phases of LCA. Though the topic has been attracting increasing attention from practitioners and experts in LCA, there is still a lack of understanding and a limited use of the available statistical tools. In this work, we introduce a protocol for conducting global sensitivity analysis in LCA. The article focuses on life cycle impact assessment (LCIA), and particularly on the relevance of global techniques for the development of trustworthy impact assessment models. We use a novel characterization model developed for the quantification of the impacts of noise on humans as a test case. We show that global SA is fundamental to guarantee that the modeler has a complete understanding of: (i) the structure of the model and (ii) the importance of uncertain model inputs and the interactions among them.
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
NASA Technical Reports Server (NTRS)
Davies, Misty D.; Gundy-Burlet, Karen
2010-01-01
A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and by the assumption of independence among all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., an optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
A Global Analysis of CYP51 Diversity and Azole Sensitivity in Rhynchosporium commune.
Brunner, Patrick C; Stefansson, Tryggvi S; Fountaine, James; Richina, Veronica; McDonald, Bruce A
2016-04-01
CYP51 encodes the target site of the azole class of fungicides widely used in plant protection. Some ascomycete pathogens carry two CYP51 paralogs called CYP51A and CYP51B. A recent analysis of CYP51 sequences in 14 European isolates of the barley scald pathogen Rhynchosporium commune revealed three CYP51 paralogs, CYP51A, CYP51B, and a pseudogene called CYP51A-p. The same analysis showed that CYP51A exhibits a presence/absence polymorphism, with lower sensitivity to azole fungicides associated with the presence of a functional CYP51A. We analyzed a global collection of nearly 400 R. commune isolates to determine if these findings could be extended beyond Europe. Our results strongly support the hypothesis that CYP51A played a key role in the emergence of azole resistance globally and provide new evidence that the CYP51A gene in R. commune has further evolved, presumably in response to azole exposure. We also present evidence for recent long-distance movement of evolved CYP51A alleles, highlighting the risk associated with movement of fungicide resistance alleles among international trading partners.
Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.
Apte, Advait A; Senger, Ryan S; Fong, Stephen S
2014-01-01
Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736
Spatial heterogeneity and sensitivity analysis of crop virtual water content at a global scale
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2015-04-01
In this study, the green and blue virtual water content (VWC) of four staple crops (i.e., wheat, rice, maize, and soybean) is quantified at a high resolution scale for the period 1996-2005, and a sensitivity analysis is performed on the model parameters. In each grid cell, the crop VWC is obtained as the ratio between the total crop evapotranspiration over the growing season and the crop actual yield. The evapotranspiration is determined with a daily soil water balance that takes into account crop and soil properties, production conditions, and climate. The actual yield is estimated using country-based values provided by the FAOSTAT database, multiplied by a coefficient adjusting for the spatial variability within countries. The model improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The overall water use (blue+green) for the global production of the four grains investigated is 2673 km3/yr. Food production almost entirely depends on green water (>90%), but, when applied, irrigation makes production more water efficient, thus requiring lower VWC. The spatial variability of the virtual water content is partly driven by the yield pattern, with an average correlation coefficient of 0.83, and partly by reference evapotranspiration, with a correlation coefficient of 0.27. Wheat shows the highest spatial variability since it is grown under a wide range of climatic conditions, soil properties, and agricultural practices. The sensitivity analysis is performed to understand how uncertainties in the input data propagate and impact the virtual water content accounting. In each cell, fixed changes are introduced to one input parameter at a time, and a sensitivity index, SI, is determined as the ratio between the variation of VWC relative to its baseline value and the variation of the input parameter relative to its reference value. VWC is found to be most sensitive to planting date (PD), followed by the length of
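The one-at-a-time sensitivity index SI defined above (relative change in VWC divided by relative change in the input) can be written as a small helper. The toy VWC model and baseline values below are hypothetical stand-ins, not the study's soil water balance:

```python
# One-at-a-time sensitivity index as defined in the abstract:
# SI = (relative change in VWC) / (relative change in the input parameter).
def sensitivity_index(model, baseline, param, delta_frac=0.1):
    """model: callable(dict) -> VWC; baseline: dict of reference inputs."""
    vwc0 = model(baseline)
    perturbed = dict(baseline, **{param: baseline[param] * (1 + delta_frac)})
    vwc1 = model(perturbed)
    return ((vwc1 - vwc0) / vwc0) / delta_frac

# Toy stand-in: VWC = seasonal evapotranspiration / yield (hypothetical units).
vwc_model = lambda p: p["et"] / p["yield"]
base = {"et": 450.0, "yield": 3.0}

si_et = sensitivity_index(vwc_model, base, "et")
si_yield = sensitivity_index(vwc_model, base, "yield")
print(si_et, si_yield)   # ~ +1.0 and ~ -0.91
```

Because VWC is a ratio, the index is exactly +1 for the numerator and somewhat smaller in magnitude (and negative) for the denominator, matching the intuition that higher yields make production more water efficient.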
Harper, Elizabeth B; Stella, John C; Fremier, Alexander K
2011-06-01
Mechanism-based ecological models are a valuable tool for understanding the drivers of complex ecological systems and for making informed resource-management decisions. However, inaccurate conclusions can be drawn from models with a large degree of uncertainty around multiple parameter estimates if uncertainty is ignored. This is especially true in nonlinear systems with multiple interacting variables. We addressed these issues for a mechanism-based, demographic model of Populus fremontii (Fremont cottonwood), the dominant riparian tree species along southwestern U.S. rivers. Many cottonwood populations have declined following widespread floodplain conversion and flow regulation. As a result, accurate predictive models are needed to analyze effects of future climate change and water management decisions. To quantify effects of parameter uncertainty, we developed an analytical approach that combines global sensitivity analysis (GSA) with classification and regression trees (CART) and Random Forest, a bootstrapping CART method. We used GSA to quantify the interacting effects of the full range of uncertainty around all parameter estimates, Random Forest to rank parameters according to their total effect on model predictions, and CART to identify higher-order interactions. GSA simulations yielded a wide range of predictions, including annual germination frequency of 10-100%, annual first-year survival frequency of 0-50%, and patch occupancy of 0-100%. This variance was explained primarily by complex interactions among abiotic parameters including capillary fringe height, stage-discharge relationship, and floodplain accretion rate, which interacted with biotic factors to affect survival. Model precision was primarily influenced by well-studied parameter estimates with minimal associated uncertainty and was virtually unaffected by parameter estimates for which there are no available empirical data and thus a large degree of uncertainty. Therefore, research to improve
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2015-12-01
Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. This complexity and dimensionality are manifested in the many different factors in EESMs (model parameters, forcings, boundary conditions, etc.) that must be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that the Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
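The variogram analogy at the heart of VARS characterizes sensitivity through gamma(h) = 0.5 * E[(y(x + h) - y(x))^2], the expected squared change in the model response over a lag h along one factor's direction. A minimal Monte Carlo sketch on a toy response surface (this is only the variogram idea, not the VARS algorithm itself, which uses structured star-based sampling):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy response surface standing in for a model output (hypothetical).
def response(x1, x2):
    return np.sin(3 * x1) + 0.1 * x2

# Directional variogram along x1 over [0, 1]^2:
# gamma(h) = 0.5 * E[(y(x1 + h, x2) - y(x1, x2))^2]
def variogram_x1(h, n=20000):
    x1 = rng.uniform(0, 1 - h, n)   # keep x1 + h inside the unit interval
    x2 = rng.uniform(0, 1, n)
    d = response(x1 + h, x2) - response(x1, x2)
    return 0.5 * np.mean(d ** 2)

for h in (0.05, 0.1, 0.3):
    print(h, variogram_x1(h))
```

For small lags gamma(h) behaves like a (squared) derivative-based measure, while its growth across larger lags carries variance-like information, which is the sense in which Morris- and Sobol-type indices emerge as limiting cases.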
NASA Astrophysics Data System (ADS)
Younes, A.; Delay, F.; Fajraoui, N.; Fahs, M.; Mara, T. A.
2016-08-01
The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters.
Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D
2015-01-01
A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
Global sensitivity analysis of the BSM2 dynamic influent disturbance scenario generator.
Flores-Alsina, Xavier; Gernaey, Krist V; Jeppsson, Ulf
2012-01-01
This paper presents the results of a global sensitivity analysis (GSA) of a phenomenological model that generates dynamic wastewater treatment plant (WWTP) influent disturbance scenarios. This influent model is part of the Benchmark Simulation Model (BSM) family and creates realistic dry/wet weather files describing diurnal, weekend and seasonal variations through the combination of different generic model blocks, i.e. households, industry, rainfall and infiltration. The GSA is carried out by combining Monte Carlo simulations and standardized regression coefficients (SRC). Cluster analysis is then applied, classifying the influence of the model parameters into strong, medium and weak. The results show that the method is able to decompose the variance of the model predictions (R² > 0.9) satisfactorily, thus identifying the model parameters with the strongest impact on several flow rate descriptors calculated at different time resolutions. Catchment size (PE) and the production of wastewater per person equivalent (QperPE) are two parameters that strongly influence the yearly average dry weather flow rate and its variability. Wet weather conditions are mainly affected by three parameters: (1) the probability of occurrence of a rain event (Llrain); (2) the catchment size, incorporated in the model as a parameter representing the conversion from mm rain·day⁻¹ to m³·day⁻¹ (Qpermm); and (3) the quantity of rain falling on permeable areas (aH). The case study also shows that in both dry and wet weather conditions the SRC ranking changes when the time scale of the analysis is modified, thus demonstrating the potential to identify the effect of the model parameters on the fast/medium/slow dynamics of the flow rate. The paper ends with a discussion on the interpretation of GSA results and of the advantages of using synthetic dynamic flow rate data for WWTP influent scenario generation. This section also includes general suggestions on how to use the proposed
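The Monte Carlo plus SRC procedure described in this abstract can be sketched in a few lines of Python. The toy model below and the parameter names PE, QperPE and aH are illustrative stand-ins only, not the actual BSM influent generator:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(X):
    # Toy stand-in for the influent generator; PE, QperPE, aH are
    # illustrative names, and aH is deliberately given a weak effect.
    PE, QperPE, aH = X.T
    return PE * QperPE + 0.1 * aH

# Monte Carlo sample of the three inputs
X = rng.uniform(0.5, 1.5, size=(5000, 3))
y = model(X)

# Standardized regression coefficients: regress the standardized output
# on the standardized inputs; when the fit is good (R^2 near 1), SRC_i^2
# approximates the share of output variance attributable to input i.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
r2 = 1.0 - np.sum((ys - Xs @ src) ** 2) / np.sum(ys ** 2)
```

A large |SRC| marks a "strong" parameter in the clustering step; the R² value reported alongside tells you whether the linear decomposition of variance is trustworthy at all.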
A comparison of five forest interception models using global sensitivity and uncertainty analysis
NASA Astrophysics Data System (ADS)
Linhoss, Anna C.; Siegert, Courtney M.
2016-07-01
Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui
2015-11-01
As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models, which estimate nitrogen dynamics under irrigation, are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, for modeling such a complex wastewater-reuse system, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters, and which of their interactions, contribute most to the variance of the model output. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics is reported. The objective was to compare different global sensitivity analysis (GSA) methods with respect to the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was carried out to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one of the soil physical parameters, the van Genuchten air entry parameter, showed the largest sensitivity effects on the major model predictions. These results verify that more effort should be focused on quantifying soil parameters to obtain more accurate nitrogen- and crop-related predictions, and stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of the GSA on a
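The Morris-style elementary-effects screening used as the first step can be illustrated with a simplified radial one-at-a-time design (not the full trajectory design typically used in such studies; the test function is invented):

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Invented test function: x[0] strong, x[1] weaker, x[2] inactive
    return 10.0 * x[0] + x[1] ** 2

k, r, delta = 3, 50, 0.25       # number of inputs, repetitions, step size

effects = np.zeros((r, k))
for t in range(r):
    x = rng.uniform(0, 1 - delta, size=k)   # random base point
    base = model(x)
    for i in range(k):                      # one-at-a-time perturbations
        x2 = x.copy()
        x2[i] += delta
        effects[t, i] = (model(x2) - base) / delta

mu_star = np.abs(effects).mean(axis=0)      # Morris mu*: screening measure
```

Parameters with small μ* are screened out; the survivors would then go into the (far more expensive) variance-based EFAST stage.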
NASA Astrophysics Data System (ADS)
Peeters, L. J. M.; Podger, G. M.; Smith, T.; Pickett, T.; Bark, R. H.; Cuddy, S. M.
2014-09-01
The simulation of routing and distribution of water through a regulated river system with a river management model will quickly result in complex and nonlinear model behaviour. A robust sensitivity analysis increases the transparency of the model and provides both the modeller and the system manager with a better understanding of, and insight into, how the model simulates reality and management operations. In this study, a robust, density-based sensitivity analysis, developed by Plischke et al. (2013), is applied to an eWater Source river management model. This sensitivity analysis methodology is extended to not only account for main effects but also for interaction effects. The combination of sensitivity indices and scatter plots enables the identification of major linear effects as well as subtle minor and nonlinear effects. The case study is an idealized river management model representing typical conditions of the southern Murray-Darling Basin in Australia for which the sensitivity of a variety of model outcomes to variations in the driving forces, inflow to the system, rainfall and potential evapotranspiration, is examined. The model outcomes are most sensitive to the inflow to the system, but the sensitivity analysis identified minor effects of potential evapotranspiration and nonlinear interaction effects between inflow and potential evapotranspiration.
Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud
NASA Astrophysics Data System (ADS)
Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.
2014-12-01
In the context of coupled human and natural systems (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of the future crop prices and precipitation. κ and ν describe the agents' beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ is used to describe the agent's attitude towards the fluctuation of crop profits. Note that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than the roughly 28 days required to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and the water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to
The analysis sensitivity to tropical winds from the Global Weather Experiment
NASA Technical Reports Server (NTRS)
Paegle, J.; Paegle, J. N.; Baker, W. E.
1986-01-01
The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while, in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.
NASA Astrophysics Data System (ADS)
Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.
2014-12-01
Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have over existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping
High-Throughput Analysis of Global DNA Methylation Using Methyl-Sensitive Digestion
Feinweber, Carmen; Knothe, Claudia; Lötsch, Jörn; Thomas, Dominique; Geisslinger, Gerd; Parnham, Michael J.; Resch, Eduard
2016-01-01
DNA methylation is a major regulatory process of gene transcription, and aberrant DNA methylation is associated with various diseases including cancer. Many compounds have been reported to modify DNA methylation states. Despite increasing interest in the clinical application of drugs with epigenetic effects, and the use of diagnostic markers for genome-wide hypomethylation in cancer, large-scale screening systems to measure the effects of drugs on DNA methylation are limited. In this study, we improved the previously established fluorescence polarization-based global DNA methylation assay so that it is more suitable for application to human genomic DNA. Our methyl-sensitive fluorescence polarization (MSFP) assay was highly repeatable (inter-assay coefficient of variation = 1.5%) and accurate (r² = 0.99). According to signal linearity, only 50–80 ng human genomic DNA per reaction was necessary for the 384-well format. MSFP is a simple, rapid approach as all biochemical reactions and final detection can be performed in one well in a 384-well plate without purification steps in less than 3.5 hours. Furthermore, we demonstrated a significant correlation between MSFP and the LINE-1 pyrosequencing assay, a widely used global DNA methylation assay. MSFP can be applied for the pre-screening of compounds that influence global DNA methylation states and also for the diagnosis of certain types of cancer. PMID:27749902
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
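The variogram idea underlying VARS can be illustrated with a toy directional variogram, γ_i(h) = ½·E[(y(x + h·e_i) − y(x))²]: how γ grows with the lag h characterizes sensitivity across scales in the factor space. This is only a sketch of the underlying concept, not the VARS framework itself, and the response function is invented:

```python
import numpy as np

rng = np.random.default_rng(5)

def model(X):
    # Illustrative response: nonlinear in X[:, 0], weakly linear in X[:, 1]
    return X[:, 0] ** 2 + 0.2 * X[:, 1]

# Directional variogram gamma_i(h) = 0.5 * E[(y(x + h*e_i) - y(x))**2],
# estimated by Monte Carlo at a single lag h for each factor i.
n, h = 20000, 0.1
X = rng.uniform(0, 1 - h, size=(n, 2))
gamma = []
for i in range(2):
    Xh = X.copy()
    Xh[:, i] += h
    gamma.append(0.5 * np.mean((model(Xh) - model(X)) ** 2))
```

Evaluating γ_i over a range of lags, rather than one h, is what lets VARS span the spectrum from derivative-based (small h) to variance-based (large h) characterizations of sensitivity.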
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
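The Sobol' first-order indices used in studies like this one are commonly estimated with a pick-freeze (Saltelli-style) sampling scheme. The additive test function below is illustrative only, not the Utah Energy Balance model:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(X):
    # Additive test function (illustrative only; X[:, 2] is inert)
    return X[:, 0] + 0.5 * np.sin(np.pi * X[:, 1])

n, k = 20000, 3
A = rng.uniform(-1, 1, size=(n, k))
B = rng.uniform(-1, 1, size=(n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1 = np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]    # "pick-freeze": replace column i of A with B's
    # Saltelli (2010) estimator of the first-order index S_i
    S1[i] = np.mean(yB * (model(ABi) - yA)) / var_y
```

The same machinery applies whether the "factors" are model parameters or, as in this abstract, coexisting forcing errors: each error characteristic simply becomes one column of the sample matrices.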
El Habachi, Aimad; Moissenet, Florent; Duprey, Sonia; Cheze, Laurence; Dumas, Raphaël
2015-07-01
Sensitivity analysis is a typical part of biomechanical model evaluation. For lower limb multi-body models, sensitivity analyses have been performed mainly on musculoskeletal parameters, more rarely on the parameters of the joint models. This study deals with a global sensitivity analysis achieved on a lower limb multi-body model that introduces anatomical constraints at the ankle, tibiofemoral, and patellofemoral joints. The aim of the study was to take into account the uncertainty of parameters (e.g. 2.5 cm on the positions of the skin markers embedded in the segments, 5° on the orientation of hinge axis, 2.5 mm on the origin and insertion of ligaments) using statistical distributions and propagate it through a multi-body optimisation method used for the computation of joint kinematics from skin markers during gait. This allows identification of the parameters most influential on the minimum of the objective function of the multi-body optimisation (i.e. the sum of the squared distances between measured and model-determined skin marker positions) and on the joint angles and displacements. To quantify this influence, a Fourier-based algorithm of global sensitivity analysis coupled with Latin hypercube sampling is used. This sensitivity analysis shows that some parameters of the motor constraints (that is to say, the distances between measured and model-determined skin marker positions) and of the kinematic constraints strongly influence the joint kinematics obtained from the lower limb multi-body model: for example, the positions of the skin markers embedded in the shank and pelvis, the parameters of the patellofemoral hinge axis, and the parameters of the ankle and tibiofemoral ligaments. The resulting standard deviations on the joint angles and displacements reach 36° and 12 mm. Therefore, personalisation, customisation or identification of these most sensitive parameters of the lower limb multi-body models may be considered as essential. PMID:25783762
Anthony, Neil R.; Berland, Keith M.
2014-01-01
Fluorescence fluctuation methods have become invaluable research tools for characterizing the molecular-level physical and chemical properties of complex systems, such as molecular concentrations, dynamics, and the stoichiometry of molecular interactions. However, information recovery via curve fitting analysis of fluctuation data is complicated by limited resolution and challenges associated with identifying accurate fit models. We introduce a new approach to fluorescence fluctuation spectroscopy that couples multi-modal fluorescence measurements with multi-modal global curve fitting analysis. This approach yields dramatically enhanced resolution and fitting model discrimination capabilities in fluctuation measurements. The resolution enhancement allows the concentration of a secondary species to be accurately measured even when it constitutes only a few percent of the molecules within a sample mixture, an important new capability that will allow accurate measurements of molecular concentrations and interaction stoichiometry of minor sample species that can be functionally important but difficult to measure experimentally. We demonstrate this capability using τFCS, a new fluctuation method which uses simultaneous global analysis of fluorescence correlation spectroscopy and fluorescence lifetime data, and show that τFCS can accurately recover the concentrations, diffusion coefficients, lifetimes, and molecular brightness values for a two component mixture over a wide range of relative concentrations. PMID:24587370
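The core idea of multi-modal global fitting, a single shared parameter constrained simultaneously by two datasets, can be sketched with invented toy model functions (these are not the actual FCS or lifetime models used in τFCS):

```python
import numpy as np

rng = np.random.default_rng(6)

# Two toy "modalities" that share one physical parameter k_true
# (invented functional forms, for illustration only)
t = np.linspace(0.0, 5.0, 200)
k_true = 1.3
y1 = np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)
y2 = 1.0 / (1.0 + k_true * t) + 0.01 * rng.standard_normal(t.size)

def global_sse(k):
    # Global (simultaneous) objective: both datasets constrain the same k,
    # which is what sharpens the estimate relative to fitting either alone
    return (np.sum((y1 - np.exp(-k * t)) ** 2)
            + np.sum((y2 - 1.0 / (1.0 + k * t)) ** 2))

# Brute-force 1-D search stands in for a proper nonlinear least-squares solver
ks = np.linspace(0.5, 2.5, 2001)
k_hat = ks[np.argmin([global_sse(k) for k in ks])]
```

Linking parameters across modalities in this way is what drives the resolution enhancement the abstract describes: a secondary species poorly constrained by one measurement can still be pinned down by the joint objective.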
NASA Technical Reports Server (NTRS)
Bittker, David A.
1996-01-01
A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method. PMID:21656089
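The variance-of-conditional-expectation quantity itself can be estimated directly by binning the input, shown below as a simple stand-in for the MARS meta-model evaluation the authors propose; the test model is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(X):
    # Illustrative test model: X[:, 0] dominates through a nonlinearity
    return X[:, 0] ** 2 + 0.1 * X[:, 1]

X = rng.uniform(0, 1, size=(100_000, 2))
y = model(X)

def vce(x, y, bins=50):
    """Estimate Var(E[y | x]) by averaging y within equal-mass bins of x.
    The ratio vce / Var(y) approximates the first-order Sobol' index."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    counts = np.bincount(idx, minlength=bins)
    cond_means = np.bincount(idx, weights=y, minlength=bins) / counts
    return np.var(cond_means)   # equal-mass bins: unweighted variance

S = [vce(X[:, i], y) / np.var(y) for i in range(2)]
```

A fitted meta-model such as MARS replaces the binning step: once E[y | x] has a closed form in the spline basis, the VCE can be evaluated analytically instead of empirically, which is the computational saving the abstract refers to.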
NASA Astrophysics Data System (ADS)
Shahkarami, Pirouz; Liu, Longcheng; Moreno, Luis; Neretnieks, Ivars
2015-01-01
This study presents an analytical approach to simulate nuclide migration through a channel in a fracture accounting for an arbitrary-length decay chain. The nuclides are retarded as they diffuse in the porous rock matrix and stagnant zones in the fracture. The Laplace transform and similarity transform techniques are applied to solve the model. The analytical solution to the nuclide concentrations at the fracture outlet is governed by nine parameters representing different mechanisms acting on nuclide transport through a fracture, including diffusion into the rock matrices, diffusion into the stagnant water zone, chain decay and hydrodynamic dispersion. Furthermore, to assess how sensitive the results are to parameter uncertainties, the Sobol method is applied in variance-based global sensitivity analyses of the model output. The Sobol indices show how uncertainty in the model output is apportioned to the uncertainty in the model input. This method takes into account both direct effects and interaction effects between input parameters. The simulation results suggest that in the case of pulse injections, ignoring the effect of a stagnant water zone can lead to significant errors in the time of first arrival and the peak value of the nuclides. Likewise, neglecting the parent and modeling its daughter as a single stable species can result in a significant overestimation of the peak value of the daughter nuclide. It is also found that as the dispersion increases, the early arrival time and the peak time of the daughter decrease while the peak value increases. More importantly, the global sensitivity analysis reveals that for time periods greater than a few thousand years, the uncertainty of the model output is more sensitive to the values of the individual parameters than to the interaction between them. Moreover, if one tries to evaluate the true values of the input parameters at the same cost and effort, the determination of priorities should follow a certain
Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas
2013-01-01
Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems like climate, economy and ecosystems it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions which compute the global sensitivity indices of the system dynamically by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of the computational time (up to 12 times). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on Windows and Linux systems, both 32-bit and 64-bit. PMID:24367574
NASA Astrophysics Data System (ADS)
Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael
2016-04-01
Weather index insurance promises financial resilience to farmers struck by harsh weather, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing causes "production basis risk": the selected weather indexes and their thresholds do not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose using rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that even under similar cumulative rainfall and temperature conditions, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.
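A generic linear index-insurance payoff of the kind under discussion, with a strike, a tick size and a cap, can be written in a few lines; the function shape and parameter names are generic illustrations, not the contract specification from the paper:

```python
def payout(index_value, strike, tick, max_payout):
    """Illustrative linear index-insurance payoff: pays in proportion to
    how far the weather index falls below the strike, capped at max_payout.
    (Structure and names are generic, not taken from the paper.)"""
    return min(max_payout, max(0.0, strike - index_value) * tick)

# A severe drought year (index far below strike) hits the cap; a normal
# year (index above strike) pays nothing. Basis risk is precisely the set
# of years where this payout disagrees with the farmer's actual loss.
```

Global sensitivity analysis then asks how the insured's net outcome responds to the contract parameters (strike, tick, cap) and to the choice of index itself.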
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi
2016-06-01
The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model retains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
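A property that makes PDD attractive here is that, for an orthonormal polynomial basis, the output variance is the sum of the squared non-constant expansion coefficients, so Sobol' indices fall out of the coefficients directly. A minimal sketch under that assumption, with hypothetical coefficients (this is not the paper's adaptive sparse algorithm):

```python
def sobol_from_pdd(coeffs):
    """Given coefficients of an orthonormal polynomial (PDD/ANOVA)
    expansion, keyed by the tuple of input indices each term involves
    (() is the constant term), return the total variance and the
    first-order Sobol' indices. Assumes an orthonormal basis, so the
    variance is the sum of squared non-constant coefficients."""
    var = sum(c * c for key, cs in coeffs.items() if key != ()
              for c in cs)
    S = {}
    for key, cs in coeffs.items():
        if len(key) == 1:
            i = key[0]
            S[i] = S.get(i, 0.0) + sum(c * c for c in cs) / var
    return var, S

# Hypothetical 2-input expansion: strong x0 main effect, weak x0-x1 interaction.
var, S = sobol_from_pdd({(): [1.0], (0,): [3.0], (1,): [1.0], (0, 1): [0.5]})
# var = 9 + 1 + 0.25 = 10.25; S[0] = 9/10.25, S[1] = 1/10.25
```

The interaction term (0, 1) contributes to the total variance but to neither first-order index, which is exactly the gap between first-order and total-order Sobol' indices.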
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings focus primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
NASA Astrophysics Data System (ADS)
Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy
2015-04-01
Uncertainty in parameters is a well-known source of model output uncertainty, which undermines model reliability and restricts model application. A large number of parameters, in addition to a lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainty (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, than the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
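The density-based idea can be sketched with a simple CDF-based statistic: compare the conditional output distribution (input restricted to a bin) against the unconditional one via a Kolmogorov-Smirnov distance. This is an illustrative stand-in for the moment-independent estimators compared in the paper, not their exact formulation:

```python
import random

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic (maximum CDF gap)."""
    a, b = sorted(a), sorted(b)
    pts = sorted(set(a + b))
    fa = [sum(v <= p for v in a) / len(a) for p in pts]
    fb = [sum(v <= p for v in b) / len(b) for p in pts]
    return max(abs(x - y) for x, y in zip(fa, fb))

def cdf_based_index(xs, ys, n_bins=5):
    """Moment-independent sensitivity sketch: median KS distance between
    the conditional and unconditional output distributions across bins
    of one input (in the spirit of density-based GSA)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    size = len(xs) // n_bins
    ds = sorted(ks_distance([ys[i] for i in order[b * size:(b + 1) * size]], ys)
                for b in range(n_bins))
    return ds[len(ds) // 2]

random.seed(1)
x1 = [random.random() for _ in range(500)]
x2 = [random.random() for _ in range(500)]
y = [5 * a + 0.1 * b for a, b in zip(x1, x2)]  # y driven mainly by x1
s1 = cdf_based_index(x1, y)  # large: conditioning on x1 shifts the CDF
s2 = cdf_based_index(x2, y)  # small: conditioning on x2 barely matters
```

Wrapping such an estimator in a bootstrap loop, as the paper does, then yields the confidence intervals used to judge robustness.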
NASA Astrophysics Data System (ADS)
Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy
2016-04-01
Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004) [Stive, M. J. F. 2004, How important is global warming for coastal erosion? an editorial comment, Climatic Change, vol. 64, n 12, doi:10.1023/B:CLIM.0000024785.91858. ISSN 0165-0009]. Then, we generate shoreline change scenarios from probabilistic sea-level rise projections based on IPCC scenarios. For scenarios RCP 6.0 and 8.5, and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. On the contrary, the model predictions are unlikely to differ from the current situation under scenario RCP 2.6. To gain insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of shoreline change uncertainty is due to the climate change scenario if all anthropogenic greenhouse-gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
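The Bruun rule discussed above is, in its textbook form, a simple geometric translation of the active beach profile under sea-level rise. A minimal sketch (the parameter values below are invented for illustration, not from the study):

```python
def bruun_retreat(sea_level_rise, active_profile_width, berm_height, closure_depth):
    """Textbook Bruun-rule shoreline retreat R = S * L / (B + h):
    a sea-level rise S translated landward over an active profile of
    width L, berm height B and closure depth h (all in metres)."""
    return sea_level_rise * active_profile_width / (berm_height + closure_depth)

# A gentle profile: 0.5 m of rise over a 1000 m active profile,
# berm 2 m, closure depth 8 m -> 50 m of retreat.
r = bruun_retreat(0.5, 1000.0, 2.0, 8.0)
```

Because L / (B + h) is the inverse of the mean profile slope, gentle slopes amplify retreat strongly, which is why the paper focuses on gently sloping wave-exposed beaches when asking whether this term is detectable against other shoreline-change noise.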
NASA Astrophysics Data System (ADS)
Bounceur, N.; Crucifix, M.; Wilkinson, R. D.
2015-05-01
A global sensitivity analysis is performed to describe the effects of astronomical forcing on the climate-vegetation system simulated by the model of intermediate complexity LOVECLIM in interglacial conditions. The methodology relies on the estimation of sensitivity measures, using a Gaussian process emulator as a fast surrogate of the climate model, calibrated on a set of well-chosen experiments. The outputs considered are the annual mean temperature and precipitation and the growing degree days (GDD). The experiments were run on two distinct land surface schemes to estimate the importance of vegetation feedbacks on climate variance. This analysis provides a spatial description of the variance due to the factors and their combinations, in the form of "fingerprints" obtained from the covariance indices. The results are broadly consistent with the current understanding of Earth's climate response to astronomical forcing. In particular, precession and obliquity are found to contribute equally to GDD in the Northern Hemisphere in LOVECLIM, and the effect of obliquity on the response of Southern Hemisphere temperature dominates precession effects. Precession dominates precipitation changes in subtropical areas. Compared to standard approaches based on a small number of simulations, the methodology presented here allows us to identify more systematically regions susceptible to experiencing rapid climate change in response to the smooth astronomical forcing change. In particular, we find that using interactive vegetation significantly enhances the expected rates of climate change, specifically in the Sahel (up to 50% precipitation change in 1000 years) and in the Canadian Arctic region (up to 3° in 1000 years). None of the tested astronomical configurations were found to induce multiple steady states, but, at low obliquity, we observed the development of an oscillatory pattern that has already been reported in LOVECLIM. Although the mathematics of the analysis are
He, Li; Huang, Gordon; Lu, Hongwei; Wang, Shuo; Xu, Yi
2012-06-15
This paper presents a global uncertainty and sensitivity analysis (GUSA) framework based on global sensitivity analysis (GSA) and generalized likelihood uncertainty estimation (GLUE) methods. Quasi-Monte Carlo (QMC) sampling is employed by GUSA to obtain realizations of uncertain parameters, which are then input to the simulation model for analysis. Compared to GLUE, GUSA can not only evaluate the global sensitivity and uncertainty of modeling parameter sets, but also quantify the uncertainty in modeling prediction sets. Moreover, another advantage of GUSA lies in the alleviation of computational effort, since globally insensitive parameters can be identified and removed from the uncertain-parameter set. GUSA is applied to a practical petroleum-contaminated site in Canada to investigate free product migration and recovery processes under aquifer remediation operations. Results from global sensitivity analysis show that (1) initial free product thickness has the most significant impact on total recovery volume but the least impact on residual free product thickness and recovery rate; (2) total recovery volume and recovery rate are sensitive to residual LNAPL phase saturations and soil porosity. Results from uncertainty predictions reveal that the residual thickness would remain high and almost unchanged after about half a year of the skimmer-well scheme; the rather high residual thickness (0.73-1.56 m after 20 years) indicates that natural attenuation would not be suitable for the remediation. The largest total recovery volume would be from water pumping, followed by vacuum pumping, and then skimming. The recovery rates of the three schemes would rapidly decrease after 2 years (less than 0.05 m³/day), thus short-term remediation is not suggested.
A Sensitivity Analysis of the Impact of Rain on Regional and Global Sea-Air Fluxes of CO2
Shutler, J. D.; Land, P. E.; Woolf, D. K.; Quartly, G. D.
2016-01-01
The global oceans are considered a major sink of atmospheric carbon dioxide (CO2). Rain is known to alter the physical and chemical conditions at the sea surface, and thus influence the transfer of CO2 between the ocean and atmosphere. It can influence gas exchange through an enhanced gas transfer velocity, through the direct export of carbon from the atmosphere to the ocean, by altering the sea skin temperature, and through surface-layer dilution. However, to date, very few studies quantifying these effects on global net sea-air fluxes exist. Here, we include terms for the enhanced gas transfer velocity and the direct export of carbon in calculations of the global net sea-air fluxes, using a 7-year time series of monthly global climate-quality satellite remote sensing observations, model and in-situ data. The use of a non-linear relationship between the effects of rain and wind significantly reduces the estimated impact of rain-induced surface turbulence on the rate of sea-air gas transfer, when compared to a linear relationship. Nevertheless, globally, the rain-enhanced gas transfer and rain-induced direct export increase the estimated annual oceanic integrated net sink of CO2 by up to 6%. Regionally, the variations can be larger, with rain increasing the estimated annual net sink in the Pacific Ocean by up to 15% and altering the monthly net flux by > ± 50%. Based on these analyses, the impacts of rain should be included in the uncertainty analysis of studies that estimate net sea-air fluxes of CO2, as rain can have a considerable impact depending upon the region and timescale. PMID:27673683
Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J
2015-01-01
Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for sediment export. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change driven changes in sediment export.
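The USLE underlying the InVEST sediment model is a simple product of factors, which is why a proportional change in any single factor (such as the cover-management factor C) propagates directly to the predicted loss. A minimal sketch with invented factor values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P
    (rainfall erosivity, soil erodibility, slope length-steepness,
    cover management, support practice). Units follow the chosen
    factor system, e.g. t/ha/yr."""
    return R * K * LS * C * P

# Halving the cover-management factor C halves the predicted loss,
# illustrating why C ranks among the influential, human-controlled
# parameters (all factor values below are invented):
base = usle_soil_loss(R=1200, K=0.3, LS=1.5, C=0.2, P=1.0)
mitigated = usle_soil_loss(R=1200, K=0.3, LS=1.5, C=0.1, P=1.0)
```

The multiplicative structure also explains the climate sensitivity noted in the abstract: changes in extreme-rainfall statistics enter through the erosivity factor R and scale the output one-for-one.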
NASA Astrophysics Data System (ADS)
Harp, D.; Vesselinov, V. V.
2011-12-01
A newly developed methodology for model-based decision analysis is presented. The methodology incorporates a sampling approach, referred to as Agent-Based Analysis of Global Uncertainty and Sensitivity (ABAGUS; Harp & Vesselinov, 2011), that efficiently collects sets of acceptable solutions (i.e. acceptable model parameter sets) for different levels of a model performance metric representing the consistency of model predictions with observations. In this case, the performance metric is based on model residuals (i.e. discrepancies between observations and simulations). ABAGUS collects acceptable solutions from a discretized parameter space and stores them in a KD-tree for efficient retrieval. The parameter space domain (parameter minimum/maximum ranges) and discretization are predefined. On subsequent visits to collected locations, agents are provided with a modified value of the performance metric, and the model solution is not recalculated. The modified values of the performance metric sculpt the response surface (convexities become concavities), repulsing agents from collected regions. This promotes global exploration of the parameter space and discourages reinvestigation of regions of previously collected acceptable solutions. The resulting sets of acceptable solutions are formulated into a decision analysis using concepts from info-gap theory (Ben-Haim, 2006). Using info-gap theory, the decision robustness and opportuneness are quantified, providing measures of the immunity to failure and windfall, respectively, of alternative decisions. The approach is intended for cases where the information is extremely limited, resulting in non-probabilistic uncertainties concerning model properties such as boundary and initial conditions, model parameters, conceptual model elements, etc. The information provided by this analysis is weaker than the information provided by probabilistic decision analyses (i.e. posterior parameter distributions are not produced), however, this
NASA Astrophysics Data System (ADS)
Furfaro, R.; Morris, R. D.; Kottas, A.; Taddy, M.; Ganapol, B. D.
2007-12-01
Analyzing, quantifying and reporting the uncertainty in remotely sensed data products is critical for our understanding of Earth's coupled system. It is the only way in which the uncertainty of further analyses using these data products as inputs can be quantified. Analyzing the source of the data product uncertainties can identify where the models must be improved, or where better input information must be obtained. Here we focus on developing a probabilistic framework for the analysis of uncertainties occurring when satellite data (e.g., MODIS) are employed to retrieve biophysical properties of vegetation. Indeed, the process of remotely estimating vegetation properties involves inverting a Radiative Transfer Model (RTM), as in the case of the MOD15 algorithm, where seven atmospherically corrected reflectance factors are ingested and compared to a set of computed, RTM-based reflectances (a look-up table) to infer the Leaf Area Index (LAI). Since inversion is generally ill-conditioned, and since a-priori information is important in constraining the inverse model, sensitivity analysis plays a key role in defining which parameters have the greatest impact on the computed observation. We develop a framework to perform global sensitivity analysis, i.e., to determine how the output changes as all inputs vary continuously. We used a coupled Leaf-Canopy radiative transfer Model (LCM) to approximate the functional relationship between the observed reflectance and vegetation biophysical parameters. LCM was designed to study the feasibility of detecting leaf/canopy biochemistry using remotely sensed observations and has the unique capability to include leaf biochemistry (e.g., chlorophyll, water, lignin, protein) as input parameters. The influence of LCM input parameters (including canopy morphological and biochemical parameters) on the hemispherical reflectance is captured by computing the "main effects", which give information about the influence of each input, and the "sensitivity
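The look-up-table inversion step described above can be sketched as a nearest-candidate search over precomputed reflectances. The LUT entries below are invented, and a real MOD15-style retrieval also weights band uncertainties and returns a distribution of acceptable LAI values rather than a single minimizer:

```python
def invert_lut(observed, lut):
    """Look-up-table inversion sketch: pick the candidate LAI whose
    modeled reflectances are closest to the observed reflectance
    vector in the least-squares sense. `lut` maps candidate LAI ->
    modeled reflectance tuple (hypothetical values)."""
    def cost(modeled):
        return sum((o - m) ** 2 for o, m in zip(observed, modeled))
    return min(lut, key=lambda lai: cost(lut[lai]))

# Two-band toy LUT (red, NIR reflectance) for three candidate LAI values:
lut = {1.0: (0.050, 0.30), 2.0: (0.040, 0.40), 3.0: (0.035, 0.46)}
best = invert_lut((0.041, 0.39), lut)  # -> 2.0
```

The ill-conditioning mentioned in the abstract shows up here as several LUT entries having nearly equal cost for a noisy observation, which is why sensitivity analysis of the forward model's inputs matters for the retrieval.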
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiological, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
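The elementary-effects idea behind the Morris method can be sketched with a simplified one-at-a-time design. The full method uses trajectory or radial designs and also reports the spread (sigma) of the effects alongside mu*; the toy "crop model" below is invented:

```python
import random

def morris_mu_star(f, k, n_traj=50, delta=0.1, seed=0):
    """Simplified elementary-effects (Morris) screening: for each of
    `n_traj` random base points in [0, 1]^k, perturb one input at a
    time by `delta` and record |EE|; mu* is the mean absolute effect
    per input."""
    rng = random.Random(seed)
    abs_ee = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = [rng.uniform(0, 1 - delta) for _ in range(k)]
        f0 = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            abs_ee[i].append(abs((f(xp) - f0) / delta))
    return [sum(v) / len(v) for v in abs_ee]

# Toy model: input 0 matters strongly, input 1 weakly (non-linearly),
# input 2 not at all -- mu* recovers that ordering.
mu = morris_mu_star(lambda x: 8 * x[0] + 2 * x[1] ** 2 + 0 * x[2], k=3)
```

Screening designs like this are what made the ~82 million-simulation landscape above tractable: mu* ranks inputs at a cost linear in the number of inputs rather than the quadratic-or-worse cost of full variance decomposition.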
Cosmopolitan Sensitivities, Vulnerability, and Global Englishes
ERIC Educational Resources Information Center
Jacobsen, Ushma Chauhan
2015-01-01
This paper is the outcome of an afterthought that assembles connections between three elements: the ambitions of cultivating cosmopolitan sensitivities that circulate vibrantly in connection with the internationalization of higher education, a course on Global Englishes at a Danish university and the sensation of vulnerability. It discusses the…
Sensitivity Analysis Using Risk Measures.
Tsanakas, Andreas; Millossovich, Pietro
2016-01-01
In a quantitative model with uncertain inputs, the uncertainty of the output can be summarized by a risk measure. We propose a sensitivity analysis method based on derivatives of the output risk measure, in the direction of model inputs. This produces a global sensitivity measure, explicitly linking sensitivity and uncertainty analyses. We focus on the case of distortion risk measures, defined as weighted averages of output percentiles, and prove a representation of the sensitivity measure that can be evaluated on a Monte Carlo sample, as a weighted average of gradients over the input space. When the analytical model is unknown or hard to work with, nonparametric techniques are used for gradient estimation. This process is demonstrated through the example of a nonlinear insurance loss model. Furthermore, the proposed framework is extended in order to measure sensitivity to constant model parameters, uncertain statistical parameters, and random factors driving dependence between model inputs.
1992-02-20
SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
Global Sensitivity Measures from Given Data
Elmar Plischke; Emanuele Borgonovo; Curtis L. Smith
2013-05-01
Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures to datasets produced outside a conventional input–output functional framework.
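A common given-data estimator in this spirit bins one input and compares the variance of the conditional output means against the total variance. The sketch below uses that simple variance-based binning estimator rather than the authors' L1-norm statistic, and the test model is invented:

```python
import random
import statistics

def first_order_given_data(xs, ys, n_bins=10):
    """Given-data estimate of a first-order sensitivity index:
    partition the input's range into equal-count bins, compute the
    variance of the conditional output means, and normalize by the
    total output variance. No special sampling design is required --
    any input-output dataset works."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    size = len(xs) // n_bins
    cond_means = [statistics.fmean(ys[i] for i in order[b * size:(b + 1) * size])
                  for b in range(n_bins)]
    return statistics.pvariance(cond_means) / statistics.pvariance(ys)

random.seed(42)
x1 = [random.random() for _ in range(2000)]
x2 = [random.random() for _ in range(2000)]
y = [3 * a + b for a, b in zip(x1, x2)]
s1 = first_order_given_data(x1, y)  # analytic S1 = 9/10 = 0.9
s2 = first_order_given_data(x2, y)  # analytic S2 = 1/10 = 0.1
```

Because the estimator works on whatever input-output pairs are available, it matches the paper's "given data" setting: no extra model runs, hence the minimum computational cost.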
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2001-01-01
The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, the effect of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.
[Structural sensitivity analysis].
Carrera-Hueso, F J; Ramón-Barrios, A
2011-05-01
The aim of this study was to perform a structural sensitivity analysis of a decision model and to identify its advantages and limitations. A previously published model of dinoprostone was modified, taking two scenarios into account: eliminating postpartum hemorrhages, and including both hemorrhages and uterine hyperstimulation among the adverse effects. The result of the structural sensitivity analysis showed the robustness of the underlying model and confirmed the initial results: the intrauterine device is more cost-effective than intracervical dinoprostone gel. Structural sensitivity analyses should be congruent with the situation studied and clinically validated. Although uncertainty may be only slightly reduced, these analyses provide information and add greater validity and reliability to the model.
NASA Astrophysics Data System (ADS)
Vilain, Guillaume; Müller, Christoph; Schaphoff, Sibyll; Lotze-Campen, Hermann; Feulner, Georg
2013-04-01
Nitrogen (N) cycling affects carbon uptake by the terrestrial biosphere and imposes controls on the carbon cycle response to variation in temperature and precipitation. In the absence of carbon-nitrogen interactions, surface warming significantly reduces carbon sequestration in both vegetation and soil by increasing respiration and decomposition (a positive feedback). If plant carbon uptake, however, is assumed to be nitrogen limited, an increase in decomposition leads to an increase in nitrogen availability, stimulating plant growth. The resulting increase in carbon uptake by vegetation can exceed carbon loss from the soil, leading to enhanced carbon sequestration (a negative feedback). Cultivation of biofuel crops is expanding because of its potential for climate mitigation, whereas the environmental impacts of bioenergy production still remain unknown. While carbon payback times are being increasingly investigated, non-CO2 greenhouse gas emissions of bioenergy production have received little attention so far. We introduced a process-based nitrogen cycle to the LPJmL model at the global scale (each grid cell being 0.5° latitude by 0.5° longitude in size). The model captures mechanisms essential for N cycling and their feedbacks on C cycling: the uptake, allocation and turnover of N in plants, N limitation of plant productivity, and soil N transformation including mineralization, N2 fixation, nitrification and denitrification, NH3 volatilization, N leaching and N2O emissions. Our model captures many essential characteristics of C-N interactions and is capable of broadly recreating spatial and temporal variations in N and C dynamics. Here we evaluate LPJmL by comparing the predicted variables with data from sites with sufficient observations to describe ecosystem nitrogen and carbon fluxes and contents and their responses to climate, as well as with estimates of N-dynamics at the global scale. The simulations presented here use no site-specific parameterizations in
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
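The sensitivity and elasticity calculus the abstract refers to can be sketched for a generic projection matrix (the matrix values below are illustrative assumptions, not the killer whale data from the paper). The sensitivity of λ to entry a_ij is v_i·w_j / ⟨v, w⟩, where w and v are the right and left dominant eigenvectors; the elasticity rescales this by a_ij/λ.

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative values only).
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.9]])

eigvals, W = np.linalg.eig(A)
k = np.argmax(eigvals.real)            # dominant (Perron) eigenvalue = lambda
lam = eigvals.real[k]
w = W[:, k].real                       # right eigenvector: stable stage distribution

eigvals_t, V = np.linalg.eig(A.T)      # left eigenvectors of A = right of A^T
k2 = np.argmax(eigvals_t.real)
v = V[:, k2].real                      # reproductive values

# Sensitivity: d(lambda)/d(a_ij) = v_i * w_j / <v, w>
S = np.outer(v, w) / (v @ w)
# Elasticity: proportional (scale-free) sensitivity, e_ij = (a_ij / lambda) * s_ij
E = (A / lam) * S
```

Elasticities sum to 1 over all matrix entries, which is precisely why they are attractive for comparing demographic rates on a proportional scale, and also the source of the scaling subtleties the paper discusses.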
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2002-01-01
The Laser Interferometer Space Antenna (LISA) for the detection of gravitational waves is a very long baseline interferometer which will measure the changes in the distance of a five million kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
Loizou, George D; McNally, Kevin; Jones, Kate; Cocker, John
2015-01-01
Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688
Kim, Nam-Soo; Im, Min-Ji; Nkongolo, Kabwe
2016-08-01
Red maple (Acer rubrum), a common deciduous tree species in Northern Ontario, has shown resistance to soil metal contamination. Previous reports have indicated that this plant does not accumulate metals in its tissue. However, low levels of nickel and copper corresponding to the bioavailable levels in contaminated soils in Northern Ontario cause severe physiological damage. No differentiation between metal-contaminated and uncontaminated populations has been reported based on genetic analyses. The main objective of this study was to assess whether DNA methylation is involved in A. rubrum adaptation to soil metal contamination. Global cytosine and methylation-sensitive amplified polymorphism (MSAP) analyses were carried out in A. rubrum populations from metal-contaminated and uncontaminated sites. The global modified cytosine ratios in genomic DNA revealed a significant decrease in cytosine methylation in genotypes from a metal-contaminated site compared to uncontaminated populations. Other genotypes from a different metal-contaminated site within the same region appear to be recalcitrant to metal-induced DNA alterations even after ≥30 years of tree life exposure to nickel and copper. MSAP analysis showed a high level of polymorphisms in both uncontaminated (77%) and metal-contaminated (72%) populations. Overall, 205 CCGG loci were identified, of which 127 were methylated at either the outer or inner cytosine. No differentiation among populations was established based on several genetic parameters tested. The variations for nonmethylated and methylated loci were compared by analysis of molecular variance (AMOVA). For methylated loci, molecular variance among and within populations was 1.5% and 13.2%, respectively. These values were low (0.6% among populations and 5.8% within populations) for unmethylated loci. Metal contamination is seen to affect methylation of cytosine residues in CCGG motifs in the A. rubrum populations that were analyzed. PMID:27547351
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
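The likelihood machinery underlying such go/no-go sensitivity tests can be illustrated with a minimal sketch (this is the basic probit-style MLE for μ and σ, not Neyer's sequential design itself; the stimulus levels and responses below are hypothetical):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical go/no-go data: stimulus levels, and 1 = response, 0 = no response.
levels    = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
responses = np.array([0,   0,   0,   1,   0,   1,   1,   1])

def neg_log_lik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)            # parameterize to keep sigma positive
    p = norm.cdf((levels - mu) / sigma)  # P(response) for a latent normal threshold
    p = np.clip(p, 1e-12, 1 - 1e-12)     # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[2.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```

The overlap in the data (a failure above a success) is what makes both μ and σ identifiable; Neyer's contribution is choosing each next test level so that this identifiability is reached quickly.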
NASA Astrophysics Data System (ADS)
Figueiro, Thiago; Choi, Kang-Hoon; Gutsch, Manuela; Freitag, Martin; Hohle, Christoph; Tortai, Jean-Hervé; Saib, Mohamed; Schiavone, Patrick
2012-11-01
In electron proximity effect correction (PEC), the quality of a correction is highly dependent on the quality of the model. It is therefore of primary importance to have a reliable methodology for extracting the model parameters and assessing the quality of the model. Among other things, the model describes how the energy of the electrons spreads out in the target material (via the point spread function, PSF) as well as the influence of the resist process. Different models are available in previous studies, along with several approaches to obtaining appropriate values for their parameters; however, these are restricted in complexity or require a prohibitive number of measurements, and each is limited to a particular PSF model. In this work, we propose a straightforward approach to obtaining the parameter values of a PSF. The methodology is general enough to apply to more sophisticated models as well. It focuses on improving the three steps of the model calibration procedure: first, it uses a good set of calibration patterns; second, it secures the optimization step and avoids falling into a local optimum; and finally, it provides an improved analysis of the calibration step, which allows the quality of a model to be quantified and different models to be compared. The methodology described in the paper is implemented as a specific module in a commercial tool.
NASA Technical Reports Server (NTRS)
Fu, L. L.; Chao, Y.
1997-01-01
Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analysis from the National Center for Environmental Prediction (NCEP); observations made by the ERS-1 radar scatterometer.
NASA Astrophysics Data System (ADS)
Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John
2013-01-01
A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide MCPP (Mecoprop) which is mobile and relatively persistent, glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide, and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: MCPP concentration at the drinking water well can be up to 7% of surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark where pesticides are found at higher concentrations at shallow depths and close to streams.
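The Morris screening used here can be sketched in a few lines. This is a simplified radial one-at-a-time variant on the unit cube with a toy function standing in for the transport model (the function, step size, and trajectory count are illustrative assumptions); the μ* statistic (mean absolute elementary effect) ranks input importance.

```python
import numpy as np

def model(x):
    # Toy stand-in: output depends strongly on x0, weakly on x1, not at all on x2.
    return 4.0 * x[0] + 0.4 * np.sin(x[1])

rng = np.random.default_rng(0)
k, r, delta = 3, 50, 0.25              # number of inputs, repetitions, step size

ee = [[] for _ in range(k)]            # elementary effects per input
for _ in range(r):
    x = rng.uniform(0, 1 - delta, size=k)   # random base point in the unit cube
    y0 = model(x)
    for i in rng.permutation(k):            # perturb one input at a time
        xp = x.copy()
        xp[i] += delta
        ee[i].append((model(xp) - y0) / delta)

mu_star = [np.mean(np.abs(e)) for e in ee]  # Morris mu*: screening statistic
```

Inputs with μ* near zero (x2 here) can be screened out cheaply before running a more expensive variance-based analysis, which is the usual role of the Morris method in studies like this one.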
NASA Astrophysics Data System (ADS)
Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.
2011-12-01
Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global ones, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher-order interactions among parameters, which could lead to an incomplete analysis of model behavior. To address this concern, we performed a global sensitivity analysis (GSA) on a commonly applied equation for soil loss, the Revised Universal Soil Loss Equation. USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot- and watershed-scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot- and watershed-scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach that evaluates parameter importance while accounting for higher-order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations, the management of cover crops (C factor) ranks highest among the factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses, in which the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: Mass density of live and dead roots found in the upper inch
Brookes, Victoria J.; Jordan, David; Davis, Stephen; Ward, Michael P.; Heller, Jane
2015-01-01
Introduction: Strains of Shiga-toxin producing Escherichia coli O157 (STEC O157) are important foodborne pathogens in humans, and outbreaks of illness have been associated with consumption of undercooked beef. Here, we determine the most effective intervention strategies to reduce the prevalence of STEC O157 contaminated beef carcasses using a modelling approach. Method: A computational model simulated events and processes in the beef harvest chain. Information from empirical studies was used to parameterise the model. Variance-based global sensitivity analysis (GSA) using the Saltelli method identified variables with the greatest influence on the prevalence of STEC O157 contaminated carcasses. Following a baseline scenario (no interventions), a series of simulations systematically introduced and tested interventions based on influential variables identified by repeated Saltelli GSA, to determine the most effective intervention strategy. Results: Transfer of STEC O157 from hide or gastro-intestinal tract to carcass (improved abattoir hygiene) had the greatest influence on the prevalence of contaminated carcasses. Due to interactions between inputs (identified by Saltelli GSA), combinations of interventions based on improved abattoir hygiene achieved a greater reduction in maximum prevalence than would be expected from an additive effect of single interventions. The most effective combination was improved abattoir hygiene with vaccination, which achieved a greater than ten-fold decrease in maximum prevalence compared to the baseline scenario. Conclusion: Study results suggest that effective interventions to reduce the prevalence of STEC O157 contaminated carcasses should initially be based on improved abattoir hygiene. However, the effect of improved abattoir hygiene on the distribution of STEC O157 concentration on carcasses is an important information gap—further empirical research is required to determine whether reduced prevalence of contaminated carcasses is
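The variance-based Saltelli GSA referred to in several abstracts above can be sketched compactly. This uses the standard Ishigami test function as a stand-in for the application model (an assumption for illustration) and the common pick-and-freeze estimator of the first-order Sobol' index S_i = V[E(Y|X_i)]/V(Y):

```python
import numpy as np

def model(x):
    # Ishigami test function (a = 7, b = 0.1): analytic Sobol' indices are known.
    return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2
            + 0.1 * x[:, 2]**4 * np.sin(x[:, 0]))

rng = np.random.default_rng(1)
n, k = 50_000, 3
A = rng.uniform(-np.pi, np.pi, (n, k))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, k))
yA, yB = model(A), model(B)
var = np.var(np.concatenate([yA, yB]))

S1 = []
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-and-freeze" column swap
    # Saltelli (2010) estimator of the first-order index S_i
    S1.append(np.mean(yB * (model(ABi) - yA)) / var)
```

With this function the analytic values are roughly S1 ≈ 0.31, S2 ≈ 0.44, S3 = 0; interactions (here between x0 and x2) show up as the gap between first-order and total-order indices, which is exactly how the repeated Saltelli GSA in the study detected non-additive intervention effects.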
Sensitivity analysis in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1984-01-01
Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.
Mathew, Shibin; Bartels, John; Banerjee, Ipsita; Vodovotz, Yoram
2014-10-01
The precise inflammatory role of the cytokine interleukin (IL)-6 and its utility as a biomarker or therapeutic target have been the source of much debate, presumably due to the complex pro- and anti-inflammatory effects of this cytokine. We previously developed a nonlinear ordinary differential equation (ODE) model to explain the dynamics of endotoxin (lipopolysaccharide; LPS)-induced acute inflammation and associated whole-animal damage/dysfunction (a proxy for the health of the organism), along with the inflammatory mediators tumor necrosis factor (TNF)-α, IL-6, IL-10, and nitric oxide (NO). The model was partially calibrated using data from endotoxemic C57Bl/6 mice. Herein, we investigated the sensitivity of the area under the damage curve (AUC_D) to the 51 rate parameters of the ODE model for different levels of simulated LPS challenges using a global sensitivity approach called Random Sampling High Dimensional Model Representation (RS-HDMR). We explored sufficient parametric Monte Carlo samples to generate the variance-based Sobol' global sensitivity indices, and found that inflammatory damage was highly sensitive to the parameters affecting the activity of IL-6 during the different stages of acute inflammation. The area under the IL-6 curve (AUC_IL6) showed a bimodal distribution, with the lower peak representing healthy response and the higher peak representing sustained inflammation. Damage was minimal at low AUC_IL6, giving rise to a healthy response. In contrast, intermediate levels of AUC_IL6 resulted in high damage, and this was due to the insufficiency of damage recovery driven by anti-inflammatory responses from IL-10 and the activation of positive feedback sustained by IL-6. At high AUC_IL6, damage recovery was interestingly restored in some population of simulated animals due to the NO-mediated anti-inflammatory responses. These observations suggest that the host's health status during acute inflammation depends in a nonlinear fashion on the magnitude of the inflammatory stimulus
NASA Astrophysics Data System (ADS)
Werisch, Stefan; Lennartz, Franz; Schütze, Niels
2015-04-01
Inverse modeling has become a common approach to infer the parameters of the water retention and hydraulic conductivity functions from observations of the vadose zone state variables during dynamic experiments under varying boundary conditions. This study focuses on the estimation and investigation of the feasibility of effective soil hydraulic properties to describe the soil water flow in an undisturbed 1 m³ lysimeter. The lysimeter is equipped with 6 one-dimensional observation arrays, each consisting of 4 tensiometers and 4 water content probes, leading to 6 replicated one-dimensional observations which establish the calibration database. Methods of global sensitivity analysis and multiobjective calibration strategies have been applied to examine the information content about the soil hydraulic parameters of the Mualem-van Genuchten (MvG) model contained in the individual data sets, to assess the tradeoffs between the different calibration data sets, and to infer effective soil hydraulic properties for each of the arrays. The results show that (1) information about the MvG model parameters decreases with increasing depth, due to effects of overlapping soil layers and reduced soil water dynamics, and (2) parameter uncertainty is affected by correlation between the individual parameters. Despite these difficulties, (3) effective one-dimensional parameter sets, which produce satisfying fits and have acceptable trade-offs, can be identified for all arrays, but (4) the array-specific parameter sets vary significantly and cannot be transferred to simulate the water flow in other arrays, and (5) none of the parameter sets is suitable to simulate the integral water flow within the lysimeter. The results of the study challenge the feasibility of the inversely estimated soil hydraulic properties from multiple point measurements of the soil hydraulic state variables. Relying only on point measurements inverse modeling can lead to promising results regarding the observations
Sensitivity of global terrestrial ecosystems to climate variability.
Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J
2016-03-10
The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.
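The autoregressive approach behind the vegetation sensitivity index can be sketched on synthetic data. Everything below is an illustrative assumption (toy coefficients and white-noise climate, not MODIS or real climate data): a vegetation anomaly is generated with a memory term and an instantaneous climate sensitivity, and both are recovered by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
temp = rng.normal(size=n)            # standardized climate driver (toy series)
evi = np.zeros(n)                    # toy vegetation-index anomaly
for t in range(1, n):
    # memory (autoregressive) term + instantaneous climate response + noise
    evi[t] = 0.6 * evi[t - 1] + 0.8 * temp[t] + 0.1 * rng.normal()

# Fit evi_t ~ evi_{t-1} + temp_t by ordinary least squares
X = np.column_stack([evi[:-1], temp[1:]])
coef, *_ = np.linalg.lstsq(X, evi[1:], rcond=None)
memory, sensitivity = coef
```

A large autoregressive coefficient (strong memory, slow recovery from perturbations) combined with a large climate coefficient is the kind of signature the vegetation sensitivity index aggregates when flagging regions close to critical thresholds.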
Multidisciplinary optimization of controlled space structures with global sensitivity equations
NASA Technical Reports Server (NTRS)
Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.
1991-01-01
A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.
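The global sensitivity equations coupling the disciplines can be sketched on a two-discipline toy problem (the coefficients below are illustrative assumptions, not the geostationary-platform model). Given local partial derivatives of each discipline's output with respect to the design variable and the other discipline's output, the total derivatives solve a small linear system:

```python
import numpy as np

# Two coupled "disciplines" (toy stand-ins for structures and controls):
#   y1 = f1(x, y2) = 0.5*x + 0.2*y2
#   y2 = f2(x, y1) = 2.0*x - 0.3*y1
# Local (partial) sensitivities of each discipline:
df1_dx, df1_dy2 = 0.5, 0.2
df2_dx, df2_dy1 = 2.0, -0.3

# Global Sensitivity Equations: (I - J) @ [dy1/dx, dy2/dx] = [df1/dx, df2/dx],
# where J holds the cross-discipline coupling derivatives.
J = np.array([[0.0,     df1_dy2],
              [df2_dy1, 0.0    ]])
rhs = np.array([df1_dx, df2_dx])
dy_dx = np.linalg.solve(np.eye(2) - J, rhs)
```

The appeal in multidisciplinary optimization is that each discipline computes only its own local derivatives; the linear solve assembles them into system-level derivatives without re-running the coupled analysis per design variable.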
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents the extended forward sensitivity analysis as a method to support uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps compared with other physical parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.
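The idea of treating the time step itself as a sensitivity parameter can be sketched on a scalar ODE (a toy problem, not the authors' benchmark): integrate dy/dt = -a·y with forward Euler, carry the forward sensitivity s = dy/da alongside the state, and estimate dy/d(dt) by refining the step.

```python
import numpy as np

def euler_with_sensitivity(a, dt, t_end, y0=1.0):
    # Forward Euler for dy/dt = -a*y, plus the forward sensitivity
    # s = dy/da, which obeys ds/dt = -a*s - y.
    n = int(round(t_end / dt))
    y, s = y0, 0.0
    for _ in range(n):
        y, s = y + dt * (-a * y), s + dt * (-a * s - y)
    return y, s

a, dt, t_end = 1.0, 0.01, 1.0
y, dy_da = euler_with_sensitivity(a, dt, t_end)       # physical sensitivity

# Treat the time step as another parameter: a finite-difference estimate of
# dy/d(dt) quantifies the discretization-error sensitivity on the same footing.
y_half, _ = euler_with_sensitivity(a, dt / 2, t_end)
dy_ddt = (y - y_half) / (dt / 2)
```

Comparing |dy/d(dt)|·dt against |dy/da|·(uncertainty in a) indicates whether numerical error or parametric uncertainty dominates, which is the comparison the paper's extension enables without a separate grid convergence study.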
Antony, Hiasindh Ashmi; Pathak, Vrushali; Parija, Subhash Chandra; Ghosh, Kanjaksha; Bhattacherjee, Amrita
2016-07-01
Increasing drug resistance in Plasmodium falciparum is an important global health burden because it reverses the malarial control achieved so far. Hence, understanding the molecular mechanisms of drug resistance is the epicenter of the development agenda for novel diagnostic and therapeutic (drugs/vaccines) targets for malaria. In this study, we report global comparative transcriptome profiling (RNA-Seq) to characterize the difference in the transcriptome between the 48-h intraerythrocytic stages of chloroquine-sensitive and chloroquine-resistant P. falciparum (3D7 and Dd2) strains. The two P. falciparum 3D7 and Dd2 strains have distant geographical origins, the Netherlands and Indochina, respectively. The strains were cultured by an in vitro method and harvested at the 48-h intraerythrocytic stage at 5% parasitemia. The whole transcriptome sequencing was performed using the Illumina HiSeq 2500 platform with paired-end reads. The reads were aligned with the reference P. falciparum genome. The alignment percentages for the 3D7, Dd2, and Dd2 w/CQ strains were 85.40%, 89.13%, and 84%, respectively. Nearly 40% of the transcripts had known gene function, whereas the remaining genes (about 60%) had unknown function. The genes involved in immune evasion showed a significant difference between the strains. The differential gene expression between the sensitive and resistant strains was measured using the cuffdiff program with a p-value cutoff of ≤0.05. Collectively, this study identified differentially expressed genes between the 3D7 and Dd2 strains, where we found 89 genes to be upregulated and 227 to be downregulated. By contrast, between the 3D7 and Dd2 w/CQ strains, 45 genes were upregulated and 409 were downregulated. These differentially regulated genes code, by and large, for surface antigens involved in invasion, pathogenesis, and host-parasite interactions, among others. The exhibition of transcriptional differences between these strains of P. falciparum contributes to our
Sensitivity analysis of thermodynamic calculations
NASA Astrophysics Data System (ADS)
Irwin, C. L.; Obrien, T. J.
Iterative solution methods and sensitivity analysis for mathematical models of chemical equilibrium are formally similar. For models solved by a Newton-type iterative scheme, such as the NASA-Lewis CEC code or the R-Gibbs unit of ASPEN, it is shown that extensive sensitivity information is available for approximately the cost of one additional Newton iteration. All matrices and vectors required for implementation of first- and second-order sensitivity analysis in the CEC code are given in an appendix. A simple problem with a known analytical solution is presented to illustrate the method and verify the computer calculations.
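The "one additional Newton iteration" observation rests on reusing the converged Jacobian: at a solution of F(x, p) = 0, the sensitivity solves J * dx/dp = -dF/dp. A hypothetical scalar illustration (not the CEC or ASPEN formulation), with F(x, p) = x² - p so the root is x = √p:

```python
def solve_with_sensitivity(p, x0=1.0, tol=1e-12):
    """Newton solve of F(x, p) = x^2 - p = 0, then dx/dp from the same Jacobian."""
    F  = lambda x: x * x - p      # equilibrium residual; root is x = sqrt(p)
    dF = lambda x: 2.0 * x        # Jacobian (a scalar here)
    x = x0
    for _ in range(100):          # Newton iteration to convergence
        if abs(F(x)) <= tol:
            break
        x -= F(x) / dF(x)
    # sensitivity system J * dx/dp = -dF/dp, reusing the converged Jacobian
    dFdp = -1.0
    dxdp = -dFdp / dF(x)
    return x, dxdp

x, dxdp = solve_with_sensitivity(4.0)   # expect x = 2, dx/dp = 1/(2*sqrt(p)) = 0.25
```

In the matrix case the final step is one extra back-substitution with the already-factorized Jacobian, which is why the marginal cost is roughly that of a single iteration.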
Lombardi, D.P.
1992-08-01
The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative "most likely" and "worst case" meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.
Involute composite design evaluation using global design sensitivity derivatives
NASA Technical Reports Server (NTRS)
Hart, J. K.; Stanton, E. L.
1989-01-01
An optimization capability for involute structures has been developed. Its key feature is the use of global material geometry variables which are so chosen that all combinations of design variables within a set of lower and upper bounds correspond to manufacturable designs. A further advantage of global variables is that their number does not increase with increasing mesh density. The accuracy of the sensitivity derivatives has been verified both through finite difference tests and through the successful use of the derivatives by an optimizer. The state of the art in composite design today is still marked by point design algorithms linked together using ad hoc methods not directly related to a manufacturing procedure. The global design sensitivity approach presented here for involutes can be applied to filament wound shells and other composite constructions using material form features peculiar to each construction. The present involute optimization technology is being applied to the Space Shuttle SRM nozzle boot ring redesigns by PDA Engineering.
Comparative Sensitivity Analysis of Muscle Activation Dynamics.
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative for a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
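A minimal sketch of the local-sensitivity step for an activation ODE, using a Zajac-like first-order model da/dt = (u - a)/tau. The parameter values, the constant excitation u, and the central-difference estimator are illustrative assumptions, not the paper's actual setup (which uses variational and second-order machinery):

```python
def activation(tau, u=1.0, a0=0.0, t_end=0.2, n=2000):
    """Forward-Euler solution of da/dt = (u - a)/tau, evaluated at t_end."""
    dt = t_end / n
    a = a0
    for _ in range(n):
        a += dt * (u - a) / tau
    return a

def local_sensitivity(tau, h=1e-6):
    """Central-difference estimate of d a(t_end) / d tau."""
    return (activation(tau + h) - activation(tau - h)) / (2.0 * h)

s = local_sensitivity(0.05)   # negative: a slower time constant lowers a(t_end)
```

The global extension the abstract mentions amounts to evaluating such sensitivities over finite parameter ranges rather than at a single nominal point.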
Connecting Local and Global Sensitivities in a Mathematical Model for Wound Healing.
Krishna, Nitin A; Pennington, Hannah M; Coppola, Canaan D; Eisenberg, Marisa C; Schugart, Richard C
2015-12-01
The process of wound healing is governed by complex interactions between proteins and the extracellular matrix, involving a range of signaling pathways. This study aimed to formulate, quantify, and analyze a mathematical model describing interactions among matrix metalloproteinases (MMP-1), their inhibitors (TIMP-1), and extracellular matrix in the healing of a diabetic foot ulcer. De-identified patient data for modeling were taken from Muller et al. (Diabet Med 25(4):419-426, 2008), a study that collected average physiological data for two patient subgroups: "good healers" and "poor healers," where classification was based on rate of ulcer healing. Model parameters for the two patient subgroups were estimated using least squares. The model and parameter values were analyzed by conducting a steady-state analysis and both global and local sensitivity analyses. The global sensitivity analysis was performed using Latin hypercube sampling and partial rank correlation analysis, while local analysis was conducted through a classical sensitivity analysis followed by an SVD-QR subset selection. We developed a "local-to-global" analysis to compare the results of the sensitivity analyses. Our results show that the sensitivities of certain parameters are highly dependent on the size of the parameter space, suggesting that identifying physiological bounds may be critical in defining the sensitivities. PMID:26597096
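The sampling half of the Latin-hypercube-plus-rank-correlation workflow can be sketched with the standard library alone. The two-parameter linear response below is an assumed toy model, and plain Spearman correlation stands in for full PRCC (which additionally partials out the other parameters before correlating ranks):

```python
import random

def latin_hypercube(n, dims, rng):
    """One stratified draw per interval [i/n, (i+1)/n) in every dimension."""
    cols = []
    for _ in range(dims):
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        cols.append(col)
    return list(zip(*cols))               # n points in the unit hypercube

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(xs, ys):
    """Rank correlation (Pearson correlation of the rank vectors)."""
    rx, ry = ranks(xs), ranks(ys)
    m = (len(xs) - 1) / 2.0               # mean rank
    cov = sum((a - m) * (b - m) for a, b in zip(rx, ry))
    var = sum((a - m) ** 2 for a in rx)   # rank variance is identical for rx and ry
    return cov / var

rng = random.Random(0)
sample = latin_hypercube(200, 2, rng)
y = [3.0 * k1 - k2 for k1, k2 in sample]  # k1 dominates the toy response
rho1 = spearman([p[0] for p in sample], y)
rho2 = spearman([p[1] for p in sample], y)
```

Here rho1 comes out strongly positive and rho2 weakly negative, matching the toy model's construction; in the paper's setting, each parameter's rank correlation against the healing output plays this role.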
Computational methods for global/local analysis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.
1992-01-01
Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.
On the climate sensitivity in global aqua-planet simulations
NASA Astrophysics Data System (ADS)
Sušelj, Kay; Teixeira, João
2015-04-01
A number of recent studies conclude that the uncertainty of cloud radiative effects in global circulation models (GCMs) with respect to imposed warming is on the same order of magnitude as the radiative forcing due to the increase in greenhouse gases since the industrial revolution. This uncertainty persists over generations of GCMs and imposes a key limitation on better understanding of the climate sensitivity of the whole coupled Earth system. Because physical processes in the atmosphere are highly nonlinear and coupled, it is not well understood which processes are at the heart of the uncertainty problem. To shed light on this question, we perform a series of global aqua-planet simulations with prescribed sea surface temperature (SST) using the Weather Research and Forecasting (WRF) Model. This series of simulations represents a simplified yet realistic framework in which climate change is represented by an increase in the SST. We investigate the sensitivity of the WRF model climate response (in particular clouds) as a function of different combinations of the dynamical and physical parameterization options. We show that physical parameterizations are responsible for the majority of the uncertainty of the WRF model response. Specifically, we find that WRF is highly sensitive to the parameterization of turbulent mixing, which depends on the combination of boundary layer and convection parameterizations. We anticipate that these findings will be helpful for more focused development of GCMs.
Global thermohaline circulation. Part 1: Sensitivity to atmospheric moisture transport
Wang, X.; Stone, P.H.; Marotzke, J.
1999-01-01
A global ocean general circulation model of idealized geometry, combined with an atmospheric model based on observed transports of heat, momentum, and moisture, is used to explore the sensitivity of the global conveyor belt circulation to the surface freshwater fluxes, in particular the effects of meridional atmospheric moisture transports. The numerical results indicate that the equilibrium strength of the North Atlantic Deep Water (NADW) formation increases as the global freshwater transports increase. However, the global deep water formation--that is, the sum of the NADW and the Southern Ocean Deep Water formation rates--is relatively insensitive to changes of the freshwater flux. Perturbations to the meridional moisture transports of each hemisphere identify equatorially asymmetric effects of the freshwater fluxes. The results are consistent with box model results that the equilibrium NADW formation is primarily controlled by the magnitude of the Southern Hemisphere freshwater flux. However, the results show that the Northern Hemisphere freshwater flux has a strong impact on the transient behavior of the North Atlantic overturning. Increasing this flux leads to a collapse of the conveyor belt circulation, but the collapse is delayed if the Southern Hemisphere flux also increases. The perturbation experiments also illustrate that the rapidity of collapse is affected by random fluctuations in the wind stress field.
Sensitivity and Uncertainty Analysis Shell
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some process for which input is uncertain, and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which inputs to the process model are to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample, and the user-supplied process model analyzes the sample. The SUNS post-processor displays statistical results from any existing file that contains sampled input and output values.
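The loose-coupling pattern SUNS describes, sample generation, a user-supplied process model, and a post-processing step, can be sketched as below. The input distributions and the one-line "process model" are assumptions for illustration only, not anything shipped with SUNS:

```python
import random
import statistics

def sample_inputs(n, rng):
    """Draw the uncertain inputs: a normal 'load' and a uniform 'efficiency'."""
    return [(rng.gauss(10.0, 1.0), rng.uniform(0.8, 1.0)) for _ in range(n)]

def process_model(load, efficiency):
    """Stand-in for the user-supplied code the shell would be coupled to."""
    return load * efficiency

rng = random.Random(42)
outputs = [process_model(*x) for x in sample_inputs(5000, rng)]
mean = statistics.mean(outputs)       # expected near 10.0 * 0.9 = 9.0
stdev = statistics.stdev(outputs)
```

The three pieces are deliberately independent: swapping in a real simulation code changes only `process_model`, which is the point of the shell architecture.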
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
Ellouze, M; Gauchi, J-P; Augustin, J-C
2011-06-01
The aim of this study was to apply a global sensitivity analysis (SA) method in model simplification and to evaluate (eO)®, a biological Time Temperature Integrator (TTI), as a quality and safety indicator for cold smoked salmon (CSS). Models were thus developed to predict the evolution of Listeria monocytogenes and the indigenous food flora in CSS and to predict the TTI's endpoint. A global SA was then applied to the three models to identify the less important factors and simplify the models accordingly. Results showed that the subset of the most important factors of the three models was mainly composed of the durations and temperatures of two chill chain links, out of the control of the manufacturers: the domestic refrigerator and the retail/cabinet links. Then, the simplified versions of the three models were run with 10⁴ time-temperature profiles representing the variability associated with the microbial behavior, the TTI evolution and the French chill chain characteristics. The results were used to assess the distributions of the microbial contaminations obtained at the TTI endpoint and at the end of the simulated profiles and proved that, in the case of poor storage conditions, the TTI use could reduce the number of unacceptable foods by 50%. PMID:21511136
Stiff DAE integrator with sensitivity analysis capabilities
2007-11-26
IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee
2014-06-16
Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying due to season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model generates a realistic product time-temperature history, predicting product temperature evolution from the thermostat setting and the ambient temperature. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables, such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. A refined analysis then revealed the importance of consumer behavior on Listeria monocytogenes exposure. PMID:24786551
Data fusion qualitative sensitivity analysis
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables.
Roy, Pierre-Olivier; Deschênes, Louise; Margni, Manuele
2012-08-01
This paper presents a novel life cycle impact assessment (LCIA) approach to derive spatially explicit soil sensitivity indicators for terrestrial acidification. This global approach is compatible with a subsequent damage assessment, making it possible to consistently link the developed midpoint indicators with a later endpoint assessment along the cause-effect chain, a prerequisite in LCIA. Four different soil chemical indicators were preselected to evaluate sensitivity factors (SFs) for regional receiving environments at the global scale, namely the base cations to aluminum ratio, aluminum to calcium ratio, pH, and aluminum concentration. These chemical indicators were assessed using the PROFILE geochemical steady-state soil model and a global data set of regional soil parameters developed specifically for this study. Results showed that the most sensitive regions (i.e., where SF is maximized) are in Canada, northern Europe, the Amazon, central Africa, and East and Southeast Asia. However, the approach is not bereft of uncertainty. Indeed, a Monte Carlo analysis showed that input parameter variability may induce SF variations of more than 6 orders of magnitude for certain chemical indicators. These findings improve current practices and enable the development of regional characterization models to assess regional life cycle inventories in a global economy.
A review of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
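The "simplest approach" the review mentions, one-at-a-time (OAT) perturbation, can be sketched as follows. The quadratic three-parameter test function and the choice of a relative (elasticity-style) sensitivity measure are assumptions for illustration:

```python
def model(a, b, c):
    return a ** 2 + 10.0 * b + 0.1 * c   # assumed test function

def oat_sensitivity(nominal, rel_step=0.01):
    """Relative output change per relative input change, one factor at a time."""
    base = model(*nominal)
    sens = {}
    for i, name in enumerate(("a", "b", "c")):
        perturbed = list(nominal)
        perturbed[i] *= 1.0 + rel_step
        sens[name] = (model(*perturbed) - base) / (base * rel_step)
    return sens

s = oat_sensitivity((2.0, 1.0, 5.0))     # b dominates, c is nearly inert
```

OAT rankings are local to the nominal point and miss interactions, which is exactly why the review goes on to discuss correlation- and regression-based global alternatives.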
Climate sensitivity: Analysis of feedback mechanisms
NASA Astrophysics Data System (ADS)
Hansen, J.; Lacis, A.; Rind, D.; Russell, G.; Stone, P.; Fung, I.; Ruedy, R.; Lerner, J.
We study climate sensitivity and feedback processes in three independent ways: (1) by using a three-dimensional (3-D) global climate model for experiments in which solar irradiance S0 is increased 2 percent or CO2 is doubled, (2) by using the CLIMAP climate boundary conditions to analyze the contributions of different physical processes to the cooling of the last ice age (18K years ago), and (3) by using estimated changes in global temperature and the abundance of atmospheric greenhouse gases to deduce an empirical climate sensitivity for the period 1850-1980. Our 3-D global climate model yields a warming of ~4°C for either a 2 percent increase of S0 or doubled CO2. This indicates a net feedback factor of f = 3-4, because either of these forcings would cause the earth's surface temperature to warm 1.2-1.3°C to restore radiative balance with space, if other factors remained unchanged. Principal positive feedback processes in the model are changes in atmospheric water vapor, clouds and snow/ice cover. Feedback factors calculated for these processes, with atmospheric dynamical feedbacks implicitly incorporated, are respectively f(water vapor) ~ 1.6, f(clouds) ~ 1.3 and f(snow/ice) ~ 1.1, with the latter mainly caused by sea ice changes. A number of potential feedbacks, such as land ice cover, vegetation cover and ocean heat transport, were held fixed in these experiments. We calculate land ice, sea ice and vegetation feedbacks for the 18K climate to be f(land ice) ~ 1.2-1.3, f(sea ice) ~ 1.2 and f(vegetation) ~ 1.05-1.1 from their effect on the radiation budget at the top of the atmosphere. This sea ice feedback at 18K is consistent with the smaller f(snow/ice) ~ 1.1 in the S0 and CO2 experiments, which applied to a warmer earth with less sea ice. We also obtain an empirical estimate of f = 2-4 for the fast feedback processes (water vapor, clouds, sea ice) operating on 10-100 year time scales by comparing the cooling due to slow or specified changes (land ice, CO2
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Recent developments in structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Adelman, Howard M.
1988-01-01
Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
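The Morris-style screening that VARS generalizes can be sketched with a simplified radial one-at-a-time design (proper Morris trajectories reuse points between steps; this version, the 3-factor test function, and the deliberately inert second factor are all assumptions for illustration):

```python
import random

def f(x):
    return x[0] ** 2 + 0.0 * x[1] + 5.0 * x[2]   # x[1] is deliberately inert

def morris_mu_star(func, dims, n_points, delta=0.25, rng=None):
    """Mean absolute elementary effect per factor (radial one-at-a-time design)."""
    rng = rng or random.Random(0)
    totals = [0.0] * dims
    for _ in range(n_points):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(dims)]
        base = func(x)
        for i in range(dims):
            xp = list(x)
            xp[i] += delta                        # one step in factor i only
            totals[i] += abs(func(xp) - base) / delta
    return [t / n_points for t in totals]

mu = morris_mu_star(f, 3, 200)   # mu[1] ~ 0 flags the inert factor
```

The mu* values are step-size dependent, one facet of the "ambiguous characterization of sensitivity" the abstract criticizes: VARS addresses this by examining how such effects vary across a whole spectrum of perturbation scales.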
Structural sensitivity analysis: Methods, applications and needs
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.
1984-01-01
Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.
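One of the techniques listed, finite-difference step-size selection, can be sketched as iterative halving of the step until successive central-difference estimates agree to a tolerance. This is a crude stand-in for the algorithms the paper reviews; the test function and tolerance are illustrative assumptions:

```python
import math

def central_diff(f, x, h):
    # second-order central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def select_step(f, x, h=0.1, tol=1e-6, max_halvings=20):
    # halve the step until successive central-difference estimates
    # agree to within tol, then return the converged estimate
    prev = central_diff(f, x, h)
    for _ in range(max_halvings):
        h /= 2
        cur = central_diff(f, x, h)
        if abs(cur - prev) < tol:
            return cur, h
        prev = cur
    return prev, h

deriv, step = select_step(math.sin, 1.0)
# the exact derivative is cos(1.0) ~ 0.5403
```

Real step-size selectors also guard against the round-off error that dominates when h becomes too small, balancing it against truncation error.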
Sensitivity Analysis for Some Water Pollution Problems
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors, and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method for carrying out sensitivity analysis, demonstrated with an application to a water pollution problem. The model involves the shallow-water equations and an equation for the pollutant concentration, discretized using a finite-volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
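For the linear static case, sensitivity derivatives follow from differentiating K u = f, which gives K (du/dp) = df/dp − (dK/dp) u: each derivative costs only one extra solve with the already-assembled stiffness matrix, the key to efficient reanalysis. A toy two-spring sketch (the spring chain, stiffnesses, and load are assumptions for illustration):

```python
def solve2(A, b):
    # exact 2x2 solve via Cramer's rule
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def displacement_sensitivity(k1, k2, f=(0.0, 1.0)):
    # two-spring chain, fixed at one end, loaded at the tip:
    # K u = f with K assembled from stiffnesses k1, k2
    K = [[k1 + k2, -k2], [-k2, k2]]
    u = solve2(K, list(f))
    # direct differentiation: K (du/dk2) = -(dK/dk2) u
    dK = [[1.0, -1.0], [-1.0, 1.0]]
    rhs = [-(dK[0][0] * u[0] + dK[0][1] * u[1]),
           -(dK[1][0] * u[0] + dK[1][1] * u[1])]
    return u, solve2(K, rhs)

u, du_dk2 = displacement_sensitivity(100.0, 50.0)
# analytic check: tip displacement f/k1 + f/k2 = 0.03,
# and d(u_tip)/dk2 = -f/k2^2 = -4e-4
```

The same structure carries over to the eigenvalue problems mentioned above, where the derivative of an eigenvalue involves the eigenvector and the parameter derivatives of the stiffness and mass matrices.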
Sensitivity of global river discharges under Holocene and future climate conditions
NASA Astrophysics Data System (ADS)
Aerts, J. C. J. H.; Renssen, H.; Ward, P. J.; de Moel, H.; Odada, E.; Bouwer, L. M.; Goosse, H.
2006-10-01
A comparative analysis of global river basins shows that some river discharges are more sensitive to future climate change for the coming century than to natural climate variability over the last 9000 years. In these basins (Ganges, Mekong, Volta, Congo, Amazon, Murray-Darling, Rhine, Oder, Yukon) future discharges increase by 6-61%. These changes are of similar magnitude to changes over the last 9000 years. Some rivers (Nile, Syr Darya) experienced strong reductions in discharge over the last 9000 years (17-56%), but show much smaller responses to future warming. The simulation results for the last 9000 years are validated with independent proxy data.
Coal Transportation Rate Sensitivity Analysis
2005-01-01
On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
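A minimal locally weighted (LOESS-style) smoother shows why such methods reveal nonlinear sensitivities that linear regression misses: for a U-shaped input-output relation, the smooth explains most of the output variance while a straight-line fit explains almost none. This sketch uses a tricube-weighted local linear fit (the toy data and span are assumptions, not the authors' stepwise procedure):

```python
import random

def loess_at(x0, xs, ys, span=0.4):
    # local linear fit at x0 using tricube weights over the nearest
    # span-fraction of the data (a minimal LOESS-style smoother)
    n = len(xs)
    k = max(2, int(span * n))
    idx = sorted(range(n), key=lambda i: abs(xs[i] - x0))[:k]
    d = abs(xs[idx[-1]] - x0) or 1e-12
    w = [(1 - (abs(xs[i] - x0) / d) ** 3) ** 3 for i in idx]
    sw = sum(w)
    sx = sum(wi * xs[i] for wi, i in zip(w, idx))
    sy = sum(wi * ys[i] for wi, i in zip(w, idx))
    sxx = sum(wi * xs[i] ** 2 for wi, i in zip(w, idx))
    sxy = sum(wi * xs[i] * ys[i] for wi, i in zip(w, idx))
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    intercept = (sy - slope * sx) / sw
    return intercept + slope * x0

random.seed(1)
xs = [random.random() for _ in range(300)]
ys = [(x - 0.5) ** 2 + random.gauss(0, 0.01) for x in xs]  # U-shaped response

fit = [loess_at(x, xs, ys) for x in xs]
ybar = sum(ys) / len(ys)
r2 = 1 - (sum((y - f) ** 2 for y, f in zip(ys, fit))
          / sum((y - ybar) ** 2 for y in ys))
# r2 is close to 1; an ordinary linear regression on this U-shaped
# relation would explain almost none of the variance
```

In a sampling-based analysis, the variance explained by the smooth of each input serves as its sensitivity measure, which is the idea the paper's stepwise procedures build on.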
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
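For two coupled disciplines a = A(s, x) and s = S(a, x), the Global Sensitivity Equations assemble local partial sensitivities into total derivatives by solving a small linear system. A scalar sketch (the coupling coefficients and toy system are illustrative assumptions):

```python
def gse_total_derivatives(dA_ds, dS_da, dA_dx, dS_dx):
    # Global Sensitivity Equations for two coupled scalar disciplines
    # a = A(s, x), s = S(a, x):
    # [ 1       -dA/ds ] [ da/dx ]   [ dA/dx ]
    # [ -dS/da   1     ] [ ds/dx ] = [ dS/dx ]
    det = 1 - dA_ds * dS_da
    da_dx = (dA_dx + dA_ds * dS_dx) / det
    ds_dx = (dS_dx + dS_da * dA_dx) / det
    return da_dx, ds_dx

# toy coupled system (an assumption): a = 0.5*s + x, s = 0.2*a + 2*x
da, ds = gse_total_derivatives(0.5, 0.2, 1.0, 2.0)
# eliminating the coupling by hand gives da/dx = 2/0.9, ds/dx = 2.2/0.9
```

In the paper's aero-structural setting the scalars become Jacobian blocks, and the modal reduction shrinks those blocks before the system is solved.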
River Runoff Sensitivity in Eastern Siberia to Global Climate Warming
NASA Astrophysics Data System (ADS)
Georgiadi, A. G.; Milyukova, I. P.; Kashutina, E.
2008-12-01
Over the last several decades, significant climate warming has been observed in the permafrost regions of Eastern Siberia, including rises in both air temperature and precipitation. These regional climate changes are accompanied by changes in river runoff. Analysis of the data shows that over the past 25 years the largest contributions to the annual runoff increase in the lower reaches of the Lena (Kyusyur) have been made (in descending order) by the Lena watershed above Tabaga, the Aldan river (Okhotsky Perevoz), and the Vilyui river (Khatyryk-Khomo). A similar relation also holds for floods, with the seasonal runoff of the Vilyui river slightly decreased. Completely different relations are noted in winter, when a substantial runoff increase is recorded in the lower reaches of the Lena river; here the major contribution to the winter runoff increase at the Lena outlet comes from the winter runoff increase on the Vilyui river. In contrast, summer-fall runoff in the lower reaches of the Lena tends to decrease, similar to the trend exhibited by the Vilyui, while the runoff of the Lena (Tabaga) and Aldan (Verkhoyansky Perevoz) rivers increases. Hydrological responses to climate warming have been evaluated for the plain part of the Lena river basin using a macroscale hydrological model with a simplified description of processes, developed at the Institute of Geography of the Russian Academy of Sciences. Two atmosphere-ocean global circulation models included in the IPCC assessments (ECHAM4/OPY3 and GFDL-R30) were used as scenarios of future global climate. According to the results of hydrological modeling, the anthropogenic climate warming expected in the twenty-first century can bring a more significant runoff increase in the Lena river basin than the recent one.
Global ocean wind power sensitivity to surface layer stability
NASA Astrophysics Data System (ADS)
Capps, Scott B.; Zender, Charles S.
2009-05-01
Global ocean wind power has recently been assessed (W. T. Liu et al., 2008) using scatterometry-based 10 m winds. We characterize, for the first time, wind power at 80 m (typical wind turbine hub height) above the global ocean surface, and account for the effects of surface layer stability. Accounting for realistic turbine height and atmospheric stability increases mean global ocean wind power by +58% and -4%, respectively. Our best estimate of mean global ocean wind power is 731 W m-2, about 50% greater than the 487 W m-2 based on previous methods. 80 m wind power is 1.2-1.5 times 10 m power equatorward of 30° latitude, between 1.4 and 1.7 times 10 m power in wintertime storm track regions and >6 times 10 m power in stable regimes east of continents. These results are relatively insensitive to methodology as wind power calculated using a fitted Weibull probability density function is within 10% of power calculated from discrete wind speed measurements over most of the global oceans.
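The Weibull-based estimate mentioned above follows from E[v³] = c³ Γ(1 + 3/k), so the mean wind power density is 0.5 ρ c³ Γ(1 + 3/k); for a well-fitted distribution it should agree with the discrete-sample estimate to within roughly 10%, as the paper reports. A sketch (the shape and scale values and the synthetic record are illustrative assumptions):

```python
import math, random

RHO = 1.225  # air density, kg m^-3

def power_density_weibull(c, k):
    # mean wind power density 0.5 * rho * E[v^3], with v ~ Weibull(c, k)
    # and E[v^3] = c^3 * Gamma(1 + 3/k)
    return 0.5 * RHO * c ** 3 * math.gamma(1 + 3 / k)

def power_density_discrete(speeds):
    # the same quantity computed from discrete wind speed measurements
    return 0.5 * RHO * sum(v ** 3 for v in speeds) / len(speeds)

# draw a synthetic "measured" record by inverse-CDF sampling
random.seed(0)
c, k = 8.0, 2.0
speeds = [c * (-math.log(1.0 - random.random())) ** (1.0 / k)
          for _ in range(20000)]
```

Because power depends on the cube of the speed, the fitted tail of the distribution matters much more here than it does for mean wind speed.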
Sensitivity analysis of the critical speed in railway vehicle dynamics
NASA Astrophysics Data System (ADS)
Bigoni, D.; True, H.; Engsig-Karup, A. P.
2014-05-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
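Variance-based indices of the kind used here can be estimated with a pick-freeze (Saltelli-type) scheme: evaluate the model on two independent sample matrices and on hybrids that swap one column at a time. A minimal first-order-index sketch for an additive toy function (the function, sample size, and estimator choice are assumptions; this is not the authors' cubature/HDMR implementation):

```python
import random

def sobol_first_order(f, d, n=4096, seed=0):
    # pick-freeze estimate of first-order indices S_i = V_i / V(Y)
    # using sample matrices A, B and hybrids A_B^i (column i from B)
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mu = sum(yA) / n
    var = sum((y - mu) ** 2 for y in yA) / n
    S = []
    for i in range(d):
        ABi = [A[j][:i] + [B[j][i]] + A[j][i + 1:] for j in range(n)]
        yABi = [f(x) for x in ABi]
        # Saltelli-style estimator of the partial variance V_i
        Vi = sum(yb * (yabi - ya)
                 for ya, yb, yabi in zip(yA, yB, yABi)) / n
        S.append(Vi / var)
    return S

# additive toy function Y = x1 + 2*x2 on [0,1]^2: S1 = 0.2, S2 = 0.8
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], d=2)
```

Applied to the bogie model, each index would apportion the variance of the critical speed among the suspension parameters in exactly this way.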
Adjoint sensitivity analysis of an ultrawideband antenna
Stephanson, M B; White, D A
2011-07-28
The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
Sensitivity Analysis in the Model Web
NASA Astrophysics Data System (ADS)
Jones, R.; Cornford, D.; Boukouvalas, A.
2012-04-01
The Model Web, and in particular the Uncertainty-enabled Model Web being developed in the UncertWeb project, aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular, model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web, users might not have prior experience of the model behaviour in detail. It would therefore be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed, to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular, the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance-based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary, the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that, within the framework, these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator.
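The emulation idea can be illustrated with the simplest possible surrogate: evaluate the expensive model on a design of points once, then answer all subsequent queries by interpolation. The piecewise-linear emulator and toy model below are assumptions for illustration; UncertWeb-style tools use statistical emulators such as Gaussian processes, which also quantify their own approximation error:

```python
import math, random

def expensive_model(x):
    # stand-in for a slow, web-exposed simulator (an assumption)
    return math.sin(2 * math.pi * x) + 0.5 * x

def build_emulator(f, n=41):
    # evaluate the model once on a regular design, then answer all
    # later queries on [0, 1) by piecewise-linear interpolation
    xs = [i / (n - 1) for i in range(n)]
    ys = [f(x) for x in xs]
    def emulator(x):
        j = min(int(x * (n - 1)), n - 2)
        t = x * (n - 1) - j
        return ys[j] * (1 - t) + ys[j + 1] * t
    return emulator

em = build_emulator(expensive_model)
random.seed(3)
max_err = max(abs(em(x) - expensive_model(x))
              for x in (random.random() for _ in range(1000)))
# the cheap surrogate can now stand in for the model inside a
# variance-based sensitivity analysis loop
```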
SILAC for global phosphoproteomic analysis.
Pimienta, Genaro; Chaerkady, Raghothama; Pandey, Akhilesh
2009-01-01
Establishing the phosphorylation pattern of proteins in a comprehensive fashion is an important goal of a majority of cell signaling projects. Phosphoproteomic strategies should be designed in such a manner as to identify sites of phosphorylation as well as to provide quantitative information about the extent of phosphorylation at the sites. In this chapter, we describe an experimental strategy that outlines such an approach using stable isotope labeling with amino acids in cell culture (SILAC) coupled to LC-MS/MS. We highlight the importance of quantitative strategies in signal transduction as a platform for a systematic and global elucidation of biological processes.
Sensitivity analysis and application in exploration geophysics
NASA Astrophysics Data System (ADS)
Tang, R.
2013-12-01
In exploration geophysics, the usual way of dealing with geophysical data is to build an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that is unavoidably contaminated by various noises and is sampled at a limited number of observation sites. Furthermore, owing to the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data, which makes interpretation difficult. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also named the Jacobian matrix or the sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by applying the perturbation approach and reciprocity theory. By calculating the sensitivity matrix we obtain a visualized sensitivity plot, so that the less-resolved parts of the solution are flagged and excluded from interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to find. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of different survey sensitivity with respect to the
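In practice the sensitivity (Jacobian) matrix can be approximated by perturbing each model parameter and re-running the forward model, as described above. A sketch (the toy forward operator is an assumption; real MT responses come from an electromagnetic solver):

```python
def jacobian(forward, m, h=1e-6):
    # J[i][j] = d(data_i)/d(model_j), by forward perturbation of each
    # model parameter at the cost of one extra forward run per column
    d0 = forward(m)
    cols = []
    for j in range(len(m)):
        mp = list(m)
        mp[j] += h
        dj = forward(mp)
        cols.append([(a - b) / h for a, b in zip(dj, d0)])
    return [list(row) for row in zip(*cols)]  # data-by-model ordering

# toy forward operator (an assumption): two "observations", two parameters
forward = lambda m: [m[0] ** 2 + m[1], 3.0 * m[0]]
J = jacobian(forward, [1.0, 2.0])
# analytic Jacobian is [[2, 1], [3, 0]]; columns of near-zero entries
# flag poorly resolved parameters
```

Plotting the column norms of J over the model grid yields exactly the kind of visualized sensitivity map the abstract describes.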
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and dynamic sensitivities with moderate accuracy, even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Sensitive chiral analysis by capillary electrophoresis.
García-Ruiz, Carmen; Marina, María Luisa
2006-01-01
In this review, an updated view of the different strategies used up to now to enhance the sensitivity of detection in chiral analysis by CE will be provided to the readers. With this aim, it will include a brief description of the fundamentals and most of the recent applications performed in sensitive chiral analysis by CE using offline and online sample treatment techniques (SPE, liquid-liquid extraction, microdialysis, etc.), on-column preconcentration techniques based on electrophoretic principles (ITP, stacking, and sweeping), and alternative detection systems (spectroscopic, spectrometric, and electrochemical) to the widely used UV-Vis absorption detection.
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
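Comparing parameter sensitivity rankings across techniques, as this paper does, is commonly summarized with Spearman's rank correlation. A sketch (the two sets of sensitivity scores are made-up illustrations):

```python
def ranks(values):
    # rank positions 0..n-1 (assumes no ties, as is typical for SA scores)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(a, b):
    # Spearman rank correlation via the classical sum-of-d^2 formula
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n * n - 1))

# sensitivity scores for five parameters under two hypothetical methods
src = [0.90, 0.50, 0.30, 0.10, 0.05]    # e.g. standardized regression coeffs
prcc = [0.85, 0.55, 0.25, 0.12, 0.02]   # e.g. partial rank correlations
rho = spearman(src, prcc)
# identical orderings give rho = 1.0 even though the scores differ
```

A high rank correlation between two methods suggests they would lead an analyst to prioritize the same parameters, even when their numeric sensitivity measures disagree.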
NASA Astrophysics Data System (ADS)
Bernstein, Diana N.; Neelin, J. David
2016-06-01
A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive "dangerous ranges." The low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.
Sensitivity analysis for interactions under unmeasured confounding.
Vanderweele, Tyler J; Mukherjee, Bhramar; Chen, Jinbo
2012-09-28
We develop a sensitivity analysis technique to assess the sensitivity of interaction analyses to unmeasured confounding. We give bias formulas for sensitivity analysis for interaction under unmeasured confounding on both additive and multiplicative scales. We provide simplified formulas in the case in which either one of the two factors does not interact with the unmeasured confounder in its effects on the outcome. An interesting consequence of the results is that if the two exposures of interest are independent (e.g., gene-environment independence), even under unmeasured confounding, if the estimate of the interaction is nonzero, then either there is a true interaction between the two factors or there is an interaction between one of the factors and the unmeasured confounder; an interaction must be present in either scenario. We apply the results to two examples drawn from the literature.
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
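Exact static condensation of interior unknowns, the first step described above, eliminates an interior degree of freedom via a Schur complement: Kc = Kbb − Kbi Kii⁻¹ Kib. A toy sketch for a three-spring chain (the system and load are assumptions; the paper's formulation is for boundary element models):

```python
def condense_interior(K, f, i):
    # static condensation of interior DOF i from K u = f:
    # Kc = Kbb - Kbi * Kib / Kii,  fc = fb - Kbi * fi / Kii
    b = [j for j in range(len(K)) if j != i]
    Kc = [[K[r][c] - K[r][i] * K[i][c] / K[i][i] for c in b] for r in b]
    fc = [f[r] - K[r][i] * f[i] / K[i][i] for r in b]
    return Kc, fc

# three-spring chain, unit stiffnesses, tip load (an assumed toy model)
K = [[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 1.0]]
f = [0.0, 0.0, 1.0]
Kc, fc = condense_interior(K, f, 1)  # condense the middle DOF

# solve the condensed 2x2 system exactly (Cramer's rule)
det = Kc[0][0] * Kc[1][1] - Kc[0][1] * Kc[1][0]
u0 = (fc[0] * Kc[1][1] - Kc[0][1] * fc[1]) / det
u2 = (Kc[0][0] * fc[1] - fc[0] * Kc[1][0]) / det
# boundary displacements match the full 3x3 solve: u0 = 1, u2 = 3
```

Because the condensation is exact, the reduced model reproduces the boundary response of the full model, which is what makes iterating on only the changing part of the design legitimate.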
Pediatric Pain, Predictive Inference, and Sensitivity Analysis.
ERIC Educational Resources Information Center
Weiss, Robert
1994-01-01
Coping style and the effects of a counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…
A pathway analysis of global aerosol processes
NASA Astrophysics Data System (ADS)
Schutgens, N. A. J.; Stier, P.
2014-06-01
We present a detailed budget of the changes in atmospheric aerosol mass and numbers due to various processes: emission, nucleation, coagulation, H2SO4 condensation and in-cloud production, ageing and deposition. The budget is created from monthly-averaged tracer tendencies calculated by the global aerosol model ECHAM5.5-HAM2 and allows us to investigate process contributions at various length- and time-scales. As a result, we show in unprecedented detail what processes drive the evolution of aerosol. In particular, we show that the processes that affect aerosol masses are quite different from those affecting aerosol numbers. Condensation of H2SO4 gas onto pre-existing particles is an important process, dominating the growth of small particles in the nucleation mode to the Aitken mode and the ageing of hydrophobic matter. Together with in-cloud production of H2SO4, it significantly contributes to (and often dominates) the mass burden (and hence composition) of the hydrophilic Aitken and accumulation mode particles. Particle growth itself is the leading source of number densities in the hydrophilic Aitken and accumulation modes, with their hydrophobic counterparts contributing (even locally) relatively little. As expected, the coarse mode is dominated by primary emissions and mostly decoupled from the smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that, relative to other processes, it is a rather unimportant contributor to composition changes of aerosol. The analysis is extended with sensitivity studies where the impact of a lower model resolution or pre-industrial emissions is shown to be small. We discuss the use of the current budget for model simplification, prioritisation of model improvements, identification of potential structural model errors and model evaluation against observations.
A pathway analysis of global aerosol processes
NASA Astrophysics Data System (ADS)
Schutgens, N. A. J.; Stier, P.
2014-11-01
We present a detailed budget of the changes in atmospheric aerosol mass and numbers due to various processes: emission (including instant condensation of soluble biogenic emissions), nucleation, coagulation, H2SO4 condensation and in-cloud production, aging and deposition. The budget is created from monthly averaged tracer tendencies calculated by the global aerosol model ECHAM5.5-HAM2 and allows us to investigate process contributions at various length-scales and timescales. As a result, we show in unprecedented detail what processes drive the evolution of aerosol. In particular, we show that the processes that affect aerosol masses are quite different from those that affect aerosol numbers. Condensation of H2SO4 gas onto pre-existing particles is an important process, dominating the growth of small particles in the nucleation mode to the Aitken mode and the aging of hydrophobic matter. Together with in-cloud production of H2SO4, it significantly contributes to (and often dominates) the mass burden (and hence composition) of the hydrophilic Aitken and accumulation mode particles. Particle growth itself is the leading source of number densities in the hydrophilic Aitken and accumulation modes, with their hydrophobic counterparts contributing (even locally) relatively little. As expected, the coarse mode is dominated by primary emissions and mostly decoupled from the smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that, relative to other processes, it is a rather unimportant contributor to composition changes of aerosol. The analysis is extended with sensitivity studies where the impact of a lower model resolution or pre-industrial emissions is shown to be small. We discuss the use of the current budget for model simplification, prioritization of model improvements, identification of potential structural model errors and model evaluation against observations.
Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2012-01-01
Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…
Multi-Scale Distributed Sensitivity Analysis of Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2015-12-01
Amidst nature's great variability and complexity, the Soil Moisture Active Passive (SMAP) mission aims to provide high-resolution soil moisture products for earth science applications. One of the biggest challenges still faced by the remote sensing community is the uncertainty, heterogeneity and scaling exhibited by soil, land cover, topography, precipitation, etc. At each spatial scale, there are different levels of uncertainty and heterogeneity. Also, each land surface variable derived from the various satellite missions comes with its own error margins. As such, soil moisture retrieval accuracy is affected as radiative model sensitivity changes with space, time, and scale. In this paper, we explore the distributed sensitivity analysis of a radiative transfer model under different hydro-climates and spatial scales: 1.5 km, 3 km, 9 km and 39 km. This analysis is conducted in three different regions: Iowa, USA (SMEX02); Arizona, USA (SMEX04); and Winnipeg, Canada (SMAPVEX12). Distributed variables such as soil moisture, soil texture, vegetation and temperature are assumed to be uncertain and are conditionally simulated to obtain uncertainty maps, whereas roughness data, which are spatially limited, are assigned a probability distribution. The relative contribution of the uncertain model inputs to the aggregated model output is also studied, using various aggregation techniques. We use global sensitivity analysis (GSA) to conduct this analysis across spatio-temporal scales. Keywords: soil moisture, radiative transfer, remote sensing, sensitivity, SMEX02, SMAPVEX12.
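The variance-based flavor of GSA used above can be sketched with the Saltelli sampling scheme. This is a minimal sketch in which a toy model y = x1 + 2*x2 stands in for the radiative transfer model; its exact first-order Sobol' indices are 0.2 and 0.8:

```python
import numpy as np

# First-order Sobol' indices via the Saltelli estimator:
# S_i = E[f(B) * (f(AB_i) - f(A))] / Var(y),
# where AB_i is matrix A with column i taken from matrix B.
def model(x):
    return x[:, 0] + 2.0 * x[:, 1]   # toy stand-in, exact S = [0.2, 0.8]

rng = np.random.default_rng(42)
n, d = 20000, 2
A = rng.uniform(size=(n, d))          # base sample
B = rng.uniform(size=(n, d))          # independent resample
fA, fB = model(A), model(B)
var_y = np.var(np.concatenate([fA, fB]))

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # swap column i only
    S[i] = np.mean(fB * (model(ABi) - fA)) / var_y

print(S)  # close to [0.2, 0.8]
```

The cost scales as n*(d+2) model runs, which is why sample size and convergence (the subject of the head-matter study) matter in practice.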
Geothermal well cost sensitivity analysis: current status
Carson, C.C.; Lin, Y.T.
1980-01-01
The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent but that in specific wells the advances considered can result in significant cost reductions.
NIR sensitivity analysis with the VANE
NASA Astrophysics Data System (ADS)
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
A climate sensitivity test using a global cloud resolving model under an aqua planet condition
NASA Astrophysics Data System (ADS)
Miura, Hiroaki; Tomita, Hirofumi; Nasuno, Tomoe; Iga, Shin-ichi; Satoh, Masaki; Matsuno, Taroh
2005-10-01
A global Cloud Resolving Model (CRM) is used in a climate sensitivity test for an aqua planet in this first attempt to evaluate climate sensitivity without cumulus parameterizations. Results from a control experiment and an experiment with global sea surface temperature (SST) warmer by 2 K are examined. Notable features in the simulation with warmer SST include a wider region of active convection, a weaker Hadley circulation, mid-tropospheric moistening in the subtropics, and more clouds in the extratropics. Negative feedback from short-wave radiation reduces the climate sensitivity parameter compared to a result in a more conventional model with a cumulus parameterization.
Sensitivity analysis techniques for models of human behavior.
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
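One family of global methods commonly compared in such studies is Elementary Effects (Morris) screening, whose mean and spread of effects separate influential factors from negligible ones and flag interactions. The sketch below uses a simplified radial one-at-a-time design, and the four-factor toy model is an assumption, not the behavioral model from the report:

```python
import numpy as np

# Elementary Effects (Morris-style) screening: a minimal radial-OAT sketch,
# not the full trajectory design. mu* ranks importance; sigma flags
# nonlinearity/interactions.
def model(x):
    return 10.0 * x[0] + 5.0 * x[1] * x[2] + 0.1 * x[3]   # toy stand-in

rng = np.random.default_rng(1)
d, r, delta = 4, 50, 0.25
ee = np.zeros((r, d))
for k in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=d)   # base point
    fx = model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta                           # perturb one factor at a time
        ee[k, i] = (model(xp) - fx) / delta      # elementary effect

mu_star = np.abs(ee).mean(axis=0)   # mean |EE|: screening importance
sigma = ee.std(axis=0)              # spread: interaction/nonlinearity signal

print(mu_star.round(2))   # factor 1 dominates, factor 4 is negligible
```

Here sigma is zero for the purely linear factor but nonzero for the interacting pair, the kind of insight the abstract says traditional methods miss.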
A global DGLAP analysis of nuclear PDFs
NASA Astrophysics Data System (ADS)
Eskola, K. J.; Kolhinen, V. J.; Paukkunen, H.; Salgado, C. A.
2008-05-01
In this talk, we briefly report results from our recent global DGLAP analysis of nuclear parton distributions. This is an extension of our former EKS98 analysis, improved with an automated χ2 minimization procedure and uncertainty estimates. Although our new analysis shows no significant deviation from EKS98, a sign of a significantly stronger gluon shadowing could be seen in the RHIC BRAHMS data.
Wideband sensitivity analysis of plasmonic structures
NASA Astrophysics Data System (ADS)
Ahmed, Osman S.; Bakr, Mohamed H.; Li, Xun; Nomura, Tsuyoshi
2013-03-01
We propose an adjoint variable method (AVM) for efficient wideband sensitivity analysis of dispersive plasmonic structures. Transmission Line Modeling (TLM) is exploited for calculation of the structure sensitivities. The theory is developed for general dispersive materials modeled by the Drude or Lorentz model. Utilizing the dispersive AVM, sensitivities are calculated with respect to all the designable parameters, regardless of their number, using at most one extra simulation. This is significantly more efficient than the regular finite difference approaches, whose computational overhead scales linearly with the number of design parameters. A Z-domain formulation is utilized to allow for the extension of the theory to a general material model. The theory has been successfully applied to a structure with a teeth-shaped plasmonic resonator. The design variables are the shape parameters (widths and thicknesses) of these teeth. The results are compared to the accurate yet expensive finite difference approach, and good agreement is achieved.
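The efficiency claim, sensitivities with respect to all designable parameters from at most one extra simulation, is the general property of adjoint methods. A minimal sketch for a generic linear model J = cᵀx with A(p)x = b follows; the toy matrices are invented for illustration and are not the paper's TLM formulation:

```python
import numpy as np

# Adjoint sensitivity of J = c^T x subject to A(p) x = b:
#   dJ/dp_k = -lambda^T (dA/dp_k) x,  with one adjoint solve A^T lambda = c,
# versus one perturbed forward solve per parameter for finite differences.
rng = np.random.default_rng(7)
n, m = 8, 3                                   # state size, number of parameters
A0 = rng.normal(size=(n, n)) + n * np.eye(n)
Ak = [rng.normal(size=(n, n)) for _ in range(m)]   # dA/dp_k (A is linear in p)
b = rng.normal(size=n)
c = rng.normal(size=n)
p = rng.normal(size=m) * 0.1

def A(p):
    return A0 + sum(pk * Akk for pk, Akk in zip(p, Ak))

x = np.linalg.solve(A(p), b)                  # forward solve
lam = np.linalg.solve(A(p).T, c)              # single adjoint solve
grad_adj = np.array([-lam @ (Ak[k] @ x) for k in range(m)])

# Finite-difference check (m extra solves instead of one):
h = 1e-6
grad_fd = np.empty(m)
for k in range(m):
    pp = p.copy(); pp[k] += h
    grad_fd[k] = (c @ np.linalg.solve(A(pp), b) - c @ x) / h

assert np.allclose(grad_adj, grad_fd, rtol=1e-3, atol=1e-5)
```

The adjoint cost is fixed at one extra solve however many parameters there are, which is the scaling advantage the abstract reports.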
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day and nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcers, falls and falls with injury, nosocomial selective infection, and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
Global thermohaline circulation. Part 2: Sensitivity with interactive atmospheric transports
Wang, X.; Stone, P.H.; Marotzke, J.
1999-01-01
A hybrid coupled ocean-atmosphere model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part 1. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too-low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional additive flux adjustment, but quantitatively more stable.
Global Thermohaline Circulation. Part II: Sensitivity with Interactive Atmospheric Transports.
NASA Astrophysics Data System (ADS)
Wang, Xiaoli; Stone, Peter H.; Marotzke, Jochem
1999-01-01
A hybrid coupled ocean-atmosphere model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part I. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional 'additive' flux adjustment, but quantitatively more stable.
SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS
WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.
2012-01-01
Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
[Sensitivity analysis in health investment projects].
Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C
1994-01-01
This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.
Sensitivity analysis of aeroelastic response of a wing using piecewise pressure representation
NASA Astrophysics Data System (ADS)
Eldred, Lloyd B.; Kapania, Rakesh K.; Barthelemy, Jean-Francois M.
1993-04-01
A sensitivity analysis scheme for the static aeroelastic response of a wing is developed by incorporating a piecewise, panel-based pressure representation into an existing wing aeroelastic model to improve the model's fidelity. The scheme includes the sensitivity of the wing's static aeroelastic response with respect to various shape parameters. The new formulation is quite general and accepts any aerodynamic and structural analysis capability. A program is developed which combines the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives.
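Combining local matrix sensitivities into a global response derivative follows from differentiating the governing system. A hedged 2-DOF sketch, in which the stiffness matrix, load vector, and shape parameter are invented for illustration:

```python
import numpy as np

# Direct differentiation of K(s) u = f(s):
#   K du/ds = df/ds - (dK/ds) u
# chains the local sensitivities dK/ds and df/ds into the global du/ds.
s = 1.3                                                  # a shape parameter
K = lambda s: np.array([[2.0 + s, -1.0], [-1.0, 1.0 + s**2]])
f = lambda s: np.array([1.0, 0.5 * s])
dK = lambda s: np.array([[1.0, 0.0], [0.0, 2.0 * s]])    # local dK/ds
df = lambda s: np.array([0.0, 0.5])                      # local df/ds

u = np.linalg.solve(K(s), f(s))                          # static response
du = np.linalg.solve(K(s), df(s) - dK(s) @ u)            # global derivative

# Finite-difference verification of the assembled global derivative.
h = 1e-7
u_h = np.linalg.solve(K(s + h), f(s + h))
assert np.allclose(du, (u_h - u) / h, atol=1e-5)
```

The same pattern extends to coupled aeroelastic systems, with the aerodynamic kernel matrix contributing additional local terms.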
Global Precipitation Analysis Using Satellite Observations
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric
2002-01-01
Global precipitation analysis covering the last few decades and the impact of the new TRMM (Tropical Rainfall Measuring Mission) observations are reviewed in the context of weather and climate applications. All the data sets discussed are the result of mergers of information from multiple satellites and gauges, where available. The focus of the talk is on TRMM-based 3 hr. analyses that use TRMM to calibrate polar-orbit microwave observations from SSM/I (and other satellites) and geosynchronous IR observations and merge the various calibrated observations into a final, 3 hr. resolution map. This TRMM standard product will be available for the entire TRMM period (January 1998-present) at the end of 2002. A real-time version of this merged product is being produced and is available at 0.25 deg latitude-longitude resolution over the latitude range from 50 deg N-50 deg S. Examples will be shown, including its use in monitoring flood conditions and in relating weather-scale patterns to climate-scale patterns. The 3-hourly analysis is placed in the context of two research products of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP). The first is the 23 year, monthly, globally complete precipitation analysis that is used to explore global and regional variations and trends and is compared to the much shorter TRMM tropical data set. The GPCP data set shows no significant global trend in precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. Regional trends are also analyzed. A trend pattern that is a combination of both El Nino and La Nina precipitation features is evident in the 23-year data set. This pattern is related to an increase with time in the number of combined months of El Nino and La Nina during the 23 year period. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both
Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC)
NASA Astrophysics Data System (ADS)
Rad, Mary L.; Zou, Luyao; Sanders, James L.; Widicus Weaver, Susanna L.
2016-01-01
Context. Broadband receivers that operate at millimeter and submillimeter frequencies necessitate the development of new tools for spectral analysis and interpretation. Simultaneous, global, multimolecule, multicomponent analysis is necessary to accurately determine the physical and chemical conditions from line-rich spectra that arise from sources like hot cores. Aims: We aim to provide a robust and efficient automated analysis program to meet the challenges presented with the large spectral datasets produced by radio telescopes. Methods: We have written a program in the MATLAB numerical computing environment for simultaneous global analysis of broadband line surveys. The Global Optimization and Broadband Analysis Software for Interstellar Chemistry (GOBASIC) program uses the simplifying assumption of local thermodynamic equilibrium (LTE) for spectral analysis to determine molecular column density, temperature, and velocity information. Results: GOBASIC achieves simultaneous, multimolecule, multicomponent fitting for broadband spectra. The number of components that can be analyzed at once is only limited by the available computational resources. Analysis of subsequent sets of molecules or components is performed iteratively while taking the previous fits into account. All features of a given molecule across the entire window are fitted at once, which is preferable to the rotation diagram approach because global analysis is less sensitive to blended features and noise features in the spectra. In addition, the fitting method used in GOBASIC is insensitive to the initial conditions chosen, the fitting is automated, and fitting can be performed in a parallel computing environment. These features make GOBASIC a valuable improvement over previously available LTE analysis methods. A copy of the software is available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/585/A23
Trends in sensitivity analysis practice in the last decade.
Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano
2016-10-15
The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis in Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Even if OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with a preference for regression-based and variance-based techniques. Even after adjusting for the growth of publications in the modelling field alone, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the articles published in chemical modelling, a field historically proficient in the use of SA methods. PMID:26934843
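The shortcoming of OAT that the authors highlight, blindness to interactions, is easy to demonstrate. In this hedged sketch the toy model y = x1*x2 has zero OAT effects around the nominal point, yet its total-order Sobol' indices (Jansen estimator) are both near one:

```python
import numpy as np

# For y = x1*x2 on [-1,1]^2, moving one factor at a time from the nominal
# point (0,0) never changes the output, yet the interaction carries all of
# the output variance.
def model(x):
    return x[:, 0] * x[:, 1]

# OAT "sensitivities" around the center are exactly zero:
grid = np.linspace(-1, 1, 11)
oat_1 = model(np.column_stack([grid, np.zeros(11)]))   # vary x1, hold x2 = 0
oat_2 = model(np.column_stack([np.zeros(11), grid]))   # vary x2, hold x1 = 0
assert np.all(oat_1 == 0) and np.all(oat_2 == 0)

# Global, variance-based view (total-order indices, Jansen estimator):
rng = np.random.default_rng(3)
n = 20000
A = rng.uniform(-1, 1, size=(n, 2))
B = rng.uniform(-1, 1, size=(n, 2))
fA = model(A)
var_y = fA.var()
ST = np.empty(2)
for i in range(2):
    ABi = A.copy(); ABi[:, i] = B[:, i]                # resample factor i only
    ST[i] = np.mean((fA - model(ABi)) ** 2) / (2 * var_y)

print(ST)  # both total-order indices are ~1: all variance is interaction
```

A linear or additive model would show no such gap between the OAT and global views, which is exactly the unjustified assumption the abstract criticizes.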
The Theoretical Foundation of Sensitivity Analysis for GPS
NASA Astrophysics Data System (ADS)
Shikoska, U.; Davchev, D.; Shikoski, J.
2008-10-01
In this paper the equations of sensitivity analysis are derived and their theoretical underpinnings are established. The paper propounds land-vehicle navigation concepts and a definition of sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis to the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
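As a hedged companion to the linear Kalman filter case study (the scalar system below is an illustration, not the paper's GPS model), the sensitivity of the filter's steady-state error covariance to the assumed measurement noise can be probed by finite differences:

```python
# Scalar Kalman filter: iterate the Riccati recursion to steady state and
# ask how the error covariance P responds to the measurement noise R.
def steady_P(a, Q, R, iters=200):
    P = 1.0
    for _ in range(iters):
        P_pred = a * P * a + Q          # time update
        K = P_pred / (P_pred + R)       # Kalman gain
        P = (1.0 - K) * P_pred          # measurement update
    return P

a, Q, R = 0.95, 0.1, 0.5                # toy dynamics and noise levels
P = steady_P(a, Q, R)
h = 1e-6
dP_dR = (steady_P(a, Q, R + h) - P) / h  # sensitivity of accuracy to noise

print(round(P, 4), round(dP_dR, 4))  # larger R degrades accuracy: dP/dR > 0
```

For the multivariate GPS case the same question is answered analytically by the sensitivity equations the paper derives, rather than by perturbation.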
Climate sensitivity of global terrestrial ecosystems' subdaily carbon, water, and energy dynamics.
NASA Astrophysics Data System (ADS)
Yu, R.; Ruddell, B. L.; Childers, D. L.; Kang, M.
2015-12-01
In the context of global climate change, it is important to understand the direction and magnitude of different ecosystems' responses to climate at the global level. In this study, we applied a dynamical process network (DPN) approach, combined with an eco-climate system sensitivity model, to the global FLUXNET eddy covariance measurements (subdaily net ecosystem exchange of CO2, air temperature, and precipitation) to assess eco-climate system sensitivity to climate and biophysical factors at the flux-site level. For the first time, eco-climate system sensitivity was estimated at the global flux sites and extrapolated to all possible land covers by employing an artificial neural network approach and using the MODIS phenology and land cover products, the long-term climate GLDAS-2 product, and the GMTED2010 Global Grid elevation dataset. We produced seasonal eco-climate system DPN maps, which revealed how global carbon dynamics are driven by temperature and precipitation. We also found that the eco-climate system dynamical process structures are more sensitive to temperature, whether directly or indirectly via phenology. Interestingly, if temperature continues rising, the temperature-NEE coupling may increase in tropical rain forest areas while decreasing in tropical desert or savanna areas, which means that rising temperature in the future could lead to more carbon sequestration in tropical forests but less carbon sequestration in tropical drylands. At the same time, phenology showed a positive effect on the temperature-NEE coupling at all pixels, which suggests that increased greenness may increase temperature-driven carbon dynamics and consequently carbon sequestration globally. Precipitation showed a relatively strong influence on the precipitation-NEE coupling, especially indirectly via phenology. This study has the potential to support short-term and long-term forecasting of the eco-climate system.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
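The success-probability sensitivity idea can be sketched by binning each dispersed input and comparing the fraction of passing Monte Carlo runs across bins; a flat profile marks an unimportant input, a large spread flags a driving factor. The inputs, requirement, and threshold below are invented stand-ins, not the actual Orion dispersions:

```python
import numpy as np

# Toy Monte Carlo campaign: one influential dispersion ("mass") drives the
# miss distance, one irrelevant dispersion ("wind") does not.
rng = np.random.default_rng(11)
n = 50000
mass = rng.normal(0.0, 1.0, n)        # dispersed input 1 (influential)
wind = rng.normal(0.0, 1.0, n)        # dispersed input 2 (not influential)
miss = 2.0 * mass + 0.1 * rng.normal(size=n)
success = np.abs(miss) < 2.0          # requirement: touchdown miss < 2.0

def success_profile(x, ok, bins=5):
    """Success probability conditioned on quantile bins of input x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    return np.array([ok[idx == b].mean() for b in range(bins)])

spread = lambda prof: prof.max() - prof.min()
s_mass = spread(success_profile(mass, success))
s_wind = spread(success_profile(wind, success))

print(round(s_mass, 2), round(s_wind, 2))  # mass drives success, wind does not
```

Ranking inputs by this spread reuses the existing Monte Carlo runs, so no extra simulations are needed beyond the campaign already flown.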
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be dealt with using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
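The idea of partitioning the input space before fitting separate surrogates can be sketched as follows; for brevity, simple linear fits stand in for the Gaussian processes, and the step response is a hypothetical example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Bifurcating toy response: a jump at x = 0.5, which a single smooth
# surrogate cannot capture but a partitioned ("treed") one can.
x = rng.uniform(0, 1, 200)
y = np.where(x < 0.5, np.sin(4 * x), 2.0 + np.sin(4 * x))

def fit_linear(x, y):
    """Least-squares straight-line fit; returns a prediction function."""
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda t: coef[0] * t + coef[1]

# Single global surrogate (stand-in for one GP over the whole domain).
g = fit_linear(x, y)
err_global = np.mean((g(x) - y) ** 2)

# Treed surrogate: split the input space, fit each region separately
# (stand-in for fitting a separate GP per leaf of the decision tree).
left, right = x < 0.5, x >= 0.5
fl, fr = fit_linear(x[left], y[left]), fit_linear(x[right], y[right])
pred = np.where(left, fl(x), fr(x))
err_treed = np.mean((pred - y) ** 2)

print(err_treed < err_global)  # partitioning captures the bifurcation
```

The axis-aligned split at x = 0.5 also illustrates the open problem noted above: a boundary not parallel to a coordinate axis could not be expressed this way.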
A global analysis of island pyrogeography
NASA Astrophysics Data System (ADS)
Trauernicht, C.; Murphy, B. P.
2014-12-01
Islands have provided insight into the ecological role of fire worldwide through research on the positive feedbacks between fire and nonnative grasses, particularly in the Hawaiian Islands. However, the global extent and frequency of fire on islands as an ecological disturbance has received little attention, possibly because 'natural fires' on islands are typically limited to infrequent dry lightning strikes and isolated volcanic events. But because most contemporary fires on islands are anthropogenic, islands provide ideal systems with which to understand the linkages between socio-economic development, shifting fire regimes, and ecological change. Here we use the density of satellite-derived (MODIS) active fire detections for the years 2000-2014 and global data sets of vegetation, climate, population density, and road development to examine the drivers of fire activity on islands at the global scale, and compare these results to existing pyrogeographic models derived from continental data sets. We also use the Hawaiian Islands as a case study to understand the extent to which novel fire regimes can pervade island ecosystems. The global analysis indicates that fire is a frequent disturbance across islands worldwide, strongly affected by human activities, indicating people can more readily override climatic drivers than on continental land masses. The extent of fire activity derived from local records in the Hawaiian Islands reveals that our global analysis likely underestimates the prevalence of fire among island systems and that the combined effects of human activity and invasion by nonnative grasses can create conditions for frequent and relatively large-scale fires. Understanding the extent of these novel fire regimes, and mitigating their impacts, is critical to reducing the current and rapid degradation of native island ecosystems worldwide.
A Post-Monte-Carlo Sensitivity Analysis Code
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.
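A minimal sketch of post-Monte-Carlo variance-based ranking in this spirit is shown below; the three-input model and the binned Var(E[y|x])/Var(y) estimator are illustrative assumptions, not SATOOL's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model with three inputs of very different influence.
n = 50000
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n)
x3 = rng.normal(0, 1, n)
y = 3.0 * x1 + 1.0 * x2 + 0.1 * x3

def first_order_index(x, y, bins=20):
    """Crude first-order sensitivity: Var(E[y|x]) / Var(y), estimated by
    binning x, a common post-Monte-Carlo approach."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    weights = np.array([(idx == b).mean() for b in range(bins)])
    return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

indices = [first_order_index(x, y) for x in (x1, x2, x3)]
ranking = np.argsort(indices)[::-1]
print(ranking.tolist())  # x1 dominates, then x2, then x3
```

Redefining the top-ranked variable with lower variance would shrink the output variance the most, which is exactly the use case the abstract describes.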
Long Trajectory for the Development of Sensitivity to Global and Biological Motion
ERIC Educational Resources Information Center
Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.
2011-01-01
We used a staircase procedure to test sensitivity to (1) global motion in random-dot kinematograms moving at 4 degrees and 18 degrees s[superscript -1] and (2) biological motion. Thresholds were defined as (1) the minimum percentage of signal dots (i.e. the maximum percentage of noise dots) necessary for accurate discrimination of upward versus…
Toward a Globally Sensitive Definition of Inclusive Education Based in Social Justice
ERIC Educational Resources Information Center
Shyman, Eric
2015-01-01
While many policies, pieces of legislation and educational discourse focus on the concept of inclusion, or inclusive education, the field of education as a whole lacks a clear, precise and comprehensive definition that is both globally sensitive and based in social justice. Even international efforts including the UN Convention on the Rights of…
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
Stormwater quality models: performance and sensitivity analysis.
Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W
2010-01-01
The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
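A minimal sketch of this kind of MCMC calibration, with a hypothetical one-parameter washoff model (load = k × rainfall) and a basic Metropolis sampler standing in for the simplified MCMC method of the study:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic "observed" washoff data from a hypothetical one-parameter
# pollutant-generation model: load = k * rainfall, with noisy observations.
rain = rng.uniform(1, 10, 30)
k_true = 0.8
obs = k_true * rain + rng.normal(0, 0.5, 30)

def log_post(k):
    """Log posterior: flat prior on (0, 10], Gaussian likelihood (sigma=0.5)."""
    if k <= 0 or k > 10:
        return -np.inf
    resid = obs - k * rain
    return -0.5 * np.sum(resid ** 2) / 0.25

# Minimal Metropolis sampler: propose, accept with the usual ratio.
k, samples = 1.0, []
lp = log_post(k)
for _ in range(20000):
    cand = k + rng.normal(0, 0.1)
    lp_cand = log_post(cand)
    if np.log(rng.uniform()) < lp_cand - lp:
        k, lp = cand, lp_cand
    samples.append(k)

post = np.array(samples[5000:])       # discard burn-in
print(abs(post.mean() - k_true) < 0.1)  # posterior centred near k_true
```

Beyond the posterior mean, the spread of `post` quantifies parameter uncertainty, and scatter plots of jointly sampled parameters reveal the correlations the abstract refers to.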
Network Analysis of Global Influenza Spread
Chan, Joseph; Holmes, Antony; Rabadan, Raul
2010-01-01
Although vaccines pose the best means of preventing influenza infection, strain selection and optimal implementation remain difficult due to antigenic drift and a lack of understanding of global spread. Detecting viral movement by sequence analysis is complicated by skewed geographic and seasonal distributions in viral isolates. We propose a probabilistic method that accounts for sampling bias through spatiotemporal clustering and by modeling regional and seasonal transmission as a binomial process. Analysis of H3N2 not only confirmed East-Southeast Asia as a source of new seasonal variants, but also increased the resolution of observed transmission to the country level. H1N1 data revealed similar viral spread from the tropics. Network analysis suggested China and Hong Kong as the origins of new seasonal H3N2 strains and the United States as a region where increased vaccination would maximally disrupt global spread of the virus. These techniques provide a promising methodology for the analysis of any seasonal virus, as well as for the continued surveillance of influenza. PMID:21124942
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
Phase sensitivity analysis of circadian rhythm entrainment.
Gunawan, Rudiyanto; Doyle, Francis J
2007-04-01
As a biological clock, circadian rhythms evolve to accomplish a stable (robust) entrainment to environmental cycles, of which light is the most obvious. The mechanism of photic entrainment is not known, but two models of entrainment have been proposed based on whether light has a continuous (parametric) or discrete (nonparametric) effect on the circadian pacemaker. A novel sensitivity analysis is developed to study the circadian entrainment in silico based on a limit cycle approach and applied to a model of Drosophila circadian rhythm. The comparative analyses of complete and skeleton photoperiods suggest a trade-off between the contribution of period modulation (parametric effect) and phase shift (nonparametric effect) in Drosophila circadian entrainment. The results also give suggestions for an experimental study to (in)validate the two models of entrainment.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficients, Sobol's method, and the weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
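Of the global methods listed, Sobol's method is the most widely known; a minimal sketch of first-order Sobol indices via a pick-freeze estimator is shown below (the three-parameter test function is a hypothetical stand-in for an SBML model, not part of SBML-SAT):

```python
import numpy as np

rng = np.random.default_rng(3)

def model(X):
    # Hypothetical three-parameter test function (not an SBML model):
    # parameter 1 dominates the output, parameter 2 is nearly inert.
    return np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * X[:, 2]

# First-order Sobol indices via the classic pick-freeze estimator:
# S_i = E[y_B * (y_ABi - y_A)] / Var(y), where AB_i takes column i from B.
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = model(A), model(B)
var_y = yA.var()

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # resample only parameter i
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print([round(s, 2) for s in S])
```

Each index estimates the fraction of output variance attributable to one parameter alone; parameters with near-zero indices are candidates for fixing at nominal values.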
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and the noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J., & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970), doi:10.1038/srep
Global climate sensitivity derived from ~784,000 years of SST data
NASA Astrophysics Data System (ADS)
Friedrich, T.; Timmermann, A.; Tigchelaar, M.; Elison Timm, O.; Ganopolski, A.
2015-12-01
Global mean temperatures will increase in response to future increases in greenhouse gas concentrations. The magnitude of this warming for a given radiative forcing is still a subject of debate. Here we provide estimates of the equilibrium climate sensitivity using paleo-proxy and modeling data from the last eight glacial cycles (~784,000 years). First, two reconstructions of globally averaged surface air temperature (SAT) for the last eight glacial cycles are obtained from two independent sources: one mainly based on a transient model simulation, the other derived from paleo-SST records and SST network/global SAT scaling factors. Both reconstructions exhibit very good agreement in both the amplitude and timing of past SAT variations. In the second step, we calculate the radiative forcings associated with greenhouse gas concentrations, dust concentrations, and surface albedo changes for the last 784,000 years. The equilibrium climate sensitivity is then derived from the ratio of the SAT anomalies and the radiative forcing changes. Our results reveal that this estimate of the Charney climate sensitivity is a function of the background climate, with substantially higher values for warmer climates. Warm phases exhibit an equilibrium climate sensitivity of ~3.70 K per CO2 doubling, more than twice the value derived for cold phases (~1.40 K per 2xCO2). We will show that the current CMIP5 ensemble-mean projection of global warming during the 21st century is supported by our estimate of climate sensitivity derived from paleoclimate data of the past 784,000 years.
Global meta-analysis of transcriptomics studies.
Caldas, José; Vinga, Susana
2014-01-01
Transcriptomics meta-analysis aims at re-using existing data to derive novel biological hypotheses, and is motivated by the public availability of a large number of independent studies. Current methods are based on breaking down studies into multiple comparisons between phenotypes (e.g. disease vs. healthy), based on the studies' experimental designs, followed by computing the overlap between the resulting differential expression signatures. While useful, in this methodology each study yields multiple independent phenotype comparisons, and connections are established not between studies, but rather between subsets of the studies corresponding to phenotype comparisons. We propose a rank-based statistical meta-analysis framework that establishes global connections between transcriptomics studies without breaking down studies into sets of phenotype comparisons. By using a rank product method, our framework extracts global features from each study, corresponding to genes that are consistently among the most expressed or differentially expressed genes in that study. Those features are then statistically modelled via a term-frequency inverse-document frequency (TF-IDF) model, which is then used for connecting studies. Our framework is fast and parameter-free; when applied to large collections of Homo sapiens and Streptococcus pneumoniae transcriptomics studies, it performs better than similarity-based approaches in retrieving related studies, using a Medical Subject Headings gold standard. Finally, we highlight via case studies how the framework can be used to derive novel biological hypotheses regarding related studies and the genes that drive those connections. Our proposed statistical framework shows that it is possible to perform a meta-analysis of transcriptomics studies with arbitrary experimental designs by deriving global expression features rather than decomposing studies into multiple phenotype comparisons. PMID:24586684
Global QCD Analysis of Polarized Parton Densities
Stratmann, Marco
2009-08-04
We focus on some highlights of a recent, first global Quantum Chromodynamics (QCD) analysis of the helicity parton distributions of the nucleon, mainly the evidence for a rather small gluon polarization over a limited region of momentum fraction and for interesting flavor patterns in the polarized sea. It is examined how the various sets of data obtained in inclusive and semi-inclusive deep inelastic scattering and polarized proton-proton collisions help to constrain different aspects of the quark, antiquark, and gluon helicity distributions. Uncertainty estimates are performed using both the robust Lagrange multiplier technique and the standard Hessian approach.
Longitudinal Genetic Analysis of Anxiety Sensitivity
ERIC Educational Resources Information Center
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
A global analysis of soil acidification caused by nitrogen addition
NASA Astrophysics Data System (ADS)
Tian, Dashuan; Niu, Shuli
2015-02-01
Nitrogen (N) deposition-induced soil acidification has become a global problem. However, the response patterns of soil acidification to N addition and the underlying mechanisms remain far from clear. Here, we conducted a meta-analysis of 106 studies to reveal global patterns of soil acidification in response to N addition. We found that N addition significantly reduced soil pH by 0.26 on average globally. However, the responses of soil pH varied with ecosystem type, N addition rate, N fertilization form, and experimental duration. Soil pH decreased most in grassland, whereas no significant acidification under N addition was observed in boreal forest. Soil pH decreased linearly with N addition rates. Addition of urea and NH4NO3 contributed more to soil acidification than NH4-form fertilizer. When experimental duration was longer than 20 years, the effects of N addition on soil acidification diminished. Environmental factors such as initial soil pH, soil carbon and nitrogen content, precipitation, and temperature all influenced the responses of soil pH. Base cations (Ca2+, Mg2+ and K+) were critically important in buffering against N-induced soil acidification at the early stage. However, N addition has shifted global soils into the Al3+ buffering phase. Overall, this study indicates that acidification in global soils is very sensitive to N deposition, which is greatly modified by biotic and abiotic factors. Global soils are now at a buffering transition from base cations (Ca2+, Mg2+ and K+) to non-base cations (Mn2+ and Al3+). This calls attention to base cation depletion and the toxic impact of non-base cations in terrestrial ecosystems under N deposition.
Sensitivity Analysis of Wing Aeroelastic Responses
NASA Technical Reports Server (NTRS)
Issac, Jason Cherian
1995-01-01
Design for the prevention of aeroelastic instability (that is, ensuring that the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing can be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip-theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight
On computational schemes for global-local stress analysis
NASA Technical Reports Server (NTRS)
Reddy, J. N.
1989-01-01
An overview is given of global-local stress analysis methods and associated difficulties, with recommendations for future research. The phrase global-local analysis is understood to mean an analysis in which some parts of the domain or structure are singled out, either for accurate determination of stresses and displacements or for more refined analysis than in the remaining parts. The parts receiving refined analysis are termed local and the remaining parts are called global. Typically, local regions are small in size compared to global regions, while the computational effort can be larger in local regions than in global regions.
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
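The residual-sampling approach to propagating uncertainty through a model chain can be sketched as follows (the two-step chain, coefficients, and residual spreads are hypothetical stand-ins for the actual PV model sequence):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical two-step model chain (stand-ins, not the actual PV models):
# global horizontal irradiance -> plane-of-array irradiance -> DC power.
def poa_model(ghi):
    return 1.1 * ghi          # simplified transposition step

def power_model(poa):
    return 0.18 * poa         # simplified module efficiency step

# Empirical residuals for each step (synthetic here; in the study these
# come from comparing each model against measurements).
poa_resid = rng.normal(0, 20, 500)     # W/m^2
pwr_resid = rng.normal(0, 2, 500)      # W

# Propagate uncertainty: draw a residual for each model in the chain.
n = 20000
ghi = 800.0                            # a fixed input condition, W/m^2
poa = poa_model(ghi) + rng.choice(poa_resid, n)
power = power_model(poa) + rng.choice(pwr_resid, n)

rel_spread = power.std() / power.mean()
print(rel_spread < 0.05)   # output uncertainty stays small, a few percent
```

Repeating the propagation with one step's residuals zeroed out apportions the output uncertainty among the models, which is the sensitivity question the study addresses.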
Tsunamis: Global Exposure and Local Risk Analysis
NASA Astrophysics Data System (ADS)
Harbitz, C. B.; Løvholt, F.; Glimsdal, S.; Horspool, N.; Griffin, J.; Davies, G.; Frauenfelder, R.
2014-12-01
The 2004 Indian Ocean tsunami led to a better understanding of the likelihood of tsunami occurrence and potential tsunami inundation, and the Hyogo Framework for Action (HFA) was one direct result of this event. The United Nations International Strategy for Disaster Risk Reduction (UN-ISDR) adopted the HFA in January 2005 in order to reduce disaster risk. As an instrument to compare the risk due to different natural hazards, an integrated worldwide study was implemented and published in several Global Assessment Reports (GAR) by UN-ISDR. The results of the global earthquake-induced tsunami hazard and exposure analysis for a return period of 500 years are presented. Both deterministic and probabilistic (PTHA) methods are used. The resulting hazard levels for both methods are compared quantitatively for selected areas. The comparison demonstrates that the analysis is rather rough, which is expected for a study aiming at average trends on a country level across the globe. It is shown that populous Asian countries account for the largest absolute number of people living in tsunami-prone areas; more than 50% of the total exposed population lives in Japan. Smaller nations like Macao and the Maldives are among the most exposed by population count. Exposed nuclear power plants are limited to Japan, China, India, Taiwan, and the USA. In contrast, a local tsunami vulnerability and risk analysis applies information on population, building types, infrastructure, inundation, and flow depth for a certain tsunami scenario with a corresponding return period, combined with empirical data on tsunami damages and mortality. Results and validation of a GIS tsunami vulnerability and risk assessment model are presented. The GIS model is adapted for optimal use of the data available for each study. Finally, the importance of including landslide sources in the tsunami analysis is also discussed.
Adjoint sensitivity structures of typhoon DIANMU (2010) based on a global model
NASA Astrophysics Data System (ADS)
Kim, S.; Kim, H.; Joo, S.; Shin, H.; Won, D.
2010-12-01
Sung-Min Kim (1), Hyun Mee Kim (1), Sang-Won Joo (2), Hyun-Cheol Shin (2), DukJin Won (2); (1) Department of Atmospheric Sciences, Yonsei University, Seoul, Korea; (2) Korea Meteorological Administration. Submitted to the AGU 2010 Fall Meeting, 13-17 December 2010, San Francisco, CA. The path and intensity forecasts of typhoons (TYs) depend on the initial condition of the TY itself and the surrounding background fields. Because TYs evolve over the ocean, few observational data are available. Thus, additional observations over the western North Pacific are necessary to obtain a proper initial condition for TYs. Given the limited resources of observing facilities, identifying the sensitive regions for a specific forecast aspect in the forecast region of interest is very beneficial for deciding where to deploy additional observations. The additional observations deployed in those sensitive regions are called adaptive observations, and the strategies used to decide the sensitive regions are called adaptive observation strategies. Among the adaptive observation strategies, the adjoint sensitivity represents the gradient of some forecast aspect with respect to the control variables of the model (i.e., initial conditions, boundary conditions, and parameters) (Errico 1997). According to a recent study of the adjoint sensitivity of a TY based on a regional model, the sensitive regions are located horizontally in the right half circle of the TY, and vertically in the lower and upper troposphere near the TY (Kim and Jung 2006). Because the adjoint sensitivity based on a regional model is calculated in a relatively small domain, the adjoint sensitivity structures may be affected by the size and location of the domain. In this study, the adjoint sensitivity distributions for TY DIANMU (2010) based on a global model are investigated. The adjoint sensitivity based on a global model is calculated by using the perturbation forecast (PF) and adjoint PF model of the Unified Model at
Tilt-Sensitivity Analysis for Space Telescopes
NASA Technical Reports Server (NTRS)
Papalexandris, Miltiadis; Waluschka, Eugene
2003-01-01
A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
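A minimal sketch of the workflow this abstract describes: draw Weibull lifetimes for a population of units and estimate by Monte Carlo the probability that a given spares count covers all failures over a horizon. All numbers here (fleet size, spares, horizon, Weibull parameters) are hypothetical, not ISS ORU data.

```python
import math
import random

def sample_weibull(shape, scale, rng):
    # Inverse-CDF sampling: T = scale * (-ln U)^(1/shape).
    return scale * (-math.log(rng.random())) ** (1.0 / shape)

def prob_of_sufficiency(n_units, n_spares, horizon, shape, scale,
                        n_trials=20000, seed=1):
    """Monte Carlo estimate of the probability that `n_spares` covers all
    failures among `n_units` over `horizon` hours, with lifetimes drawn
    from a Weibull(shape, scale); shape > 1 models wear-out."""
    rng = random.Random(seed)
    sufficient = 0
    for _ in range(n_trials):
        failures = sum(1 for _ in range(n_units)
                       if sample_weibull(shape, scale, rng) < horizon)
        if failures <= n_spares:
            sufficient += 1
    return sufficient / n_trials

# A stronger wear-out shape concentrates failures near the characteristic
# life, so for a horizon beyond it more units fail together and the
# probability of sufficiency drops.
p_random  = prob_of_sufficiency(10, 6, horizon=12000, shape=1.0, scale=10000)
p_wearout = prob_of_sufficiency(10, 6, horizon=12000, shape=4.0, scale=10000)
```

Sweeping `shape` from the fitted (intrinsic) value upward reproduces the sensitivity sweep described in the project.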
Sensitivity analysis of retrovirus HTLV-1 transactivation.
Corradin, Alberto; Di Camillo, Barbara; Ciminale, Vincenzo; Toffolo, Gianna; Cobelli, Claudio
2011-02-01
Human T-cell leukemia virus type 1 is a human retrovirus endemic in many areas of the world. Although many studies indicated a key role of the viral protein Tax in the control of viral transcription, the mechanisms controlling HTLV-1 expression and its persistence in vivo are still poorly understood. To assess Tax effects on viral kinetics, we developed a HTLV-1 model. Two parameters that capture both its deterministic and stochastic behavior were quantified: Tax signal-to-noise ratio (SNR), which measures the effect of stochastic phenomena on Tax expression as the ratio between the protein steady-state level and the variance of the noise causing fluctuations around this value; t(1/2), a parameter representative of the duration of Tax transient expression pulses, that is, of Tax bursts due to stochastic phenomena. Sensitivity analysis indicates that the major determinant of Tax SNR is the transactivation constant, the system parameter weighting the enhancement of retrovirus transcription due to transactivation. In contrast, t(1/2) is strongly influenced by the degradation rate of the mRNA. In addition to shedding light into the mechanism of Tax transactivation, the obtained results are of potential interest for novel drug development strategies since the two parameters most affecting Tax transactivation can be experimentally tuned, e.g. by perturbing protein phosphorylation and by RNA interference.
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
Sensitivity Studies for Space-Based Global Measurements of Atmospheric Carbon Dioxide
NASA Technical Reports Server (NTRS)
Mao, Jian-Ping; Kawa, S. Randolph; Bhartia, P. K. (Technical Monitor)
2001-01-01
Carbon dioxide (CO2) is well known as the primary forcing agent of global warming. Although the climate forcing due to CO2 is well known, the sources and sinks of CO2 are not well understood. Currently the lack of global atmospheric CO2 observations limits our ability to diagnose the global carbon budget (e.g., finding the so-called "missing sink") and thus limits our ability to understand past climate change and predict future climate response. Space-based techniques are being developed to make high-resolution and high-precision global column CO2 measurements. One of the proposed techniques utilizes the passive remote sensing of Earth's reflected solar radiation at the weaker vibration-rotation band of CO2 in the near infrared (approx. 1.57 micron). We use a line-by-line radiative transfer model to explore the potential of this method. Results of sensitivity studies for CO2 concentration variation and geophysical conditions (i.e., atmospheric temperature, surface reflectivity, solar zenith angle, aerosol, and cirrus cloud) will be presented. We will also present sensitivity results for an O2 A-band (approx. 0.76 micron) sensor that will be needed along with CO2 to make surface pressure and cloud height measurements.
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
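The contrast this abstract draws between simple local derivatives and variance-based (Sobol-type) measures can be seen on a toy model. The sketch below is illustrative only: it uses a brute-force double-loop Monte Carlo estimator of the first-order Sobol index, and the model and sample sizes are invented for the example.

```python
import random
import statistics as stats

def model(x1, x2):
    # Toy response: x2 acts only through a nonlinear term, so a local
    # derivative taken at the origin misses its influence entirely.
    return x1 + 5.0 * x2 ** 2

def local_derivative(f, x, i, h=1e-6):
    # Local sensitivity: central finite difference at a nominal point.
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(*xp) - f(*xm)) / (2 * h)

def sobol_first_order(f, i, n_outer=200, n_inner=200, seed=0):
    """Brute-force first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y),
    with both inputs sampled uniformly on [-1, 1]."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.uniform(-1, 1)
        ys = []
        for _ in range(n_inner):
            x = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
            x[i] = xi          # freeze factor i, vary the rest
            ys.append(f(*x))
        all_y.extend(ys)
        cond_means.append(stats.fmean(ys))
    return stats.pvariance(cond_means) / stats.pvariance(all_y)
```

At the origin the local derivative with respect to x2 is zero, yet the global variance-based index attributes most of the output variance to x2 — the kind of conflicting conclusion the presentation discusses.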
Global Proteome Analysis of Leptospira interrogans
2009-01-01
Comparative global proteome analyses were performed on Leptospira interrogans serovar Copenhageni grown under conventional in vitro conditions and those mimicking in vivo conditions (iron limitation and serum presence). Proteomic analyses were conducted using iTRAQ and LC-ESI-tandem mass spectrometry complemented with two-dimensional gel electrophoresis and MALDI-TOF mass spectrometry. A total of 563 proteins were identified in this study. Altered expression of 65 proteins, including upregulation of the L. interrogans virulence factor Loa22 and 5 novel proteins with homology to virulence factors found in other pathogens, was observed between the comparative conditions. Immunoblot analyses confirmed upregulation of 5 of the known or putative virulence factors in L. interrogans exposed to the in vivo-like environmental conditions. Further, ELISA analyses using serum from patients with leptospirosis and immunofluorescence studies performed on liver sections derived from L. interrogans-infected hamsters verified expression of all but one of the identified proteins during infection. These studies, which represent the first documented comparative global proteome analysis of Leptospira, demonstrated proteome alterations under conditions that mimic in vivo infection and allowed for the identification of novel putative L. interrogans virulence factors. PMID:19663501
Sensitivity of Water Scarcity Events to ENSO-Driven Climate Variability at the Global Scale
NASA Technical Reports Server (NTRS)
Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.
2015-01-01
Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño-Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (>28.1%); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (CTA: consumption-to-availability ratio) and 41.1% (WCI: water crowding index) of the global population, whilst only 11.4% (CTA) and 15.9% (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and scarcity
Globally convergent autocalibration using interval analysis.
Fusiello, Andrea; Benedetti, Arrigo; Farenzena, Michela; Busti, Alessandro
2004-12-01
We address the problem of autocalibration of a moving camera with unknown constant intrinsic parameters. Existing autocalibration techniques use numerical optimization algorithms whose convergence to the correct result cannot be guaranteed, in general. To address this problem, we have developed a method in which an interval branch-and-bound method is employed for numerical minimization. Thanks to the properties of Interval Analysis, this method converges to the global solution with mathematical certainty and arbitrary accuracy, and the only input it requires from the user is a set of point correspondences and a search interval. The cost function is based on the Huang-Faugeras constraint of the essential matrix. A recently proposed interval extension based on Bernstein polynomial forms has been investigated to speed up the search for the solution. Finally, experimental results are presented. PMID:15573823
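A minimal illustration of the interval branch-and-bound idea, on a 1-D quadratic rather than the paper's Huang-Faugeras cost: interval bounds on the objective let provably suboptimal boxes be discarded, so the search closes in on the global minimiser. The objective and tolerances are illustrative; a real implementation would use an interval-arithmetic library with outward rounding.

```python
def f_interval(lo, hi):
    # Natural interval extension of f(x) = (x - 2)^2 + 1: bound (x - 2)^2
    # from the endpoints, with 0 as the lower bound when 2 lies inside.
    a, b = lo - 2.0, hi - 2.0
    sq_hi = max(a * a, b * b)
    sq_lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
    return sq_lo + 1.0, sq_hi + 1.0

def branch_and_bound(lo, hi, tol=1e-6):
    """Return an interval of width <= tol containing the global minimiser
    of the function bounded by f_interval."""
    best_ub = float("inf")
    boxes = [(lo, hi)]
    while True:
        scored = []
        for (a, b) in boxes:
            flo, fhi = f_interval(a, b)
            best_ub = min(best_ub, fhi)     # any box's max is an upper bound
            scored.append((flo, a, b))
        # Discard boxes whose lower bound exceeds the best upper bound.
        boxes = [(a, b) for (flo, a, b) in scored if flo <= best_ub]
        flo, a, b = min(scored)             # most promising box
        if b - a <= tol:
            return a, b
        # Bisect every surviving box and repeat.
        boxes = [half for (a, b) in boxes
                 for half in ((a, (a + b) / 2), ((a + b) / 2, b))]

x_lo, x_hi = branch_and_bound(-10.0, 10.0)
# The returned enclosure brackets the true minimiser x = 2.
```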
NASA Astrophysics Data System (ADS)
Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.
2016-07-01
The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcome associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that warming exceeding 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions under most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or until this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk-management approach can complement the focus on the likely range of projections. It can also highlight the scale of the potential reduction in the range of projections, should tight constraints on climate sensitivity be established by future research.
Derivative based sensitivity analysis of gamma index.
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks only at the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously compare poorly with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD', δD") between these two curves were derived and used as the boundary values
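A discrete-point sketch of the gamma computation this work builds on. It omits the interpolation between samples that clinical gamma implementations add, and the profiles and tolerances below are invented for illustration.

```python
import math

def gamma_index_1d(ref, ev, dd_tol=1.0, dta_tol=1.0):
    """1-D gamma analysis. `ref` and `ev` are lists of (position_mm,
    dose_pct) points. For each reference point, gamma is the minimum over
    evaluated points of the combined DD/DTA distance; gamma <= 1 passes."""
    gammas = []
    for (xr, dr) in ref:
        g = min(math.hypot((xe - xr) / dta_tol, (de - dr) / dd_tol)
                for (xe, de) in ev)
        gammas.append(g)
    return gammas

# Hypothetical linear penumbra-like profile; the evaluated curve is the
# reference shifted by 0.5 mm and offset by 0.5% everywhere, so every
# point stays inside the combined 1%/1 mm acceptance ellipse.
ref = [(x * 0.5, 100.0 - x) for x in range(20)]
ev  = [(x + 0.5, d + 0.5) for (x, d) in ref]
gammas = gamma_index_1d(ref, ev)
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

Note that a sawtooth evaluated curve can achieve the same 100% pass rate, which is exactly the blind spot the derivative-based analysis targets.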
Sensitivity of the global water cycle to the water-holding capacity of land
Milly, P.C.D.; Dunne, K.A.
1994-04-01
The sensitivity of the global water cycle to the water-holding capacity of the plant-root zone of continental soils is estimated by simulations using a mathematical model of the general circulation of the atmosphere, with prescribed ocean surface temperatures and prescribed cloud. With an increase of the globally constant storage capacity, evaporation from the continents rises and runoff falls, because a high storage capacity enhances the ability of the soil to store water from periods of excess for later evaporation during periods of shortage. In addition, atmospheric feedbacks associated with higher precipitation and lower potential evaporation drive further changes in evaporation and runoff. Most changes in evaporation and runoff occur in the tropics and the northern middle-latitude rain belts. Global evaporation from land increases by 7 cm for each doubling of storage capacity. Sensitivity is negligible for capacity above 60 cm. In the tropics and in the extratropics, increased continental evaporation is split between increased continental precipitation and decreased convergence of atmospheric water vapor from ocean to land. In the tropics, this partitioning is strongly affected by induced circulation changes, which are themselves forced by changes in latent heating. In the northern middle and high latitudes, the increased continental evaporation moistens the atmosphere. This change in humidity of the atmosphere is greater above the continents than above the oceans, and the resulting reduction in the sea-land humidity gradient causes a decreased onshore transport of water vapor by transient eddies. Results here may have implications for problems in global hydrology and climate dynamics, including effects of water resource development on global precipitation, climatic control of plant rooting characteristics, climatic effects of tropical deforestation, and climate-model errors. 21 refs., 13 figs., 21 tabs.
NASA Technical Reports Server (NTRS)
Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.
2009-01-01
Clouds affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%), for GEOS1-STRAT, GEOS-3 and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of approx.15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitude of 3-D CODs are progressively varied from -100% to 100%, predict only modest
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggests the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
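GRIP 2.0 itself is a dedicated tool; as a generic stand-in for the kind of global screening it automates, the sketch below implements Morris-style elementary-effects analysis on a hypothetical toy extinction-risk function. The parameter roles and coefficients are invented for illustration, not taken from the whitebark pine model.

```python
import random

def morris_mu_star(f, n_params, n_traj=50, delta=0.5, seed=0):
    """Morris screening: mean absolute elementary effect (mu*) for each
    parameter, inputs on [0, 1]. A large mu* flags an influential factor;
    a mu* near zero flags a candidate for factor reduction."""
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_traj):
        # Random base point, then step each factor once in random order.
        x = [rng.uniform(0, 1 - delta) for _ in range(n_params)]
        y0 = f(x)
        order = list(range(n_params))
        rng.shuffle(order)
        for i in order:
            x2 = list(x)
            x2[i] = x[i] + delta
            y1 = f(x2)
            mu_star[i] += abs(y1 - y0) / delta
            x, y0 = x2, y1
    return [m / n_traj for m in mu_star]

# Hypothetical risk model: habitat amount (x[0]) dominates, survival
# (x[1]) matters mostly through its interaction with habitat, and x[2]
# is nearly inert.
def risk(x):
    return 1.0 - 0.6 * x[0] - 0.3 * x[0] * x[1] - 0.01 * x[2]

mu = morris_mu_star(risk, 3)
ranking = sorted(range(3), key=lambda i: -mu[i])  # most influential first
```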
Sensitivity of the global submarine hydrate inventory to scenarios of future climate change
NASA Astrophysics Data System (ADS)
Hunter, S. J.; Goldobin, D. S.; Haywood, A. M.; Ridgwell, A.; Rees, J. G.
2013-04-01
The global submarine inventory of methane hydrate is thought to be considerable. The stability of marine hydrates is sensitive to changes in temperature and pressure and once destabilised, hydrates release methane into sediments and ocean and potentially into the atmosphere, creating a positive feedback with climate change. Here we present results from a multi-model study investigating how the methane hydrate inventory dynamically responds to different scenarios of future climate and sea level change. The results indicate that a warming-induced reduction is dominant even when assuming rather extreme rates of sea level rise (up to 20 mm yr-1) under moderate warming scenarios (RCP 4.5). Over the next century modelled hydrate dissociation is focussed in the top ~100 m of Arctic and Subarctic sediments beneath <500 m water depth. Predicted dissociation rates are particularly sensitive to the modelled vertical hydrate distribution within sediments. Under the worst case business-as-usual scenario (RCP 8.5), upper estimates of resulting global sea-floor methane fluxes could exceed estimates of natural global fluxes by 2100 (>30-50 Tg CH4 yr-1), although subsequent oxidation in the water column could reduce peak atmospheric release rates to 0.75-1.4 Tg CH4 yr-1.
Ma, Hsi-Yen; Xiao, Heng; Mechoso, C. R.; Xue, Yongkang
2013-03-01
This study examines the sensitivity of global tropical climate to land surface processes (LSP) using an atmospheric general circulation model both uncoupled (with prescribed SSTs) and coupled to an oceanic general circulation model. The emphasis is on the interactive soil moisture and vegetation biophysical processes, which have a first-order influence on the surface energy and water budgets. The sensitivity to those processes is represented by the differences between model simulations, in which two land surface schemes are considered: 1) a simple land scheme that specifies surface albedo and soil moisture availability, and 2) the Simplified Simple Biosphere Model (SSiB), which allows for consideration of interactive soil moisture and vegetation biophysical processes. Observational datasets are also employed to assess the reality of model-revealed sensitivity. The mean state sensitivity to different LSP is stronger in the coupled mode, especially in the tropical Pacific. Furthermore, the seasonal cycle of SSTs in the equatorial Pacific, as well as ENSO frequency, amplitude, and locking to the seasonal cycle of SSTs, are significantly modified and more realistic with SSiB. This outstanding sensitivity of the atmosphere-ocean system develops through changes in the intensity of equatorial Pacific trades modified by convection over land. Our results further demonstrate that the direct impact of land-atmosphere interactions on the tropical climate is modified by feedbacks associated with perturbed oceanic conditions ("indirect effect" of LSP). The magnitude of such indirect effect is strong enough to suggest that comprehensive studies on the importance of LSP on the global climate have to be made in a system that allows for atmosphere-ocean interactions.
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
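The first topic in the list, finite difference step-size selection, can be illustrated on the simplest possible "structure": a single spring with static response u = f/k, whose analytic sensitivity du/dk = -f/k^2 serves as the reference. The numbers are illustrative, but the trade-off they show (truncation error for large steps, round-off error for tiny ones) is the general phenomenon the review discusses.

```python
def displacement(k, f=1.0):
    # Static response of a one-DOF spring under load f.
    return f / k

def fd_sensitivity(k, h):
    # Forward-difference approximation of du/dk with step size h.
    return (displacement(k + h) - displacement(k)) / h

exact = -1.0 / 2.0 ** 2                              # du/dk = -f/k^2 at k = 2
err_large = abs(fd_sensitivity(2.0, 1e-1) - exact)   # truncation-dominated
err_good  = abs(fd_sensitivity(2.0, 1e-7) - exact)   # near-optimal step
err_tiny  = abs(fd_sensitivity(2.0, 1e-13) - exact)  # round-off-dominated
```

Plotting the error against h on log axes traces the familiar V-shape, with the minimum near the square root of machine precision for a forward difference.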
Global analysis of the immune response
NASA Astrophysics Data System (ADS)
Ribeiro, Leonardo C.; Dickman, Ronald; Bernardes, Américo T.
2008-10-01
The immune system may be seen as a complex system, characterized using tools developed in the study of such systems, for example, surface roughness and its associated Hurst exponent. We analyze densitometric (Panama blot) profiles of immune reactivity, to classify individuals into groups with similar roughness statistics. We focus on a population of individuals living in a region in which malaria is endemic, as well as a control group from a disease-free region. Our analysis groups individuals according to the presence, or absence, of malaria symptoms and number of malaria manifestations. Applied to the Panama blot data, our method proves more effective at discriminating between groups than principal-components analysis or super-paramagnetic clustering. Our findings provide evidence that some phenomena observed in the immune system can only be understood from a global point of view. We observe similar tendencies between experimental immune profiles and those of artificial profiles, obtained from an immune network model. The statistical entropy of the experimental profiles is found to exhibit variations similar to those observed in the Hurst exponent.
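The Hurst exponent used above can be estimated by classical rescaled-range (R/S) analysis. The sketch below is a textbook implementation, not the authors' code, applied to synthetic white noise, for which H is nominally 0.5 (the naive estimator is known to be biased slightly upward at these sample sizes).

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent: compute the rescaled range R/S over
    non-overlapping windows of dyadic sizes, then take the least-squares
    slope of log(R/S) against log(window size)."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            cum, lo, hi, dev = 0.0, 0.0, 0.0, 0.0
            for v in chunk:
                cum += v - mean          # cumulative deviation from mean
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (v - mean) ** 2
            std = math.sqrt(dev / size)
            if std > 0:
                rs.append((hi - lo) / std)
        log_sizes.append(math.log(size))
        log_rs.append(math.log(sum(rs) / len(rs)))
        size *= 2
    mx = sum(log_sizes) / len(log_sizes)
    my = sum(log_rs) / len(log_rs)
    return (sum((x - mx) * (y - my) for x, y in zip(log_sizes, log_rs))
            / sum((x - mx) ** 2 for x in log_sizes))

rng = random.Random(42)
white = [rng.gauss(0.0, 1.0) for _ in range(4096)]
h = hurst_rs(white)   # expect a value in the vicinity of 0.5
```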
Global analysis of nuclear parton distributions
NASA Astrophysics Data System (ADS)
de Florian, Daniel; Sassot, Rodolfo; Zurita, Pia; Stratmann, Marco
2012-04-01
We present a new global QCD analysis of nuclear parton distribution functions and their uncertainties. In addition to the most commonly analyzed data sets for the deep-inelastic scattering of charged leptons off nuclei and Drell-Yan dilepton production, we include also measurements for neutrino-nucleus scattering and inclusive pion production in deuteron-gold collisions. The analysis is performed at next-to-leading order accuracy in perturbative QCD in a general mass variable flavor number scheme, adopting a current set of free nucleon parton distribution functions, defined accordingly, as reference. The emerging picture is one of consistency, where universal nuclear modification factors for each parton flavor reproduce the main features of all data without any significant tension among the different sets. We use the Hessian method to estimate the uncertainties of the obtained nuclear modification factors and examine critically their range of validity in view of the sparse kinematic coverage of the present data. We briefly present several applications of our nuclear parton densities in hard nuclear reactions at BNL-RHIC, CERN-LHC, and a future electron-ion collider.
Global dynamics analysis of nappe oscillation
NASA Astrophysics Data System (ADS)
De Rosa, Fortunato; Girfoglio, Michele; de Luca, Luigi
2014-12-01
The unsteady global dynamics of a gravitational liquid sheet interacting with a one-sided adjacent air enclosure, typically referred to as nappe oscillation, is addressed under the assumptions of potential flow and absence of surface tension effects. For the purpose of shedding physical insight, the investigation examines both the dynamics and the energy aspects. An interesting re-formulation of the problem consists of recasting the nappe global behavior as a driven damped spring-mass oscillator, where the inertial effects are linked to the liquid sheet mass and the spring is represented by the equivalent stiffness of the air enclosure acting on the average displacement of the compliant nappe centerline. The investigation is carried out through a modal (i.e., time-asymptotic) and a non-modal (i.e., short-time transient) linear approach, which are corroborated by direct numerical simulations of the governing equation. The modal analysis shows that the flow system is characterized by low-frequency and high-frequency oscillations, the former related to the crossing time of the perturbations over the whole domain and the latter related to the spring-mass oscillator. The low-frequency oscillations, observed in real-life systems, are produced by the (linear) combination of multiple modes. The non-normality of the operator is responsible for short-time energy amplifications even in asymptotically stable configurations, which are confirmed by numerical simulations and justified by energy budget considerations. Strong analogies with the edge-tone problem are encountered; in particular, the integer-plus-one-quarter resonance criterion is uncovered, where the basic frequency to be multiplied by n + 1/4 is just the one related to the spacing among the imaginary parts of the eigenvalues.
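The spring-mass reformulation described above can be sketched with the free response of an equivalent damped oscillator, m x'' + c x' + k x = 0; the mass, stiffness, and damping values below are hypothetical placeholders, not the paper's nappe parameters:

```python
import math

m, k, c = 1.0, 4.0, 0.2  # hypothetical sheet mass, enclosure stiffness, damping

omega_n = math.sqrt(k / m)                      # undamped natural frequency
zeta = c / (2.0 * math.sqrt(k * m))             # damping ratio
omega_d = omega_n * math.sqrt(1.0 - zeta ** 2)  # damped oscillation frequency

# Semi-implicit Euler integration of the free response from x(0)=1, v(0)=0
x, v, dt = 1.0, 0.0, 1e-3
trace = []
for _ in range(20000):                          # 20 s of simulated time
    a = -(c * v + k * x) / m
    v += a * dt
    x += v * dt
    trace.append(x)
print(omega_n, zeta, omega_d, trace[-1])
```

The high-frequency branch of the spectrum corresponds to omega_d here; the decaying envelope of `trace` reflects the asymptotic stability that the modal analysis predicts.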
Sensitivity analysis of channel-bend hydraulics influenced by vegetation
NASA Astrophysics Data System (ADS)
Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.
2015-12-01
Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
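A variance-based first-order index of the kind used to rank channel against vegetation parameters can be sketched with a pick-freeze (Saltelli-style) estimator; the linear stand-in model and its coefficients below are invented for illustration and are not FaSTMECH:

```python
import random
import statistics

def model(channel_drag, veg_density, eddy_visc):
    """Hypothetical linear stand-in for a cross-stream hydraulic output."""
    return 3.0 * channel_drag + 1.0 * veg_density + 0.5 * eddy_visc

def sobol_first_order(f, dims, n=20000, seed=1):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dims)] for _ in range(n)]
    B = [[rng.random() for _ in range(dims)] for _ in range(n)]
    yA = [f(*r) for r in A]
    yB = [f(*r) for r in B]
    var = statistics.pvariance(yA)
    indices = []
    for i in range(dims):
        # "Pick-freeze": rows of A with only the i-th input taken from B
        yABi = [f(*[b[j] if j == i else a[j] for j in range(dims)])
                for a, b in zip(A, B)]
        indices.append(statistics.fmean(
            yb * (yab - ya) for ya, yb, yab in zip(yA, yB, yABi)) / var)
    return indices

S = sobol_first_order(model, dims=3)
print([round(s, 2) for s in S])  # the channel drag term should rank first
```

For this additive model the indices sum to about one and the ranking mirrors the coefficients, which is the kind of ordering the study extracts for bed roughness versus vegetation parameters.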
Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik
2015-01-16
We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.
Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change
NASA Technical Reports Server (NTRS)
Thompson, Anne M.; Stewart, Richard W.; Owens, Melody A.
1989-01-01
The sensitivities of tropospheric HO2 and hydrogen peroxide (H2O2) levels to increases in CH4, CO, and NO emissions and to changes in stratospheric O3 and tropospheric O3 and H2O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH4-CO-NO(x) emissions and global climate changes are used to predict HO2 and H2O2 changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH4 emissions will enhance H2O2; NO emissions will suppress H2O2 except in very low NO(x) regions. A global warming or stratospheric O3 depletion will add to H2O2. Hydrogen peroxide increases from 1980 to 2030 could be 100 percent or more in the urban boundary layer.
Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate.
Frank, David C; Esper, Jan; Raible, Christoph C; Büntgen, Ulf; Trouet, Valerie; Stocker, Benjamin; Joos, Fortunat
2010-01-28
The processes controlling the carbon flux and carbon storage of the atmosphere, ocean and terrestrial biosphere are temperature sensitive and are likely to provide a positive feedback leading to amplified anthropogenic warming. Owing to this feedback, at timescales ranging from interannual to the 20-100-kyr cycles of Earth's orbital variations, warming of the climate system causes a net release of CO2 into the atmosphere; this in turn amplifies warming. But the magnitude of the climate sensitivity of the global carbon cycle (termed gamma), and thus of its positive feedback strength, is under debate, giving rise to large uncertainties in global warming projections. Here we quantify the median gamma as 7.7 p.p.m.v. CO2 per °C warming, with a likely range of 1.7-21.4 p.p.m.v. CO2 per °C. Sensitivity experiments exclude significant influence of pre-industrial land-use change on these estimates. Our results, based on the coupling of a probabilistic approach with an ensemble of proxy-based temperature reconstructions and pre-industrial CO2 data from three ice cores, provide robust constraints for gamma on the policy-relevant multi-decadal to centennial timescales. By using an ensemble of >200,000 members, quantification of gamma is not only improved, but also likelihoods can be assigned, thereby providing a benchmark for future model simulations. Although uncertainties do not at present allow exclusion of gamma calculated from any of ten coupled carbon-climate models, we find that gamma is about twice as likely to fall in the lowermost than in the uppermost quartile of their range. Our results are incompatibly lower (P < 0.05) than recent pre-industrial empirical estimates of approximately 40 p.p.m.v. CO2 per °C (refs 6, 7), and correspondingly suggest approximately 80% less potential amplification of ongoing global warming.
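Operationally, gamma is a regression slope of CO2 anomalies on temperature anomalies. A minimal least-squares sketch follows; the six data points are synthetic illustrations, not the reconstruction ensemble or ice-core data:

```python
import statistics

# Synthetic decadal anomalies, invented for illustration only
temp_anom = [-0.2, -0.1, 0.0, 0.1, 0.3, 0.4]   # °C
co2_anom = [-1.5, -0.9, 0.1, 0.9, 2.2, 3.1]    # p.p.m.v.

def slope(x, y):
    """Least-squares slope of y against x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

gamma = slope(temp_anom, co2_anom)  # p.p.m.v. CO2 per °C
print(round(gamma, 1))
```

The paper's probabilistic approach repeats this kind of fit across >200,000 reconstruction-ice core pairings to assign likelihoods to gamma rather than report a single slope.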
Extended forward sensitivity analysis of one-dimensional isothermal flow
Johnson, M.; Zhao, H.
2013-07-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity to the time step relative to that of other physical parameters, the simulation can be run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code.
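Forward sensitivity analysis augments the governing equations with sensitivity equations integrated alongside the state. A minimal sketch for the scalar decay problem du/dt = -a*u (a toy stand-in, not the paper's 1-D flow equations): differentiating the ODE with respect to the parameter a gives ds/dt = -a*s - u for the sensitivity s = du/da.

```python
import math

a, u0, dt, t_end = 0.5, 1.0, 1e-4, 2.0
u, s = u0, 0.0
for _ in range(int(t_end / dt)):   # explicit Euler on state + sensitivity
    du = -a * u
    ds = -a * s - u                # the ODE differentiated with respect to a
    u += dt * du
    s += dt * ds

exact_s = -t_end * u0 * math.exp(-a * t_end)   # analytic du/da at t_end
print(u, s, exact_s)
```

The same augmentation applied to the time step itself is what lets the method quantify time-discretization error without separate convergence studies.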
Attainability analysis in the stochastic sensitivity control
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2015-02-01
For a nonlinear stochastic dynamic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity functions technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for the stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.
Quantifying PM2.5-meteorology sensitivities in a global climate model
NASA Astrophysics Data System (ADS)
Westervelt, D. M.; Horowitz, L. W.; Naik, V.; Tai, A. P. K.; Fiore, A. M.; Mauzerall, D. L.
2016-10-01
Climate change can influence fine particulate matter concentrations (PM2.5) through changes in air pollution meteorology. Knowledge of the extent to which climate change can exacerbate or alleviate air pollution in the future is needed for robust climate and air pollution policy decision-making. To examine the influence of climate on PM2.5, we use the Geophysical Fluid Dynamics Laboratory Coupled Model version 3 (GFDL CM3), a fully coupled chemistry-climate model, combined with future emissions and concentrations provided by the four Representative Concentration Pathways (RCPs). For each of the RCPs, we conduct future simulations in which emissions of aerosols and their precursors are held at 2005 levels while other climate forcing agents evolve in time, such that only climate (and thus meteorology) can influence PM2.5 surface concentrations. We find a small increase in global, annual mean PM2.5 of about 0.21 μg m⁻³ (5%) for RCP8.5, a scenario with maximum warming. Changes in global mean PM2.5 are at a maximum in the fall and are mainly controlled by sulfate, followed by organic aerosol, with minimal influence of black carbon. RCP2.6 is the only scenario that projects a decrease in global PM2.5 with future climate changes, albeit only by -0.06 μg m⁻³ (1.5%) by the end of the 21st century. Regional and local changes in PM2.5 are larger, reaching upwards of 2 μg m⁻³ for polluted (eastern China) and dusty (western Africa) locations on an annually averaged basis in RCP8.5. Using multiple linear regression, we find that future PM2.5 concentrations are most sensitive to local temperature, followed by surface wind and precipitation. PM2.5 concentrations are robustly positively associated with temperature, and negatively associated with precipitation and wind speed. Present-day (2006-2015) modeled sensitivities of PM2.5 to meteorological variables are evaluated against observations and found to agree reasonably well with observed sensitivities (within 10-50% over the
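The meteorological sensitivities come from multiple linear regression. A self-contained sketch on synthetic data reproduces the reported sign pattern; the coefficients, noise level, and sample here are invented, not GFDL CM3 output:

```python
import random
import statistics

def ols(X, y):
    """Ordinary least squares via the normal equations (Gaussian elimination)."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Synthetic PM2.5: rises with temperature, falls with wind and precipitation
rng = random.Random(0)
rows, y = [], []
for _ in range(500):
    temp, wind, precip = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
    rows.append([1.0, temp, wind, precip])     # leading 1.0 is the intercept
    y.append(10 + 0.8 * temp - 0.5 * wind - 0.3 * precip + rng.gauss(0, 0.1))

beta = ols(rows, y)
print([round(v, 2) for v in beta])
```

The recovered coefficients are the regression sensitivities: positive for temperature, negative for wind and precipitation, matching the qualitative result above.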
NASA Technical Reports Server (NTRS)
Watkins, A. Neal; Leighty, Bradley D.; Lipford, William E.; Wong, Oliver D.; Oglesby, Donald M.; Ingram, JoAnne L.
2007-01-01
This paper will describe the results from a proof of concept test to examine the feasibility of using Pressure Sensitive Paint (PSP) to measure global surface pressures on rotorcraft blades in hover. The test was performed using the U.S. Army 2-meter Rotor Test Stand (2MRTS) and 15% scale swept rotor blades. Data were collected from five blades using both the intensity- and lifetime-based approaches. This paper will also outline several modifications and improvements that are underway to develop a system capable of measuring pressure distributions on up to four blades simultaneously at hover and forward flight conditions.
A pathway analysis of global aerosol processes
NASA Astrophysics Data System (ADS)
Schutgens, Nick; Stier, Philip
2014-05-01
smaller modes. Our analysis also suggests that coagulation serves mainly as a loss process for number densities and that it is a relatively unimportant contributor to composition changes of aerosol. Our results provide an objective way of complexity analysis in a global aerosol model and will be used in future work where we will reduce this complexity in ECHAM-HAM.
Implementation of efficient sensitivity analysis for optimization of large structures
NASA Technical Reports Server (NTRS)
Umaretiya, J. R.; Kamil, H.
1990-01-01
The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.
Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
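The global and local error indexes can be illustrated with a toy radial-error check of a printed sphere against its reference geometry; the measured radii below are synthetic values that mimic the reported overestimation:

```python
import math

ref_radius = 10.0  # reference model radius, arbitrary units
# Synthetic measured radii sampled over the printed surface (slightly oversized)
measured = [10.12, 10.08, 10.15, 10.05, 10.11, 10.09]

local_err = [m - ref_radius for m in measured]  # local index: signed error
rms_err = math.sqrt(sum(e * e for e in local_err) / len(local_err))  # global index
print(round(rms_err, 3), round(max(local_err), 2))
```

All local errors being positive is the overestimation signature; in the paper, mapping local errors over the surface is what localizes them at high-curvature regions.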
Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio
2006-01-01
Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50(deg)/75(deg)C, -55(deg)/100(deg)C, and -55(deg)/125(deg)C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature da to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing model's predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of the stress/strain in CCGA assembly under different thermal cycling, and to explain the different failure modes and locations observed in JPL test assemblies.
Global-local methodologies and their application to nonlinear analysis
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1989-01-01
An assessment is made of the potential of different global-local analysis strategies for predicting the nonlinear and postbuckling responses of structures. Two postbuckling problems of composite panels are used as benchmarks and the application of different global-local methodologies to these benchmarks is outlined. The key elements of each of the global-local strategies are discussed and future research areas needed to realize the full potential of global-local methodologies are identified.
Global analysis of photosynthesis transcriptional regulatory networks.
Imam, Saheed; Noguera, Daniel R; Donohue, Timothy J
2014-12-01
Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis.
Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K.
2011-01-01
There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a “food first” approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO2 on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y−1, depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires to be balanced between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals against external noise emitted by predators, the random walk of the seal pup takes two different search states, normal state and urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is realized by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particles Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global
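A toy one-dimensional sketch of the normal/urgent two-state walk (Brownian steps versus heavy-tailed Levy-like jumps, switching on stagnation) follows; all scales, thresholds, and the test function are invented for illustration and are not from the paper:

```python
import math
import random

def ringed_seal_search(f, lo, hi, iters=3000, seed=42):
    """Toy RSS sketch: Brownian steps (normal state) or heavy-tailed jumps
    (urgent state); stagnation plays the role of predator noise."""
    rng = random.Random(seed)
    x = best = rng.uniform(lo, hi)
    fbest = f(best)
    urgent, stall = False, 0
    for _ in range(iters):
        if urgent:   # extensive search: Levy-like heavy-tailed jump
            step = rng.choice((-1.0, 1.0)) * 0.1 * (1.0 - rng.random()) ** -1.0
        else:        # intensive search: Brownian walk between nearby "lairs"
            step = rng.gauss(0.0, 0.05)
        cand = min(hi, max(lo, x + step))
        if f(cand) < fbest:
            best, fbest = cand, f(cand)
            x, urgent, stall = cand, False, 0
        else:
            stall += 1
            if stall > 20:            # stagnation: switch to the urgent state
                urgent, stall = True, 0
            x = cand if rng.random() < 0.1 else best
    return best, fbest

# Multimodal test function with its global minimum at x = 0
best, fbest = ringed_seal_search(lambda x: x * x + math.sin(5.0 * x) ** 2,
                                 -4.0, 4.0)
print(round(best, 2), round(fbest, 3))
```

The Brownian state exploits the current basin; the heavy-tailed jumps let the walk escape local minima, which is the exploration/exploitation balance the abstract describes.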
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
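The UA/GSA idea can be sketched with plain Monte Carlo on a toy HSI model; the suitability curves, input distributions, and parameter names below are invented for illustration, not the Everglades SAV models:

```python
import random
import statistics

def hsi(salinity, depth, light):
    """Hypothetical HSI: geometric mean of three suitability indices in [0, 1]."""
    si_sal = max(0.0, 1.0 - abs(salinity - 20.0) / 20.0)
    si_dep = max(0.0, 1.0 - abs(depth - 1.0) / 2.0)
    si_lig = min(1.0, light / 100.0)
    return (si_sal * si_dep * si_lig) ** (1.0 / 3.0)

rng = random.Random(7)
samples = [(rng.gauss(20.0, 5.0), rng.gauss(1.0, 0.3), rng.uniform(40.0, 100.0))
           for _ in range(5000)]
scores = [hsi(*s) for s in samples]

# UA: spread of the output under input uncertainty
print(round(statistics.fmean(scores), 2), round(statistics.pstdev(scores), 2))

# GSA (crude): rank inputs by |correlation| with the output
def corr(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

for name, col in zip(("salinity", "depth", "light"), range(3)):
    print(name, round(abs(corr([s[col] for s in samples], scores)), 2))
```

The spread of `scores` is the uncertainty analysis; the correlation ranking is a crude stand-in for the variance-based sensitivity indices used in the GSA/UA framework.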
Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D
2015-01-01
Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
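The variance-based global sensitivity analysis (GSA) used in studies like the one above can be illustrated with a minimal Monte Carlo sketch. The snippet below is a generic first-order Sobol index estimator in Saltelli's pick-freeze form applied to a toy additive model; the toy model, sample size, and uniform input assumption are illustrative choices, not the HSI models or hydrologic inputs of the abstract.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """First-order Sobol indices via Saltelli's pick-freeze estimator.
    Inputs are assumed i.i.d. uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # swap only the i-th input column
        S[i] = np.mean(yB * (model(ABi) - yA)) / var_y
    return S

# Additive toy model y = x1 + 2*x2: analytic variance shares are 0.2 and 0.8.
toy = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = first_order_sobol(toy, 2)
```

With a few tens of thousands of samples the estimates land within a few percent of the analytic shares, which is the kind of convergence behavior the sample-size studies discussed above are concerned with.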
Frey, H Christopher
2002-06-01
This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11-12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to learn across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food-safety process risk models. The workshop included presentations regarding the Hazard Assessment and Critical Control Points (HACCP) framework used in food-safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Based on the sharing of interdisciplinary information represented by the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food-safety risk assessment? What sensitivity analysis methods are most promising for application to food safety and risk assessment? and What are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Sensitivity analysis of Stirling engine design parameters
Naso, V.; Dong, W.; Lucentini, M.; Capata, R.
1998-07-01
In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle, and dead volume ratio) have to be assumed; in practice it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations in these parameters.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear antagonistic-muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Sensitivity Analysis of Situational Awareness Measures
NASA Technical Reports Server (NTRS)
Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)
2000-01-01
A great deal of effort has been invested in attempts to define situational awareness (SA), and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation is frozen and blanked in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point at which it was frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine sensitivity not only within a measure, but also between measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames
NASA Astrophysics Data System (ADS)
Centoni, Federico; Stevenson, David; Fowler, David; Nemitz, Eiko; Coyle, Mhairi
2015-04-01
Concentrations of ozone at the surface are strongly affected by deposition to the surface. Deposition processes are very sensitive to temperature and relative humidity at the surface and are expected to respond to global change, with implications for both air quality and ecosystem services. Many studies have shown that ozone stomatal uptake by vegetation typically accounts for 40-60% of total deposition on average, while the remainder, which occurs through non-stomatal pathways, is not constant. Flux measurements show that non-stomatal removal increases with temperature and under wet conditions. There are large uncertainties in parameterising the non-stomatal ozone deposition term in climate chemistry models, and model predictions vary greatly. In addition, different model treatments of dry deposition constitute a source of inter-model variability in surface ozone predictions. The main features of the original Unified Model-UK Chemistry and Aerosols (UM-UKCA) dry deposition scheme and the Zhang et al. 2003 scheme, which introduces into UM-UKCA a more developed non-stomatal deposition approach, are presented. This study also estimates the relative contributions of ozone flux via stomatal and non-stomatal uptake at the global scale, and explores the sensitivity of simulated surface ozone and ozone deposition flux by implementing different non-stomatal parameterization terms. With a view to exploring the potential influence of future climate, we present results showing the effects of variations in some meteorological parameters on present-day (2000) global ozone predictions. In particular, this study revealed that the implementation of a more mechanistic representation of non-stomatal deposition in the UM-UKCA model, along with a decreased stomatal uptake due to the effect of blocking under wet conditions, accounted for a substantial reduction of ozone fluxes to broadleaf trees in the tropics with an increase of annual mean surface ozone. In contrast, a large increase of
Global Analysis of Photosynthesis Transcriptional Regulatory Networks
Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.
2014-01-01
Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis. PMID:25503406
Biogeochemistry, An Analysis of Global Change
NASA Astrophysics Data System (ADS)
Leavit, Steven W.
Compared to the well-established disciplines, the field of Earth system science/global change has relatively few books from which to choose. Of the small subset of books dealing specifically with biogeochemical aspects of global change, the first edition of Schlesinger's Biogeochemistry in 1991 was an early entry. It has since gained sufficient popularity and demand to merit a second, extensively revised edition. The first part of the book provides a general introduction to biogeochemistry and cycles, and to the origin of elements, our planet, and life on Earth. It then describes the functioning and biogeochemistry of the atmosphere, lithosphere, biosphere, and hydrosphere, including marine and freshwater systems. Although system function and features are stressed, the author begins to introduce global change topics, such as soil organic matter and global change in Chapter 5, and landscape and mass balance in Chapter 6.
Partial Differential Algebraic Sensitivity Analysis Code
1995-05-15
PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical, or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed, with members of order 0 or 1 in t and 0, 1, or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.
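The simultaneous computation of parametric sensitivities that PDASAC performs can be sketched, in miniature, with the forward sensitivity equations for a single ODE: augmenting the state with s = dx/dp, which obeys ds/dt = (df/dx)s + df/dp. The decay model, parameter value, and solver tolerances below are illustrative assumptions, not PDASAC's actual formulation.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward sensitivity equations for dx/dt = -p*x:
# the sensitivity s = dx/dp obeys ds/dt = -p*s - x, integrated alongside x.
def rhs(t, z, p):
    x, s = z
    return [-p * x, -p * s - x]

p, x0 = 0.7, 2.0
sol = solve_ivp(rhs, (0.0, 3.0), [x0, 0.0], args=(p,), rtol=1e-10, atol=1e-12)
x_end, s_end = sol.y[:, -1]
# analytic check: x(t) = x0*exp(-p*t), so dx/dp = -t*x0*exp(-p*t)
```

The same augmentation scales to systems of equations, which is why codes like PDASAC can deliver sensitivities at modest extra cost relative to the state solve alone.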
Sensitivity analysis of limit cycles with application to the Brusselator
Larter, R.; Rabitz, H.; Kramer, M.
1984-05-01
Sensitivity analysis, by which it is possible to determine the dependence of the solution of a system of differential equations on variations in the parameters, is applied to systems which have a limit cycle solution in some region of parameter space. The resulting expressions for the sensitivity coefficients, which are the gradients of the limit cycle solution in parameter space, are analyzed by a Fourier series approach; the sensitivity coefficients are found to contain information on the sensitivity of the period and other features of the limit cycle. The intimate relationship between Lyapunov stability analysis and sensitivity analysis is discussed. The results of our general derivation are applied to two limit cycle oscillators: (1) an exactly soluble two-species oscillator and (2) the Brusselator.
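A crude numerical counterpart to the period sensitivity discussed above can be obtained by integrating the Brusselator and differencing the estimated period with respect to B. This replaces the paper's Fourier-series treatment with plain finite differences; the parameter values, initial condition, and crossing-based period estimator are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def brusselator(t, z, A, B):
    x, y = z
    return [A + x * x * y - (B + 1.0) * x, B * x - x * x * y]

def limit_cycle_period(A, B):
    """Integrate past the transient, then estimate the period from
    upward mean-crossings of x (linearly interpolated in time)."""
    t = np.linspace(100.0, 200.0, 20001)      # discard t < 100 as transient
    sol = solve_ivp(brusselator, (0.0, 200.0), [1.0, 1.0], args=(A, B),
                    t_eval=t, rtol=1e-9, atol=1e-9)
    x = sol.y[0]
    xm = x.mean()
    i = np.where((x[:-1] < xm) & (x[1:] >= xm))[0]
    # interpolating each crossing time reduces the grid-spacing error
    tc = t[i] + (t[1] - t[0]) * (xm - x[i]) / (x[i + 1] - x[i])
    return np.diff(tc).mean()

# Central-difference sensitivity of the period T to the parameter B
A, B, h = 1.0, 3.0, 0.05
T = limit_cycle_period(A, B)
dT_dB = (limit_cycle_period(A, B + h) - limit_cycle_period(A, B - h)) / (2 * h)
```

For A = 1 the Hopf bifurcation sits at B = 2 with period near 2π, and the period lengthens as B grows, so the differenced sensitivity comes out positive.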
Aero-Structural Interaction, Analysis, and Shape Sensitivity
NASA Technical Reports Server (NTRS)
Newman, James C., III
1999-01-01
A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
Global sensitivity of high-resolution estimates of crop water footprint
NASA Astrophysics Data System (ADS)
Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca
2015-10-01
Most of the human appropriation of freshwater resources is for agriculture. Water availability is a major constraint to mankind's ability to produce food. The notion of virtual water content (VWC), also known as crop water footprint, provides an effective tool to investigate the linkage between food and water resources as a function of climate, soil, and agricultural practices. The spatial variability in the virtual water content of crops is here explored, disentangling its dependency on climate and crop yields and assessing the sensitivity of VWC estimates to parameter variability and uncertainty. Here we calculate the virtual water content of four staple crops (i.e., wheat, rice, maize, and soybean) for the entire world developing a high-resolution (5 × 5 arc min) model, and we evaluate the VWC sensitivity to input parameters. We find that food production almost entirely depends on green water (>90%), but, when applied, irrigation makes crop production more water efficient, thus requiring less water. The spatial variability of the VWC is mostly controlled by the spatial patterns of crop yields with an average correlation coefficient of 0.83. The results of the sensitivity analysis show that wheat is most sensitive to the length of the growing period, rice to reference evapotranspiration, maize and soybean to the crop planting date. The VWC sensitivity varies not only among crops, but also across the harvested areas of the world, even at the subnational scale.
NASA Astrophysics Data System (ADS)
Poulter, Benjamin; Cadule, Patricia; Cheiney, Audrey; Ciais, Philippe; Hodson, Elke; Peylin, Philippe; Plummer, Stephen; Spessa, Allan; Saatchi, Sassan; Yue, Chao; Zimmermann, Niklaus E.
2015-02-01
Fire plays an important role in terrestrial ecosystems by regulating biogeochemistry, biogeography, and energy budgets, yet despite the importance of fire as an integral ecosystem process, significant advances remain to improve its prognostic representation in carbon cycle models. To recommend and to help prioritize model improvements, this study investigates the sensitivity of a coupled global biogeography and biogeochemistry model, LPJ, to observed burned area measured by three independent satellite-derived products, GFED v3.1, L3JRC, and GlobCarbon. Model variables are compared with benchmarks that include pantropical aboveground biomass, global tree cover, and CO2 and CO trace gas concentrations. Depending on prescribed burned area product, global aboveground carbon stocks varied by 300 Pg C, and woody cover ranged from 50 to 73 Mkm2. Tree cover and biomass were both reduced linearly with increasing burned area, i.e., at regional scales, a 10% reduction in tree cover per 1000 km2, and 0.04-to-0.40 Mg C reduction per 1000 km2. In boreal regions, satellite burned area improved simulated tree cover and biomass distributions, but in savanna regions, model-data correlations decreased. Global net biome production was relatively insensitive to burned area, and the long-term land carbon sink was robust, ~2.5 Pg C yr-1, suggesting that feedbacks from ecosystem respiration compensated for reductions in fuel consumption via fire. CO2 transport provided further evidence that heterotrophic respiration compensated any emission reductions in the absence of fire, with minor differences in modeled CO2 fluxes among burned area products. CO was a more sensitive indicator for evaluating fire emissions, with MODIS-GFED burned area producing CO concentrations largely in agreement with independent observations in high latitudes. This study illustrates how ensembles of burned area data sets can be used to diagnose model structures and parameters for further improvement and also
Advanced Fuel Cycle Economic Sensitivity Analysis
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health
2010-01-01
It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all. PMID:20849605
Martens, Pim; Akin, Su-Mia; Maud, Huynen; Mohsin, Raza
2010-09-17
Global Human Settlement Analysis for Disaster Risk Reduction
NASA Astrophysics Data System (ADS)
Pesaresi, M.; Ehrlich, D.; Ferri, S.; Florczyk, A.; Freire, S.; Haag, F.; Halkia, M.; Julea, A. M.; Kemper, T.; Soille, P.
2015-04-01
The Global Human Settlement Layer (GHSL) is supported by the European Commission, Joint Research Center (JRC), in the frame of its institutional research activities. The scope of GHSL is to develop, test, and apply the technologies and analysis methods integrated in the JRC Global Human Settlement analysis platform for applications in support of global disaster risk reduction (DRR) initiatives and of regional analysis in the frame of the European Cohesion policy. The GHSL analysis platform uses geospatial data, primarily remotely sensed imagery and population data. GHSL also cooperates with the Group on Earth Observation on SB-04 Global Urban Observation and Information, and with various international partners and World Bank and United Nations agencies. Some preliminary results integrating global human settlement information extracted from Landsat data records of the last 40 years with population data are presented.
NASA Technical Reports Server (NTRS)
Malone, Brett; Mason, W. H.
1992-01-01
An extension of our parametric multidisciplinary optimization method to include design results connecting multiple objective functions is presented. New insight into the effect of the figure of merit (objective function) on aircraft configuration size and shape is demonstrated using this technique. An aircraft concept, subject to performance and aerodynamic constraints, is optimized using the global sensitivity equation method for a wide range of objective functions. These figures of merit are described parametrically such that a series of multiobjective optimal solutions can be obtained. Computational speed is facilitated by use of algebraic representations of the system technologies. Using this method, the evolution of an optimum design from one objective function to another is demonstrated. Specifically, combinations of minimum takeoff gross weight, fuel weight, and maximum cruise performance and productivity parameters are used as objective functions.
Perez, Romel B; Tischer, Alexander; Auton, Matthew; Whitten, Steven T
2014-12-01
Molecular transduction of biological signals is understood primarily in terms of the cooperative structural transitions of protein macromolecules, providing a mechanism through which discrete local structure perturbations affect global macromolecular properties. The recognition that proteins lacking tertiary stability, commonly referred to as intrinsically disordered proteins (IDPs), mediate key signaling pathways suggests that protein structures without cooperative intramolecular interactions may also have the ability to couple local and global structure changes. Presented here are results from experiments that measured and tested the ability of disordered proteins to couple local changes in structure to global changes in structure. Using the intrinsically disordered N-terminal region of the p53 protein as an experimental model, a set of proline (PRO) and alanine (ALA) to glycine (GLY) substitution variants were designed to modulate backbone conformational propensities without introducing non-native intramolecular interactions. The hydrodynamic radius (Rh) was used to monitor changes in global structure. Circular dichroism spectroscopy showed that the GLY substitutions decreased polyproline II (PPII) propensities relative to the wild type, as expected, and fluorescence methods indicated that substitution-induced changes in Rh were not associated with folding. The experiments showed that changes in local PPII structure cause changes in Rh that are variable and that depend on the intrinsic chain propensities of PRO and ALA residues, demonstrating a mechanism for coupling local and global structure changes. Molecular simulations that model our results were used to extend the analysis to other proteins and illustrate the generality of the observed PRO and ALA effects on the structures of IDPs.
An Analysis of Solar Global Activity
NASA Astrophysics Data System (ADS)
Mouradian, Zadig
2013-02-01
This article proposes a unified observational model of solar activity based on the sunspot number and the solar global activity in the rotation of structures, both per 11-year cycle. The rotation rates show a variation with a half-century period, and the same period is also associated with the sunspot amplitude variation. The global solar rotation interweaves with the observed global organisation of solar activity. An important role in this assembly is played by the Grand Cycle, formed by the merging of five sunspot cycles: a forgotten discovery of R. Wolf. On the basis of these elements, the nature of the Dalton Minimum, the Maunder Minimum, the Gleissberg Cycle, and the Grand Minima is presented.
Sensitivity Analysis in Complex Plasma Chemistry Models
NASA Astrophysics Data System (ADS)
Turner, Miles
2015-09-01
The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of the helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
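The Morris screening procedure mentioned above can be sketched generically: each elementary effect is a one-at-a-time finite difference along a random trajectory, and mu* (the mean absolute effect) ranks the inputs. The toy "model" below stands in for a plasma chemistry code, with one dominant and one inert rate constant; the trajectory count and level grid are conventional but arbitrary choices.

```python
import numpy as np

def morris_screening(model, n_inputs, n_trajectories=50, n_levels=4, seed=0):
    """Crude Morris (elementary effects) screening on the unit hypercube.
    Returns mu*, the mean absolute elementary effect, per input."""
    rng = np.random.default_rng(seed)
    delta = n_levels / (2.0 * (n_levels - 1))   # standard Morris step
    effects = np.zeros((n_trajectories, n_inputs))
    for t in range(n_trajectories):
        # random base point on the level grid, leaving headroom for +delta
        x = rng.integers(0, n_levels // 2, n_inputs) / (n_levels - 1)
        y = model(x)
        for i in rng.permutation(n_inputs):     # perturb inputs in random order
            x_new = x.copy()
            x_new[i] += delta
            y_new = model(x_new)
            effects[t, i] = abs((y_new - y) / delta)
            x, y = x_new, y_new
    return effects.mean(axis=0)   # larger mu* => more influential input

# Toy "rate-constant" model: the first input dominates, the third is inert.
toy = lambda x: 10.0 * x[0] + x[1] ** 2 + 0.0 * x[2]
mu_star = morris_screening(toy, 3)
```

The appeal for a four-hundred-reaction chemistry set is cost: screening needs only n_trajectories * (n_inputs + 1) model runs, far fewer than variance-based methods.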
Selecting step sizes in sensitivity analysis by finite differences
NASA Technical Reports Server (NTRS)
Iott, J.; Haftka, R. T.; Adelman, H. M.
1985-01-01
This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
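The trade-off that step-size selection algorithms negotiate can be seen in a few lines: for central differences the error is the sum of an O(h^2) truncation term and an O(eps/h) round-off term, so a near-optimal h sits close to eps^(1/3), around 1e-5 in double precision. The test function here is an arbitrary stand-in for a structural response, not the swept-wing model of the paper.

```python
import numpy as np

def central_diff(f, x, h):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Sweep step sizes for f(x) = exp(x) at x = 1, where f'(1) = e exactly.
f, x, exact = np.exp, 1.0, np.exp(1.0)
steps = 10.0 ** np.arange(-1, -13, -1)
errors = [abs(central_diff(f, x, h) - exact) for h in steps]
best = steps[int(np.argmin(errors))]
# Too large a step -> truncation error dominates; too small -> round-off
# noise dominates; the minimum of the error curve marks the usable range.
```

Plotting `errors` against `steps` on log-log axes shows the characteristic V shape that trial-and-error step selection is, in effect, searching by hand.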
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
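A generic version of the Monte Carlo plus linear sensitivity combination can be sketched with standardized regression coefficients (SRCs): sample the uncertain inputs, run the model, then regress the standardized output on the standardized inputs. The input names and the toy colony response below are hypothetical stand-ins, not VarroaPop's actual parameters or equations.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
# Hypothetical stand-ins for uncertain hive inputs (not VarroaPop's real API):
queen_strength = rng.uniform(0.5, 1.0, n)
forage_success = rng.uniform(0.2, 0.9, n)
pesticide_dose = rng.uniform(0.0, 1.0, n)

# Toy response: colony size after one season (illustrative only)
colony = 2e4 * queen_strength * forage_success * np.exp(-1.5 * pesticide_dose)

# Linear sensitivity via standardized regression coefficients (SRCs):
# standardize inputs and output, then least-squares fit
X = np.column_stack([queen_strength, forage_success, pesticide_dose])
Xs = (X - X.mean(0)) / X.std(0)
ys = (colony - colony.mean()) / colony.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
```

The sign and magnitude of each SRC give a first-order ranking of the inputs; here the dose coefficient comes out negative, as the exponential decay term dictates.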
Shen, W.; Tuleya, R.E.; Ginis, I.
2000-01-01
In this study, the effect of thermodynamic environmental changes on hurricane intensity is extensively investigated with the National Oceanic and Atmospheric Administration Geophysical Fluid Dynamics Laboratory hurricane model, for a suite of experiments with different initial upper-tropospheric temperature anomalies of up to ±4 °C and sea surface temperatures ranging from 26 to 31 °C, given the same relative humidity profile. The results indicate that stabilization of the environmental atmosphere and sea surface temperature (SST) increase cause opposing effects on hurricane intensity. The offsetting relationship between the effects of atmospheric stability increase (decrease) and SST increase (decrease) is monotonic and systematic in the parameter space. This implies that the hurricane intensity increase due to possible global warming associated with increased CO2 is considerably smaller than that expected from warming of the oceanic waters alone. The results also indicate that the intensity of stronger (weaker) hurricanes is more (less) sensitive to atmospheric stability and SST changes. The model-attained hurricane intensity is found to be well correlated with the maximum surface evaporation and the large-scale environmental convective available potential energy. The model-attained hurricane intensity is highly correlated with the energy available from wet-adiabatic ascent near the eyewall, relative to a reference sounding in the undisturbed environment, for all the experiments. Coupled hurricane-ocean experiments show that hurricane intensity becomes less sensitive to atmospheric stability and SST changes, since ocean coupling causes larger (smaller) intensity reduction for stronger (weaker) hurricanes. This implies a smaller increase of hurricane intensity related to possible global warming due to increased CO2.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
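The central finite difference baseline that the adjoint sensitivities are validated against can be sketched generically. The response function `f` below is a toy quadratic stand-in, not the paper's FDTD model; the point is the cost contrast: two extra evaluations per parameter here, versus at most one extra simulation total for the adjoint method.

```python
import numpy as np

def central_difference_sensitivities(response, params, h=1e-6):
    """Estimate d(response)/d(p_i) by central differences.

    Costs two extra model evaluations per parameter, in contrast to
    the adjoint approach, which needs at most one extra simulation
    regardless of the number of parameters.
    """
    params = np.asarray(params, dtype=float)
    grads = np.empty_like(params)
    for i in range(params.size):
        step = h * max(1.0, abs(params[i]))  # scale the step to the parameter
        p_plus, p_minus = params.copy(), params.copy()
        p_plus[i] += step
        p_minus[i] -= step
        grads[i] = (response(p_plus) - response(p_minus)) / (2.0 * step)
    return grads

# Toy quadratic response (a stand-in for a scattering parameter)
f = lambda p: p[0] ** 2 + 3.0 * p[1]
print(central_difference_sensitivities(f, [2.0, 1.0]))  # ≈ [4., 3.]
```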
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; Li, Dan
2015-04-27
The critical bulk Richardson number (Ricr) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model, BCC_AGCM, to Ricr. The results show that the simulated global average of PBL height increases nearly linearly with Ricr, with a change of about 114 m for a change of 0.5 in Ricr. The surface sensible (latent) heat flux decreases (increases) as Ricr increases. The influence of Ricr on surface air temperature and specific humidity is not significant. The increasing Ricr may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ricr affect stratiform and convective precipitations differently. Increasing Ricr leads to an increase in the stratiform precipitation but a decrease in the convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and the Middle East.
Tropical interannual variability in a global coupled GCM: Sensitivity to mean climate state
Moore, A.M.
1995-04-01
A global coupled ocean-atmosphere-sea ice general circulation model is used to study interannual variability in the Tropics. Flux correction is used to control the mean climate of the coupled system, and in one configuration of the coupled model, interannual variability in the tropical Pacific is dominated by westward moving anomalies. Through a series of experiments in which the equatorial ocean wave speeds and ocean-atmosphere coupling strength are varied, it is demonstrated that these westward moving disturbances are probably some manifestation of what Neelin describes as an "SST mode." By modifying the flux correction procedure, the mean climate of the coupled model can be changed. A fairly modest change in the mean climate is all that is required to excite eastward moving anomalies in place of the westward moving SST modes found previously. The apparent sensitivity of the nature of tropical interannual variability to the mean climate state in a coupled general circulation model such as that used here suggests that caution is advisable if we try to use such models to answer questions relating to changes in ENSO-like variability associated with global climate change. 41 refs., 23 figs., 1 tab.
NASA Astrophysics Data System (ADS)
Storto, Andrea; Yang, Chunxue; Masina, Simona
2016-05-01
The global ocean heat content evolution is a key component of the Earth's energy budget and can be consistently determined by ocean reanalyses that assimilate hydrographic profiles. This work investigates the impact of the atmospheric reanalysis forcing through a multiforcing ensemble ocean reanalysis, where the ensemble members are forced by five state-of-the-art atmospheric reanalyses during the meteorological satellite era (1979-2013). Data assimilation leads the ensemble to converge toward robust estimates of ocean warming rates and significantly reduces the spread (1.48 ± 0.18 W/m2, per unit area of the World Ocean); hence, the impact of the atmospheric forcing appears only marginal for the global heat content estimates in both upper and deeper oceans. A sensitivity assessment performed through realistic perturbation of the main sources of uncertainty in ocean reanalyses highlights that bias correction and preprocessing of in situ observations represent the most crucial component of the reanalysis, whose perturbation accounts for up to 60% of the ocean heat content anomaly variability in the pre-Argo period. Although these results may depend on the single reanalysis system used, they reveal useful information for the ocean observation community and for the optimal generation of perturbations in ocean ensemble systems.
Mathematical Modeling and Sensitivity Analysis of Acid Deposition
NASA Astrophysics Data System (ADS)
Cho, Seog-Yeon
Atmospheric processes influencing acid deposition are investigated by using a mathematical model and sensitivity analysis. Sensitivity analysis techniques including Green's function analysis, constraint sensitivities, and lumped sensitivities are applied to temporal problems describing gas- and liquid-phase chemistry and to space-time problems describing pollutant transport and deposition. The sensitivity analysis techniques are used to (1) investigate the chemical and physical processes related to acid deposition and (2) evaluate the linearity hypothesis, and source and receptor relationships. Results from analysis of the chemistry processes show that the relationship between SO₂ concentration and the amount of sulfate produced is linear in the gas phase but may be nonlinear in the liquid phase when there exists an excess amount of SO₂ compared to H₂O₂. Under the simulated conditions, the deviation from linearity between ambient sulfur present and the amount of sulfur deposited after 2 hours is less than 10% in a convective storm situation when the liquid-phase chemistry, gas-phase chemistry, and cloud processes are considered simultaneously. Efficient ways of sensitivity analysis of time-space problems are also developed and used to evaluate the source and receptor relationships in an Eulerian transport, chemistry, removal model.
Global Proteome Analysis of Leptospira interrogans
Technology Transfer Automated Retrieval System (TEKTRAN)
Comparative global proteome analyses were performed on Leptospira interrogans serovar Copenhageni grown under conventional in vitro conditions and those mimicking in vivo conditions (iron limitation and serum presence). Proteomic analyses were conducted using iTRAQ and LC-ESI-tandem mass spectrometr...
Sensitivity to environmental properties in globally averaged synthetic spectra of Earth
NASA Astrophysics Data System (ADS)
Tinetti, G.; Meadows, V. S.; Crisp, D.; Fong, W.; Velusamy, T.; Fishbein, E.
2003-12-01
We are using computer models to explore the observational sensitivity to changes in atmospheric and surface properties, and the detectability of biosignatures, in the globally averaged spectrum of the Earth. Using AIRS (Atmospheric Infrared Sounder) data as input for atmospheric and surface properties, we have generated spatially resolved high-resolution synthetic spectra using the SMART radiative transfer model (developed by D. Crisp), for a variety of conditions, from the UV to the far-IR (beyond the range of current Earth-based satellite data). We have then averaged over the visible disk for a number of different viewing geometries to quantify the sensitivity to surface types and atmospheric features as a function of viewing geometry, and spatial and spectral resolution. These results have been processed with an instrument simulator to improve our understanding of the detectable characteristics of Earth-like planets as viewed by the first (and probably second) generation extrasolar terrestrial planet detection and characterization missions (Terrestrial Planet Finder/Darwin and Life Finder). This model can also be used to analyze Earth-shine data for detectability of planetary characteristics in disk-averaged spectra.
NASA Astrophysics Data System (ADS)
Hollingsworth, J. L.; Young, R. E.; Schubert, G.; Covey, C.; Grossman, A. S.
2007-03-01
A 3D global circulation model is adapted to the atmosphere of Venus to explore the nature of the planet's atmospheric superrotation. The model employs the full meteorological primitive equations and simplified forms for diabatic and other nonconservative forcings. It is therefore economical for performing very long simulations. To assess circulation equilibration and the occurrence of atmospheric superrotation, the climate model is run for 10,000-20,000 day integrations at 4° × 5° latitude-longitude horizontal resolution, and 56 vertical levels (denoted L56). The sensitivity of these simulations to imposed Venus-like diabatic heating rates, momentum dissipation rates, and various other key parameters (e.g., near-surface momentum drag), in addition to model configuration (e.g., low versus high vertical domain and number of atmospheric levels), is examined. We find equatorial superrotation in several of our numerical experiments, but the magnitude of superrotation is often less than observed. Further, the meridional structure of the mean zonal overturning (i.e., Hadley circulation) can consist of numerous cells which are symmetric about the equator and whose depth scale appears sensitive to the number of vertical layers imposed in the model atmosphere. We find that when realistic diabatic heating is imposed in the lowest several scale heights, only extremely weak atmospheric superrotation results.
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis, where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks: (1) since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems); and (2) since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
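Of the sampling-based methods surveyed above, Latin Hypercube sampling is the simplest to illustrate. The following is a minimal generic sketch in NumPy, not tied to any of the reviewed systems: each axis of the unit hypercube is split into equal strata, one point is drawn per stratum, and the strata are paired randomly across dimensions.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin Hypercube sample on the unit hypercube [0, 1]^d.

    Each axis is split into n_samples equal strata; exactly one point
    falls in each stratum, and the strata are paired randomly across
    dimensions.
    """
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=(n_samples, n_dims))           # position inside each stratum
    strata = (np.arange(n_samples)[:, None] + u) / n_samples
    for j in range(n_dims):                             # decouple the axes
        rng.shuffle(strata[:, j])
    return strata

x = latin_hypercube(8, 2, rng=0)
# each column visits every 1/8-wide stratum exactly once
print(np.sort((x * 8).astype(int), axis=0))
```

Compared with plain random sampling, this stratification guarantees marginal coverage of every input at any sample size, which is why it appears so often in the sensitivity-analysis literature.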
NASA Technical Reports Server (NTRS)
Dong, Stanley B.
1989-01-01
An important consideration in the global local finite-element method (GLFEM) is the availability of global functions for the given problem. The role and mathematical requirements of these global functions in a GLFEM analysis of localized stress states in prismatic structures are discussed. A method is described for determining these global functions. Underlying this method are theorems due to Toupin and Knowles on strain energy decay rates, which are related to a quantitative expression of Saint-Venant's principle. It is mentioned that a mathematically complete set of global functions can be generated, so that any arbitrary interface condition between the finite element and global subregions can be represented. Convergence to the true behavior can be achieved with increasing global functions and finite-element degrees of freedom. Specific attention is devoted to mathematically two-dimensional and three-dimensional prismatic structures. Comments are offered on the GLFEM analysis of a NASA flat panel with a discontinuous stiffener. Methods for determining global functions for other effects are also indicated, such as steady-state dynamics and bodies under initial stress.
Global inventory of methane clathrate: sensitivity to changes in the deep ocean
NASA Astrophysics Data System (ADS)
Buffett, Bruce; Archer, David
2004-11-01
We present a mechanistic model for the distribution of methane clathrate in marine sediments, and use it to predict the sensitivity of the steady-state methane inventory to changes in the deep ocean. The methane inventory is determined by binning the seafloor area according to water depth, temperature, and O₂ concentration. Organic carbon rain to the seafloor is treated as a simple function of water depth, and carbon burial for each bin is estimated using a sediment diagenesis model called Muds [Glob. Biogeochem. Cycles 16 (2002)]. The predicted concentration of organic carbon is fed into a clathrate model [J. Geophys. Res. 108 (2003)] to calculate steady-state profiles of dissolved, frozen, and gaseous methane. We estimate the amount of methane in ocean sediments by multiplying the sediment column inventories by the corresponding binned seafloor areas. Our estimate of the methane inventory is sensitive to the efficiency of methane production from organic matter and to the rate of fluid flow within the sediment column. Preferred values for these parameters are taken from previous studies of both passive and active margins, yielding a global estimate of 3×10¹⁸ g of carbon (3000 Gton C) in clathrate and 2×10¹⁸ g (2000 Gton C) in methane bubbles. The predicted methane inventory decreases by 85% in response to 3 °C of warming. Conversely, the methane inventory increases by a factor of 2 if the O₂ concentration of the deep ocean decreases by 40 μM or carbon rain increases by 50% (due to an increase in primary production). Changes in sea level have a small effect. We use these sensitivities to assess the past and future state of the methane clathrate reservoir.
Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem
Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; Chowdhary, Kenny; Debusschere, Bert; Swiler, Laura P.; Eldred, Michael S.
2015-01-01
In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
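The variance-based ranking described above rests on Sobol indices. A minimal pick-freeze Monte Carlo estimator for the first-order indices can be sketched as follows; this is a generic illustration with a toy additive model, not the authors' implementation for the NASA challenge problem.

```python
import numpy as np

def first_order_sobol(model, d, n, rng=None):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.

    S_i = V[E(Y|X_i)] / V[Y], estimated with a Saltelli-style
    estimator from two independent input matrices A and B and d
    hybrid matrices AB_i (A with column i taken from B).
    """
    rng = np.random.default_rng(rng)
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA = np.apply_along_axis(model, 1, A)
    fB = np.apply_along_axis(model, 1, B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance V[Y]
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        fABi = np.apply_along_axis(model, 1, ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var
    return S

# Additive toy model Y = X1 + 2*X2 on [0,1]^2: analytic S = [0.2, 0.8]
print(first_order_sobol(lambda x: x[0] + 2.0 * x[1], d=2, n=20000, rng=1))
```

The estimator needs n(d + 2) model evaluations in total, which is why variance-based methods are usually reserved for cheap models or surrogates.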
FOCUS - An experimental environment for fault sensitivity analysis
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.
1992-01-01
FOCUS, a simulation environment for conducting fault-sensitivity analysis of chip-level designs, is described. The environment can be used to evaluate alternative design tactics at an early design stage. A range of user-specified faults is automatically injected at runtime, and their propagation to the chip I/O pins is measured through the gate and higher levels. A number of techniques for fault-sensitivity analysis are proposed and implemented in the FOCUS environment. These include transient impact assessment on latch, pin and functional errors, external pin error distribution due to in-chip transients, charge-level sensitivity analysis, and error propagation models to depict the dynamic behavior of latch errors. A case study of the impact of transient faults on a microprocessor-based jet-engine controller is used to identify the critical fault propagation paths, the module most sensitive to fault propagation, and the module with the highest potential for causing external errors.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of builtup structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
Global Analysis of Aerosol Properties Above Clouds
NASA Technical Reports Server (NTRS)
Waquet, F.; Peers, F.; Ducos, F.; Goloub, P.; Platnick, S. E.; Riedi, J.; Tanre, D.; Thieuleux, F.
2013-01-01
The seasonal and spatial variability of Aerosol Above Cloud (AAC) properties is derived from passive satellite data for the year 2008. A significant amount of aerosol is transported above liquid water clouds on the global scale. For particles in the fine mode (i.e., radius smaller than 0.3 μm), including both clear-sky and AAC retrievals increases the global mean aerosol optical thickness by 25% (±6%). The two main regions with man-made AAC are the tropical Southeast Atlantic, for biomass burning aerosols, and the North Pacific, mainly for pollutants. Man-made AAC are also detected over the Arctic during the spring. Mineral dust particles are detected above clouds within the so-called dust belt region (5°-40°N). AAC may cause a warming effect and bias the retrieval of cloud properties. This study will therefore help to better quantify the impacts of aerosols on clouds and climate.
Global spatial sensitivity of runoff to subsurface permeability using the active subspace method
NASA Astrophysics Data System (ADS)
Gilbert, James M.; Jefferson, Jennifer L.; Constantine, Paul G.; Maxwell, Reed M.
2016-06-01
Hillslope scale runoff is generated as a result of interacting factors that include water influx rate, surface and subsurface properties, and antecedent saturation. Heterogeneity of these factors affects the existence and characteristics of runoff. This heterogeneity becomes an increasingly relevant consideration as hydrologic models are extended and employed to capture greater detail in runoff generating processes. We investigate the impact of one type of heterogeneity - subsurface permeability - on runoff using the integrated hydrologic model ParFlow. Specifically, we examine the sensitivity of runoff to variation in three-dimensional subsurface permeability fields for scenarios dominated by either Hortonian or Dunnian runoff mechanisms. Ten thousand statistically consistent subsurface permeability fields are parameterized using a truncated Karhunen-Loève (KL) series and used as inputs to 48-h simulations of integrated surface-subsurface flow in an idealized 'tilted-v' domain. Coefficients of the spatial modes of the KL permeability fields provide the parameter space for analysis using the active subspace method. The analysis shows that for Dunnian-dominated runoff conditions the cumulative runoff volume is sensitive primarily to the first spatial mode, corresponding to permeability values in the center of the three-dimensional model domain. In the Hortonian case, runoff volume is sensitive to multiple smaller-scale spatial modes and the locus of that sensitivity is in the near-surface zone upslope from the domain outlet. Variation in runoff volume resulting from random heterogeneity configurations can be expressed as an approximately univariate function of the active variable, a weighted combination of spatial parameterization coefficients computed through the active subspace method. However, this relationship between the active variable and runoff volume is better defined for Dunnian runoff than for the Hortonian scenario.
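The active subspace computation at the heart of the study above can be sketched in a few lines: eigendecompose the empirical covariance of sampled output gradients, and keep the leading eigenvectors. The ridge function below is a toy example chosen so the answer is known analytically; it is not the ParFlow model.

```python
import numpy as np

def active_subspace(grads, k=1):
    """Active subspace from sampled gradients.

    Eigendecompose C = (1/N) * sum of grad grad^T; the leading k
    eigenvectors span the directions along which the output varies most.
    """
    C = grads.T @ grads / grads.shape[0]
    eigvals, eigvecs = np.linalg.eigh(C)        # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order[:k]]

# Toy ridge function f(x) = (w.x)^2: all variation lies along w, so the
# one-dimensional active subspace should recover the direction w.
rng = np.random.default_rng(0)
w = np.array([3.0, 4.0]) / 5.0
X = rng.normal(size=(500, 2))
grads = (2.0 * (X @ w))[:, None] * w            # analytic gradient samples
vals, W1 = active_subspace(grads, k=1)
print(np.abs(W1[:, 0] @ w))                     # ≈ 1: aligned with w
```

Projecting inputs onto `W1` gives the "active variable" against which the output can be plotted as an approximately univariate function, as done for runoff volume above.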
NASA Astrophysics Data System (ADS)
Piecuch, Christopher G.; Heimbach, Patrick; Ponte, Rui M.; Forget, Gaël
2015-12-01
Geothermal fluxes constitute a sizable fraction of the present-day Earth net radiative imbalance and corresponding ocean heat uptake. Model simulations of contemporary sea level that impose a geothermal flux boundary condition are becoming increasingly common. To quantify the impact of geothermal fluxes on model estimates of contemporary (1993-2010) sea level changes, two ocean circulation model experiments are compared. The two simulations are based on a global ocean state estimate, produced by the Estimating the Circulation and Climate of the Ocean (ECCO) consortium, and differ only with regard to whether geothermal forcing is applied as a boundary condition. Geothermal forcing raises the global-mean sea level trend by 0.11 mm yr⁻¹ in the perturbation experiment by suppressing a cooling trend present in the baseline solution below 2000 m. The imposed forcing also affects regional sea level trends. The Southern Ocean is particularly sensitive. In this region, anomalous heat redistribution due to geothermal fluxes results in steric height trends of up to ±1 mm yr⁻¹ in the perturbation experiment relative to the baseline simulation. Analysis of a passive tracer experiment suggests that the geothermal input itself is transported by horizontal diffusion, resulting in more thermal expansion over deeper ocean basins. Thermal expansion in the perturbation simulation gives rise to bottom pressure increase over shallower regions and decrease over deeper areas relative to the baseline run, consistent with mass redistribution expected for deep ocean warming. These results elucidate the influence of geothermal fluxes on sea level rise and global heat budgets in model simulations of contemporary ocean circulation and climate.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems at supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
ERIC Educational Resources Information Center
Clayton, Thomas
2004-01-01
In recent years, many scholars have become fascinated by a contemporary, multidimensional process that has come to be known as "globalization." Globalization originally described economic developments at the world level. More specifically, scholars invoked the concept in reference to the process of global economic integration and the seemingly…
Sensitivity and Uncertainty Analysis of the keff for VHTR fuel
NASA Astrophysics Data System (ADS)
Han, Tae Young; Lee, Hyun Chul; Noh, Jae Man
2014-06-01
For the uncertainty and sensitivity analysis of PMR200, designed as a VHTR at KAERI, MUSAD was implemented based on the deterministic method in connection with the DeCART/CAPP code system. The sensitivity of the multiplication factor was derived using classical perturbation theory, and the sensitivity coefficients for the individual cross sections were obtained by the adjoint method within the framework of the transport equation. Then, the uncertainty of the multiplication factor was calculated from the product of the covariance matrix and the sensitivity. For the verification of the implemented code, uncertainty analyses on the GODIVA benchmark and a PMR200 pin cell problem were carried out and the results were compared with the reference codes, TSUNAMI and McCARD. As a result, they are in good agreement except for the uncertainty due to the scattering cross section, which was calculated using different scattering moments.
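The "product of the covariance matrix and the sensitivity" is the standard first-order sandwich rule, which can be sketched generically. The two-group numbers below are purely hypothetical, chosen only to make the arithmetic concrete; they are not from the PMR200 analysis.

```python
import numpy as np

def keff_uncertainty(sensitivities, covariance):
    """Sandwich rule var(k)/k^2 = S^T C S: propagate relative
    cross-section covariances through first-order sensitivity
    coefficients S_i = (dk/k) / (dsigma_i/sigma_i)."""
    S = np.asarray(sensitivities, dtype=float)
    C = np.asarray(covariance, dtype=float)
    return float(S @ C @ S)

# Hypothetical two-group example (illustrative only): 5% and 3%
# relative cross-section uncertainties with correlation 0.2.
S = np.array([0.8, -0.3])
sig = np.array([0.05, 0.03])
C = np.outer(sig, sig) * np.array([[1.0, 0.2], [0.2, 1.0]])
rel_unc = np.sqrt(keff_uncertainty(S, C))
print(f"{100 * rel_unc:.2f}% relative uncertainty in k_eff")
```

In a real analysis, S comes from the adjoint calculation and C from an evaluated covariance library; the rule itself is just this quadratic form.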
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
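The PRCC computation described above (rank everything, partial out the other inputs, correlate the residuals) can be sketched generically. The data here are synthetic, not IMM outputs, and the simple ordinal ranking ignores ties.

```python
import numpy as np

def _ranks(a):
    """Ordinal ranks 1..n (ties broken by order; adequate for a sketch)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X with y.

    Ranks all variables, then correlates the residuals of input i and
    output y after removing the linear (rank) effect of the other inputs.
    """
    n, d = X.shape
    R = np.column_stack([_ranks(X[:, j]) for j in range(d)])
    ry = _ranks(y)
    out = np.empty(d)
    for i in range(d):
        others = np.delete(R, i, axis=1)
        Z = np.column_stack([np.ones(n), others])
        # residuals after a least-squares fit on the other ranked inputs
        res_i = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_i, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(400, 3))
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=400)
print(prcc(X, y))  # strongly positive, strongly negative, near zero
```

Because only ranks enter the calculation, the same code handles monotone nonlinear input-output relationships, which is the property that makes PRCC suitable for models like the IMM.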
NASA Technical Reports Server (NTRS)
Considine, David B.; Connell, Peter S.; Bergmann, Daniel J.; Rotman, Douglas A.; Strahan, Susan E.
2004-01-01
We use the Global Modeling Initiative chemistry and transport model to simulate the evolution of stratospheric ozone between 1995 and 2030, using boundary conditions consistent with the recent World Meteorological Organization ozone assessment. We compare the Antarctic ozone recovery predictions of two simulations, one driven by an annually repeated year of meteorological data from a general circulation model (GCM), the other using a year of output from a data assimilation system (DAS), to examine the sensitivity of Antarctic ozone recovery predictions to the characteristic dynamical differences between GCM- and DAS-generated meteorological data. Although the age of air in the Antarctic lower stratosphere differs by a factor of 2 between the simulations, we find little sensitivity of the 1995-2030 Antarctic ozone recovery between 350 and 650 K to the differing meteorological fields, particularly when the recovery is specified in mixing ratio units. Percent changes are smaller in the DAS-driven simulation compared to the GCM-driven simulation because of a surplus of Antarctic ozone in the DAS-driven simulation which is not consistent with observations. The peak ozone change between 1995 and 2030 in both simulations is approximately 20% lower than photochemical expectations, indicating that changes in ozone transport due to changing ozone gradients at 450 K between 1995 and 2030 constitute a small negative feedback. Total winter/spring ozone loss during the base year (1995) of both simulations and the rate of ozone loss during August and September is somewhat weaker than observed. This appears to be due to underestimates of Antarctic Cl_y at the 450 K potential temperature level.
Design sensitivity analysis of mechanical systems in frequency domain
NASA Astrophysics Data System (ADS)
Nalecz, A. G.; Wicher, J.
1988-02-01
A procedure for determining the sensitivity functions of mechanical systems in the frequency domain by use of a vector-matrix approach is presented. Two examples, one for a ground vehicle passive front suspension, and the second for a vehicle active suspension, illustrate the practical applications of parametric sensitivity analysis for redesign and modification of mechanical systems. The sensitivity functions depend on the frequency of the system's oscillations. They can be easily related to the system's frequency characteristics which describe the dynamic properties of the system.
The resolution sensitivity of the South Asian monsoon and Indo-Pacific in a global 0.35° AGCM
NASA Astrophysics Data System (ADS)
Johnson, Stephanie J.; Levine, Richard C.; Turner, Andrew G.; Martin, Gill M.; Woolnough, Steven J.; Schiemann, Reinhard; Mizielinski, Matthew S.; Roberts, Malcolm J.; Vidale, Pier Luigi; Demory, Marie-Estelle; Strachan, Jane
2016-02-01
The South Asian monsoon is one of the most significant manifestations of the seasonal cycle. It directly impacts nearly one third of the world's population and also has substantial global influence. Using 27-year integrations of a high-resolution atmospheric general circulation model (Met Office Unified Model), we study changes in South Asian monsoon precipitation and circulation when horizontal resolution is increased from approximately 200 km to 40 km at the equator (N96 to N512, 1.9° to 0.35°). The high resolution, integration length and ensemble size of the dataset make this the most extensive dataset used to evaluate the resolution sensitivity of the South Asian monsoon to date. We find a consistent pattern of JJAS precipitation and circulation changes as resolution increases, which includes a slight increase in precipitation over peninsular India, changes in the Indian and Indochinese orographic rain bands, increasing wind speeds in the Somali Jet, increasing precipitation over the Maritime Continent islands and decreasing precipitation over the northern Maritime Continent seas. To diagnose which resolution-related processes cause these changes, we compare them to published sensitivity experiments that change regional orography and coastlines. Our analysis indicates that improved resolution of the East African Highlands results in an improved representation of the Somali Jet, and further suggests that improved resolution of orography over Indochina and the Maritime Continent results in more precipitation over the Maritime Continent islands at the expense of reduced precipitation further north. We also evaluate the resolution sensitivity of monsoon depressions and lows, which contribute more precipitation over northeast India at higher resolution. We conclude that while increasing resolution at these scales does not solve the many monsoon biases that exist in GCMs, it has a number of small, beneficial impacts.
Global eradication of poliomyelitis: benefit-cost analysis.
Bart, K. J.; Foulds, J.; Patriarca, P.
1996-01-01
A benefit-cost analysis of the Poliomyelitis Eradication Initiative was undertaken to facilitate national and international decision-making with regard to financial support. The base case examined the net costs and benefits during the period 1986-2040; the model assumed differential costs for oral poliovirus vaccine (OPV) and vaccine delivery in industrialized and developing countries, and ignored all benefits aside from reductions in direct costs for treatment and rehabilitation. The model showed that the "break-even" point at which benefits exceeded costs was the year 2007, with a saving of US$ 13 600 million by the year 2040. Sensitivity analyses revealed only small differences in the break-even point and in the dollars saved, when compared with the base case, even with large variations in the target age group for vaccination, the proportion of case-patients seeking medical attention, and the cost of vaccine delivery. The technical feasibility of global eradication is supported by the availability of an easily administered, inexpensive vaccine (OPV), the epidemiological characteristics of poliomyelitis, and the successful experience in the Americas with elimination of wild poliovirus infection. This model demonstrates that the Poliomyelitis Eradication Initiative is economically justified. PMID:8653814
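The break-even logic of such a benefit-cost model reduces to accumulating discounted net benefits until they turn positive. A minimal sketch with invented cost and benefit streams (the real model's differential OPV and delivery costs for industrialized versus developing countries are not reproduced here):

```python
def break_even_year(start_year, costs, benefits, rate=0.0):
    """First calendar year in which cumulative discounted benefits
    exceed cumulative discounted costs, or None if never reached."""
    cum = 0.0
    for i, (c, b) in enumerate(zip(costs, benefits)):
        # discount each year's net benefit back to the start year
        cum += (b - c) / (1.0 + rate) ** i
        if cum > 0:
            return start_year + i
    return None
```

A sensitivity analysis in this setting is simply re-running the function while varying the cost and benefit streams, and checking how much the break-even year moves.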
Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment
NASA Technical Reports Server (NTRS)
Lee, Meemong; Bowman, Kevin
2014-01-01
Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as needed for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from it for seven observation scenarios and three instrument systems.
Parametric sensitivity for frequency response analysis of large-scale flows
NASA Astrophysics Data System (ADS)
Fosas de Pando, Miguel; Schmid, Peter
2014-11-01
When studying the frequency response of globally stable flows, direct and adjoint information from a resolvent analysis has to be computed. These computations involve a sizeable amount of effort, which suggests their reuse to identify sensitivity measures to changes in the governing parameters, base/mean flow fields, boundary conditions or other changes to the underlying linearized operator. We introduce and demonstrate a general technique to determine first-order changes in the frequency response induced by general changes to the governing equations. Examples will include changes to the Reynolds and Mach number for a tonal-noise airfoil problem, sensitivity to heating of a mixing layer past a splitter plate and closeness to global instability for a simplified model equation.
NASA Technical Reports Server (NTRS)
Adler, Robert F.; Huffman, George; Curtis, Scott; Bolvin, David; Nelkin, Eric; Einaudi, Franco (Technical Monitor)
2001-01-01
The 22 year, monthly, globally complete precipitation analysis of the World Climate Research Program's (WCRP/GEWEX) Global Precipitation Climatology Project (GPCP) and the four year (1997-present) daily GPCP analysis are described in terms of the data sets and analysis techniques used in their preparation. These analyses are then used to study global and regional variations and trends during the 22 years and the shorter-time scale events that constitute those variations. The GPCP monthly data set shows no significant trend in global precipitation over the twenty years, unlike the positive trend in global surface temperatures over the past century. The global trend analysis must be interpreted carefully, however, because the inhomogeneity of the data set makes detecting a small signal very difficult, especially over this relatively short period. The relation of global (and tropical) total precipitation and ENSO (El Nino and Southern Oscillation) events is quantified with no significant signal when land and ocean are combined. In terms of regional trends from 1979 to 2000, the tropics have a distribution of regional rainfall trends with an ENSO-like pattern showing features of both the El Nino and La Nina. This feature is related to a possible trend in the frequency of ENSO events (either El Nino or La Nina) over the past 20 years. Monthly anomalies of precipitation are related to ENSO variations with clear signals extending into middle and high latitudes of both hemispheres. The El Nino and La Nina mean anomalies are near mirror images of each other and when combined produce an ENSO signal with significant spatial continuity over large distances. A number of the features are shown to extend into high latitudes. Positive anomalies extend in the Southern Hemisphere from the Pacific southeastward across Chile and Argentina into the south Atlantic Ocean. In the Northern Hemisphere the counterpart feature extends across the southern U.S. and Atlantic Ocean into Europe. In the…
Analysis and visualization of global magnetospheric processes
Winske, D.; Mozer, F.S.; Roth, I.
1998-12-31
This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The purpose of this project is to develop new computational and visualization tools to analyze particle dynamics in the Earth's magnetosphere. These tools allow the construction of a global picture of particle fluxes, which requires only a small number of in situ spacecraft measurements as input parameters. The methods developed in this project have led to a better understanding of particle dynamics in the Earth's magnetotail in the presence of turbulent wave fields. They have also been used to demonstrate how large electromagnetic pulses in the solar wind can interact with the magnetosphere to increase the population of energetic particles and even form new radiation belts.
National health expenditures: a global analysis.
Murray, C. J.; Govindaraj, R.; Musgrove, P.
1994-01-01
As part of the background research to the World development report 1993: investing in health, an effort was made to estimate public, private and total expenditures on health for all countries of the world. Estimates could be found for public spending for most countries, but for private expenditure in many fewer countries. Regressions were used to predict the missing values of regional and global estimates. These econometric exercises were also used to relate expenditure to measures of health status. In 1990 the world spent an estimated US$ 1.7 trillion (1.7 × 10^12) on health, or $1.9 trillion (1.9 × 10^12) in dollars adjusted for higher purchasing power in poorer countries. This amount was about 60% public and 40% private in origin. However, as incomes rise, public health expenditure tends to displace private spending and to account for the increasing share of incomes devoted to health. PMID:7923542
Water Grabbing analysis at global scale
NASA Astrophysics Data System (ADS)
Rulli, M.; Saviori, A.; D'Odorico, P.
2012-12-01
"Land grabbing" is the acquisition of agricultural land by foreign governments and corporations, a phenomenon that has greatly intensified over the last few years as a result of the increase in food prices and biofuel demand. Land grabbing is inherently associated with an appropriation of freshwater resources that has never been investigated before. Here we provide a global assessment of the total grabbed land and water resources. Using process-based agro-hydrological models we estimate the rates of freshwater grabbing worldwide. We find that this phenomenon is occurring at alarming rates in all continents except Antarctica. The per capita volume of grabbed water often exceeds the water requirements for a balanced diet and would be sufficient to abate malnourishment in the grabbed countries. High rates of water grabbing are often associated with deforestation and the increase in water withdrawals for irrigation.
Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J
2015-05-15
Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e. Microtox®) or low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) and minimized biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. In addition, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L⁻¹ for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
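An EC50 like those reported above can be read off a measured dose-response curve. The sketch below does so by linear interpolation on log10 concentration; it is a generic illustration with invented data, not the paper's kinetic analysis.

```python
import numpy as np

def ec50(conc, effect):
    """Concentration producing 50% of the maximal observed effect,
    by linear interpolation of effect against log10(concentration).
    Assumes effect increases monotonically with concentration."""
    half = 0.5 * max(effect)
    lc = np.log10(conc)
    for i in range(1, len(effect)):
        if effect[i - 1] < half <= effect[i]:
            frac = (half - effect[i - 1]) / (effect[i] - effect[i - 1])
            return 10 ** (lc[i - 1] + frac * (lc[i] - lc[i - 1]))
    return None  # 50% effect never bracketed by the data
```

In practice a log-logistic (Hill) curve fit is preferred over interpolation, but the bracketing logic is the same.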
Imaging system sensitivity analysis with NV-IPM
NASA Astrophysics Data System (ADS)
Fanning, Jonathan; Teaney, Brian
2014-05-01
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and optimization problems with multiple, often conflicting, objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more…
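The Morris method used here (and in the convergence study described in the header) perturbs one factor at a time along random trajectories and averages the absolute elementary effects, giving the mu* screening measure. A minimal sketch on the unit hypercube, with a simplified trajectory design and function names of our own:

```python
import numpy as np

def morris_mu_star(f, k, r=10, delta=0.1, rng=None):
    """Mean absolute elementary effects (mu*) of a model f on [0,1]^k,
    estimated from r one-at-a-time trajectories with step size delta."""
    rng = rng or np.random.default_rng(0)
    ee = np.zeros((r, k))
    for t in range(r):
        # random start, leaving room for the +delta steps
        x = rng.uniform(0, 1 - delta, size=k)
        fx = f(x)
        for j in rng.permutation(k):
            x2 = x.copy()
            x2[j] += delta
            fx2 = f(x2)
            ee[t, j] = (fx2 - fx) / delta  # elementary effect of factor j
            x, fx = x2, fx2                # walk the trajectory forward
    return np.abs(ee).mean(axis=0)
```

Factors whose mu* falls below a screening threshold (the quantity whose choice the header study investigates) are treated as insensitive and frozen during calibration.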
Sensitivity analysis approach to multibody systems described by natural coordinates
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders automated modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed process of first-order direct sensitivity analysis, and the related solving strategy are provided on the basis of the modeling system. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful for reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization of multibody systems.
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. PMID:26017545
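The variance-based step can be illustrated with Saltelli's pick-and-freeze estimator for first-order Sobol indices, which is the reference Monte Carlo analysis the two-step approach is compared against (the paper's own second step uses gPCE instead). A minimal sketch with our own function names:

```python
import numpy as np

def sobol_first_order(f, k, n=4096, rng=None):
    """First-order Sobol indices of f on [0,1]^k via the Saltelli
    pick-and-freeze Monte Carlo estimator (two base matrices A and B,
    plus one column-swapped matrix per input)."""
    rng = rng or np.random.default_rng(1)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA = np.apply_along_axis(f, 1, A)
    fB = np.apply_along_axis(f, 1, B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # "freeze" column i from B into A
        fABi = np.apply_along_axis(f, 1, ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var
    return S
```

This costs n*(k+2) model runs, which is exactly the expense that motivates the paper's screening-then-metamodel shortcut for models with many parameters.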
Sensitivity analysis of dynamic biological systems with time-delays
2010-01-01
Background Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to solve the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. Conclusions Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex…
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty of the model parameters, which are best represented graphically. Two new methodological issues are introduced-a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.
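Plain direct differentiation, the baseline this work extends, integrates a tangent equation alongside the state. The sketch below applies it to a stable (non-chaotic) scalar ODE x' = -p x + 1 and the time average of x, where the tangent approach is well posed; the shadowing machinery needed for chaotic systems is not shown, and the example is ours.

```python
def sensitivity_direct(p, T=5.0, dt=1e-3):
    """Time average of x over [0, T] for x' = -p*x + 1, x(0) = 0, and its
    sensitivity d<x>/dp by direct differentiation: integrate the tangent
    equation s' = -p*s - x (s = dx/dp) alongside the state (forward Euler)."""
    x, s = 0.0, 0.0
    xint, sint = 0.0, 0.0
    for _ in range(int(T / dt)):
        xint += x * dt  # accumulate the time-averaged quantity ...
        sint += s * dt  # ... and its parametric derivative
        # simultaneous update: both right-hand sides use the old x
        x, s = x + dt * (-p * x + 1.0), s + dt * (-p * s - x)
    return xint / T, sint / T
```

Because the tangent recursion is exactly the derivative of the discrete Euler map, the computed sensitivity matches a finite-difference check on the discrete average to high accuracy.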
Bayesian sensitivity analysis of a nonlinear finite element model
NASA Astrophysics Data System (ADS)
Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.
2012-10-01
A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
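A Gaussian-process emulator in its simplest form is kernel regression with a squared-exponential covariance, returning both a cheap surrogate prediction and its own emulation uncertainty. A minimal sketch with fixed hyperparameters (a production GP would also train them by maximizing the marginal likelihood; the names here are ours):

```python
import numpy as np

def gp_fit_predict(Xtr, ytr, Xte, ell=0.3, sf=1.0, noise=1e-6):
    """GP regression posterior mean and variance at test points Xte,
    given training data (Xtr, ytr), using a squared-exponential kernel
    with length scale ell and signal std sf."""
    def kern(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)
    K = kern(Xtr, Xtr) + noise * np.eye(len(Xtr))  # jitter for stability
    Ks = kern(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    # predictive variance: prior variance minus the explained part
    var = sf**2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var
```

Once fitted from a handful of expensive model runs, the emulator can be sampled millions of times, which is what makes Monte Carlo uncertainty and sensitivity analysis affordable for large models.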
Global/local stress analysis of composite panels
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Knight, Norman F., Jr.
1989-01-01
A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
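The core of such a natural-language tolerance scheme is a text substitution: find every `value +/- tol` field in an input file and replace it with a random draw. A minimal sketch (the regex and function names are ours, not the paper's tool, and only the uniform-distribution case is shown):

```python
import random
import re

# matches fields like "5.25 +/- 0.01": a signed value and a tolerance
TOL = re.compile(r'(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)')

def blur(text, rng=None):
    """Replace each 'value +/- tol' field with a uniform draw from
    [value - tol, value + tol], producing one Monte Carlo realization
    of the input file. Fields without a tolerance pass through intact."""
    rng = rng or random.Random(0)
    def draw(m):
        v, t = float(m.group(1)), float(m.group(2))
        return format(rng.uniform(v - t, v + t), 'g')
    return TOL.sub(draw, text)
```

Because the substitution never parses the surrounding file format, the same routine works unchanged on input decks for entirely different simulation codes, which is the portability argument the paper makes.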
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
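Sensitivity of a basic reproduction number to a parameter is commonly measured by the normalized forward sensitivity index, (p / R0) * dR0/dp. A sketch using central finite differences on a hypothetical R0 = beta/gamma (a two-parameter stand-in, not the paper's five-compartment model):

```python
def normalized_sensitivity(f, params, name, h=1e-6):
    """Normalized forward sensitivity index of f with respect to the
    parameter `name`: (p / f) * df/dp, with df/dp estimated by a
    central finite difference with relative step h."""
    base = f(**params)
    up = dict(params, **{name: params[name] * (1 + h)})
    down = dict(params, **{name: params[name] * (1 - h)})
    dfdp = (f(**up) - f(**down)) / (2 * h * params[name])
    return params[name] / base * dfdp
```

An index of +1 means a 10% increase in the parameter produces a 10% increase in R0; a negative index identifies parameters (such as a recovery rate) whose increase reduces transmission.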
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA to a deeper level that lies in the topology of data. PMID:26368929
Sensitive analysis of a finite element model of orthogonal cutting
NASA Astrophysics Data System (ADS)
Brocail, J.; Watremez, M.; Dubar, L.
2011-01-01
This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with the Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. This numerical model of orthogonal cutting is validated by comparing these process variables to experimental and numerical results obtained by Filice et al. [1]. The model can be considered reliable enough for a qualitative analysis of the entry parameters related to the cutting process and the friction model. A sensitivity analysis is conducted with the finite element model on the main entry parameters (coefficients of the Johnson-Cook law and contact parameters), with two levels for each factor. This analysis has allowed the significant parameters to be identified, along with their margins.
Global processing takes time: A meta-analysis on local-global visual processing in ASD.
Van der Hallen, Ruth; Evers, Kris; Brewaeys, Katrien; Van den Noortgate, Wim; Wagemans, Johan
2015-05-01
What does an individual with autism spectrum disorder (ASD) perceive first: the forest or the trees? In spite of 30 years of research and influential theories like the weak central coherence (WCC) theory and the enhanced perceptual functioning (EPF) account, the interplay of local and global visual processing in ASD remains only partly understood. Research findings vary in indicating a local processing bias or a global processing deficit, and often contradict each other. We have applied a formal meta-analytic approach and combined 56 articles that tested about 1,000 ASD participants and used a wide range of stimuli and tasks to investigate local and global visual processing in ASD. Overall, results show no enhanced local visual processing nor a deficit in global visual processing. Detailed analysis reveals a difference in the temporal pattern of the local-global balance, that is, slow global processing in individuals with ASD. Whereas task-dependent interaction effects are obtained, gender, age, and IQ of either participant groups seem to have no direct influence on performance. Based on the overview of the literature, suggestions are made for future research.
NASA Astrophysics Data System (ADS)
Anenberg, S.; Talgo, K.; Dolwick, P.; Jang, C.; Arunachalam, S.; West, J.
2010-12-01
Black carbon (BC), a component of fine particulate matter (PM2.5) released during incomplete combustion, is associated with atmospheric warming and deleterious health impacts, including premature cardiopulmonary and lung cancer mortality. A growing body of literature suggests that controlling emissions may therefore have dual benefits for climate and health. Several studies have focused on quantifying the potential impacts of reducing BC emissions from various world regions and economic sectors on radiative forcing. However, the impacts of these reductions on human health have been less well studied. Here, we use a global chemical transport model (MOZART-4) and a health impact function to quantify the surface air quality and human health benefits of controlling BC emissions. We simulate a base case and several emission control scenarios, where anthropogenic BC emissions are reduced by half globally, individually in each of eight world regions, and individually from the residential, industrial, and transportation sectors. We also simulate a global 50% reduction of both BC and organic carbon (OC) together, since they are co-emitted and both are likely to be impacted by actual control measures. Meteorology and biomass burning emissions are for the year 2002 with anthropogenic BC and OC emissions for 2000 from the IPCC AR5 inventory. Model performance is evaluated by comparing to global surface measurements of PM2.5 components. Avoided premature mortalities are calculated using the change in PM2.5 concentration between the base case and emission control scenarios and a concentration-response factor for chronic mortality from the epidemiology literature.
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
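For a non-linear measurement model, the ranking the article refers to can be obtained from first-order variance-based (Sobol) indices, S_i = Var(E[Y|X_i]) / Var(Y). A brute-force double-loop sketch, with an illustrative two-input model and uniform input distributions (assumptions, not from the article); production codes use far more efficient estimators such as Saltelli sampling.

```python
import random
import statistics

def model(x1, x2):
    # Illustrative measurement model, mildly non-linear in x2.
    return x1 + x2 ** 2

def first_order_index(f, which, n_outer=200, n_inner=200, seed=1):
    """Estimate S_which = Var(E[Y|X_which]) / Var(Y) by a double Monte Carlo loop."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        fixed = rng.uniform(0, 1)              # freeze X_which at this value
        ys = []
        for _ in range(n_inner):
            free = rng.uniform(0, 1)
            ys.append(f(fixed, free) if which == 0 else f(free, fixed))
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

s1 = first_order_index(model, 0)   # analytic value ~0.48
s2 = first_order_index(model, 1)   # analytic value ~0.52
```

For a purely linear model with independent inputs, these indices reproduce exactly the relative contributions already delivered by the law of propagation of uncertainties, which is the article's point; the added value appears only when the model is non-linear near the best estimates.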
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
Drought-Net: A global network to assess terrestrial ecosystem sensitivity to drought
NASA Astrophysics Data System (ADS)
Smith, Melinda; Sala, Osvaldo; Phillips, Richard
2015-04-01
All ecosystems will be impacted to some extent by climate change, with forecasts for more frequent and severe drought likely to have the greatest impact on terrestrial ecosystems. Terrestrial ecosystems are known to vary dramatically in their responses to drought. However, the factors that may make some ecosystems respond more or less than others remain unknown, and such understanding is critical for predicting drought impacts at regional and continental scales. To effectively forecast terrestrial ecosystem responses to drought, ecologists must assess responses of a range of different ecosystems to drought, and then improve existing models by incorporating the factors that cause such variation in response. Traditional site-based research cannot provide this knowledge because experiments conducted at individual sites are often not directly comparable due to differences in the methodologies employed. Coordinated experimental networks, with identical protocols and comparable measurements, are ideally suited for comparative studies at regional to global scales. The US National Science Foundation-funded Drought-Net Research Coordination Network (www.drought-net.org) will advance understanding of the determinants of terrestrial ecosystem responses to drought by bringing together an international group of scientists to conduct two key activities over the next five years: 1) planning and coordinating new research using standardized measurements to leverage the value of existing drought experiments across the globe (Enhancing Existing Experiments, EEE), and 2) finalizing the design and facilitating the establishment of a new international network of coordinated drought experiments (the International Drought Experiment, IDE). The primary goals of these activities are to assess: (1) patterns of differential terrestrial ecosystem sensitivity to drought and (2) potential mechanisms underlying those patterns.
On global energy scenario, dye-sensitized solar cells and the promise of nanotechnology.
Reddy, K Govardhan; Deepak, T G; Anjusree, G S; Thomas, Sara; Vadukumpully, Sajini; Subramanian, K R V; Nair, Shantikumar V; Nair, A Sreekumaran
2014-04-21
One of the major problems that humanity has to face in the next 50 years is the energy crisis. The rising population, rapidly changing lifestyles, heavy industrialization and the changing landscape of cities have increased energy demands enormously. The present annual worldwide electricity consumption is 12 TW and is expected to become 24 TW by 2050, leaving a challenging deficit of 12 TW. The present energy scenario of using fossil fuels to meet the energy demand is unable to meet the increase in demand effectively, as these fossil fuel resources are non-renewable and limited. Also, they cause significant environmental hazards, like global warming and the associated climatic issues. Hence, there is an urgent necessity to adopt renewable sources of energy, which are eco-friendly and not extinguishable. Of the various renewable sources available, such as wind, tidal, geothermal, biomass, solar, etc., solar serves as the most dependable option. Solar energy is freely and abundantly available. Once installed, the maintenance cost is very low. It is eco-friendly, fitting safely into our society without any disturbance. Producing electricity from the Sun requires the installation of solar panels, which incurs a huge initial cost and requires large areas of land for installation. This is where nanotechnology comes into the picture and serves the purpose of increasing the efficiency to higher levels, thus bringing down the overall cost for energy production. Also, emerging low-cost solar cell technologies, e.g. thin film technologies and dye-sensitized solar cells (DSCs), help to replace the use of silicon, which is expensive. Again, nanotechnological implications can be applied in these solar cells to achieve higher efficiencies. This paper deals with the various available solar cells, choosing DSCs as the most appropriate ones. The nanotechnological implications which help to improve their performance are dealt with in detail. Additionally, the
NASA Technical Reports Server (NTRS)
Bowman, Kenneth P.; Sacks, Jerome; Chang, Yue-Fang
1993-01-01
Methods for the design and analysis of numerical experiments that are especially useful and efficient in multidimensional parameter spaces are presented. The analysis method, which is similar to kriging in the spatial analysis literature, fits a statistical model to the output of the numerical model. The method is applied to a fully nonlinear, global, equivalent-barotropic dynamical model. The statistical model also provides estimates for the uncertainty of predicted numerical model output, which can provide guidance on where in the parameter space to conduct further experiments, if necessary. The method can provide significant improvements in the efficiency with which numerical sensitivity experiments are conducted.
NASA Astrophysics Data System (ADS)
Fersch, Benjamin; Kunstmann, Harald
2014-05-01
Driving data and physical parametrizations can significantly impact the performance of regional dynamical atmospheric models in reproducing hydrometeorologically relevant variables. Our study addresses the water budget sensitivity of the Weather Research and Forecasting Model System WRF (WRF-ARW) with respect to two cumulus parametrizations (Kain-Fritsch, Betts-Miller-Janjić), two global driving reanalyses (ECMWF ERA-Interim and NCAR/NCEP NNRP), time-variant and invariant sea surface temperature, and optional gridded nudging. The skill of the global and downscaled models is evaluated against different gridded observations for precipitation, 2 m temperature and evapotranspiration, and against measured discharge time series on a monthly basis. Multi-year spatial deviation patterns and basin-aggregated time series are examined for four globally distributed regions with different climatic characteristics: Siberia, Northern and Western Africa, the Central Australian Plain, and the Amazonian tropics. The simulations cover the period from 2003 to 2006 with a horizontal mesh of 30 km. The results suggest that the water budgets of the regional atmospheric simulations are highly sensitive to the physical parametrizations and the driving data. While the global reanalyses tend to underestimate 2 m temperature by 0.2-2 K, the regional simulations are typically 0.5-3 K warmer than observed. Many configurations show difficulties in reproducing the water budget terms, e.g. with long-term mean precipitation biases of 150 mm month-1 and higher. Nevertheless, with the water budget analysis viable setups can be deduced for all four study regions.
Annual flood sensitivities to El Niño-Southern Oscillation at the global scale
Ward, Philip J.; Eisner, S.; Flörke, M.; Dettinger, Michael D.; Kummu, M.
2013-01-01
Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño–Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water – a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958–2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationship between ENSO and annual floods has been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.
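The statistical step behind such an assessment can be sketched as a rank correlation between an annual flood series (peak daily discharge per year) and an annual ENSO index. The eight-year series below are synthetic and constructed so the rank relation is perfectly inverse (floods intensify with La Niña); WaterGAP output and a real ENSO index (e.g. Niño3.4) would replace them. No tie handling is included.

```python
def rank(xs):
    """1-based ranks of a series without ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation, rho = 1 - 6*sum(d^2)/(n*(n^2-1)), no ties."""
    rx, ry = rank(xs), rank(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

enso = [-1.2, 0.3, 1.5, -0.7, 0.9, -1.8, 0.1, 2.0]               # negative = La Nina
floods = [410.0, 325.0, 210.0, 380.0, 290.0, 450.0, 330.0, 180.0]  # synthetic peaks
rho = spearman(enso, floods)
```

Non-stationarity, as reported in the abstract, would be probed by recomputing such a statistic over moving sub-periods of the record.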
Bayesian global analysis of neutrino oscillation data
NASA Astrophysics Data System (ADS)
Bergström, Johannes; Gonzalez-Garcia, M. C.; Maltoni, Michele; Schwetz, Thomas
2015-09-01
We perform a Bayesian analysis of current neutrino oscillation data. When estimating the oscillation parameters we find that the results generally agree with those of the χ² method, with some differences involving s₂₃² and CP-violating effects. We discuss the additional subtleties caused by the circular nature of the CP-violating phase, and how it is possible to obtain correlation coefficients with s₂₃². When performing model comparison, we find that there is no significant evidence for any mass ordering, any octant of s₂₃² or a deviation from maximal mixing, nor the presence of CP-violation.
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
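The 'sensitivity curve' construction described above can be sketched with a toy stand-in for the finite-element model: scale one hydrological factor by a range of multipliers, rerun the model, and record the sum of squared water-level residuals. The three-node model and calibrated values below are purely illustrative assumptions.

```python
def simulated_heads(transmissivity, recharge):
    # Hypothetical three-node toy model standing in for the finite-element
    # simulation; heads rise with recharge and fall with transmissivity.
    return [recharge / transmissivity * c for c in (10.0, 20.0, 30.0)]

# Pretend the calibrated factor values reproduce the observed water levels.
observed = simulated_heads(100.0, 50.0)

def sum_of_squares(factor_multiplier):
    """Sum of squared residuals after scaling transmissivity by a multiplier."""
    heads = simulated_heads(100.0 * factor_multiplier, 50.0)
    return sum((h - o) ** 2 for h, o in zip(heads, observed))

# One point per multiplier traces out the sensitivity curve for this factor.
curve = [(m, sum_of_squares(m)) for m in (0.5, 0.75, 1.0, 1.25, 1.5)]
```

A steep curve around the calibrated value (multiplier 1.0) flags a factor to which the flow system is sensitive and whose field characterization deserves priority; a flat curve flags an insensitive one.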
Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry
NASA Technical Reports Server (NTRS)
West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat
2016-01-01
The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.
Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs
Tong, C
2009-04-20
Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, an inadequate sample size can badly degrade the accuracy of the results. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features of the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
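The self-assessment idea can be sketched as: grow the sample until a bootstrap confidence interval on the sensitivity estimate is narrow enough. The sensitivity measure below is a simple input-output correlation standing in for the paper's stratified variance-decomposition estimators, and the test model is an illustrative assumption.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation, used here as a cheap proxy sensitivity measure."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

def converged_sensitivity(model, tol=0.05, n0=64, n_boot=200, seed=2):
    """Double the sample size until the 95% bootstrap CI width falls below tol."""
    rng = random.Random(seed)
    xs, ys = [], []
    n = n0
    while True:
        while len(xs) < n:                      # draw additional model runs
            x = rng.uniform(0, 1)
            ys.append(model(x, rng.uniform(0, 1)))
            xs.append(x)
        reps = []
        for _ in range(n_boot):                 # bootstrap the estimate
            idx = [rng.randrange(n) for _ in range(n)]
            reps.append(corr([xs[i] for i in idx], [ys[i] for i in idx]))
        reps.sort()
        width = reps[int(0.975 * n_boot)] - reps[int(0.025 * n_boot)]
        if width < tol:
            return corr(xs, ys), n
        n *= 2                                  # adaptive sample-size refinement

estimate, n_used = converged_sensitivity(lambda x1, x2: x1 + 0.3 * x2)
```

The paper's refinement of stratified and replicated orthogonal array designs plays the role of the naive doubling step here, reusing earlier runs while preserving the design structure.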
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are not accurate enough and have limited reference value, because their mathematical models are relatively simple, changes of the load and of the initial piston displacement are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. According to the structure parameters of the hydraulic drive unit, the working parameters, the fluid transmission characteristics and the measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is sufficient, by comparison of the characteristic curves of the experimental and simulated step responses under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from each state-vector time-history curve of the step response. The maximum value of the displacement variation percentage and the sum of the absolute values of the displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and the change rules are analyzed. Then the sensitivity
Recurrence quantification analysis of global stock markets
NASA Astrophysics Data System (ADS)
Bastos, João A.; Caiado, Jorge
2011-04-01
This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets is characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
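A minimal sketch of the machinery behind RQA: build a recurrence plot (here for a scalar series without delay embedding, an illustrative simplification) and compute the recurrence rate (RR), the fraction of point pairs whose states fall within a threshold distance eps. The series and eps are illustrative; full RQA adds embedding and line-based measures such as determinism.

```python
import math

def recurrence_matrix(series, eps):
    """Binary recurrence plot: R[i][j] = 1 if |x_i - x_j| <= eps."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(series, eps):
    """Fraction of recurrent point pairs (density of the recurrence plot)."""
    n = len(series)
    return sum(map(sum, recurrence_matrix(series, eps))) / (n * n)

periodic = [math.sin(0.4 * t) for t in range(100)]  # toy stand-in for an index series
rr = recurrence_rate(periodic, eps=0.1)
```

Sliding-window RQA, as used in the study around crisis periods, simply recomputes such measures over successive sub-series and tracks their evolution in time.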
Double Precision Differential/Algebraic Sensitivity Analysis Code
1995-06-02
DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
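Direct concurrent parametric sensitivity analysis can be sketched on the toy ODE dy/dt = -k·y: augment the state with s = dy/dk, which satisfies the forward sensitivity equation ds/dt = -k·s - y, and integrate both together. Classical explicit RK4 is used here for simplicity; DDASAC itself targets stiff implicit differential/algebraic systems, which this toy problem is not.

```python
def rhs(state, k):
    # State is (y, s) with s = dy/dk; the second equation is obtained by
    # differentiating dy/dt = -k*y with respect to k.
    y, s = state
    return (-k * y, -k * s - y)

def rk4_step(state, k, h):
    """One classical Runge-Kutta 4 step for the augmented system."""
    def nudge(base, slope, c):
        return tuple(b + c * v for b, v in zip(base, slope))
    k1 = rhs(state, k)
    k2 = rhs(nudge(state, k1, h / 2), k)
    k3 = rhs(nudge(state, k2, h / 2), k)
    k4 = rhs(nudge(state, k3, h), k)
    return tuple(x + h / 6 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

k, h, t_end = 0.8, 0.01, 2.0
state = (1.0, 0.0)                   # y(0) = 1, s(0) = dy(0)/dk = 0
for _ in range(round(t_end / h)):    # 200 fixed steps to t = 2
    state = rk4_step(state, k, h)
y, s = state                         # analytic: y = exp(-k*t), s = -t*exp(-k*t)
```

Integrating the sensitivities concurrently with the states, sharing the Jacobian and step-size control, is what makes the "direct concurrent" approach cheaper than rerunning the model with perturbed parameters.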
A sensitivity analysis for subverting randomization in controlled trials.
Marcus, S M
2001-02-28
In some randomized controlled trials, subjects with a better prognosis may be diverted into the treatment group. This subverting of randomization results in an unobserved non-compliance with the originally intended treatment assignment. Consequently, the estimate of treatment effect from these trials may be biased. This paper clarifies the determinants of the magnitude of the bias and gives a sensitivity analysis that associates the amount that randomization is subverted and the resulting bias in treatment effect estimation. The methods are illustrated with a randomized controlled trial that evaluates the efficacy of a culturally sensitive AIDS education video.
Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris
2015-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the "Butterfly Effect": the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the "shadow trajectory", a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Zhao, Huaying; Piszczek, Grzegorz; Schuck, Peter
2014-01-01
Isothermal titration calorimetry experiments can provide significantly more detailed information about molecular interactions when combined in global analysis. For example, global analysis can improve the precision of binding affinity and enthalpy, and of possible linkage parameters, even for simple bimolecular interactions, and greatly facilitate the study of multi-site and multi-component systems with competition or cooperativity. A pre-requisite for global analysis is the departure from the traditional binding model, including an ‘n’-value describing unphysical, non-integral numbers of sites. Instead, concentration correction factors can be introduced to account for either errors in the concentration determination or for the presence of inactive fractions of material. SEDPHAT is a computer program that embeds these ideas and provides a graphical user interface for the seamless combination of biophysical experiments to be globally modeled with a large number of different binding models. It offers statistical tools for the rigorous determination of parameter errors, correlations, as well as advanced statistical functions for global ITC (gITC) and global multi-method analysis (GMMA). SEDPHAT will also take full advantage of error bars of individual titration data points determined with the unbiased integration software NITPIC. The present communication reviews principles and strategies of global analysis for ITC and its extension to GMMA in SEDPHAT. We will also introduce a new graphical tool for aiding experimental design by surveying the concentration space and generating simulated data sets, which can be subsequently statistically examined for their information content. This procedure can replace the ‘c’-value as an experimental design parameter, which ceases to be helpful for multi-site systems and in the context of gITC. PMID:25477226
Breastfeeding policy: a globally comparative analysis
Raub, Amy; Earle, Alison
2013-01-01
Abstract Objective To explore the extent to which national policies guaranteeing breastfeeding breaks to working women may facilitate breastfeeding. Methods An analysis was conducted of the number of countries that guarantee breastfeeding breaks, the daily number of hours guaranteed, and the duration of guarantees. To obtain current, detailed information on national policies, original legislation as well as secondary sources on 182 of the 193 Member States of the United Nations were examined. Regression analyses were conducted to test the association between national policy and rates of exclusive breastfeeding while controlling for national income level, level of urbanization, female percentage of the labour force and female literacy rate. Findings Breastfeeding breaks with pay are guaranteed in 130 countries (71%) and unpaid breaks are guaranteed in seven (4%). No policy on breastfeeding breaks exists in 45 countries (25%). In multivariate models, the guarantee of paid breastfeeding breaks for at least 6 months was associated with an increase of 8.86 percentage points in the rate of exclusive breastfeeding (P < 0.05). Conclusion A greater percentage of women practise exclusive breastfeeding in countries where laws guarantee breastfeeding breaks at work. If these findings are confirmed in longitudinal studies, health outcomes could be improved by passing legislation on breastfeeding breaks in countries that do not yet ensure the right to breastfeed. PMID:24052676
Sensitivity analysis of infectious disease models: methods, advances and their application.
Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V
2013-09-01
Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has been slow to adopt advanced SA techniques that are capable of providing considerable insight beyond traditional methods. We investigate five global SA methods (scatter plots, the Morris and Sobol' methods, Latin hypercube sampling with partial rank correlation coefficients, and the sensitivity heat map method) and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that varied by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
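Of the methods surveyed above, the variance-based (Sobol') indices are the most computationally demanding but also the most informative. A minimal sketch of a Saltelli-style first-order Sobol' estimator follows, demonstrated on the standard Ishigami benchmark rather than an epidemic model:

```python
import numpy as np

# First-order Sobol' indices via a Saltelli-style pick-and-freeze estimator,
# on the Ishigami benchmark (inputs uniform on [-pi, pi]).
def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))     # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))     # total output variance

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                    # resample only input i
    S1.append(np.mean(fB * (ishigami(ABi) - fA)) / var)
print(np.round(S1, 3))                     # analytic values: ~0.314, ~0.442, 0.0
```

The third input has a first-order index of zero even though it matters through its interaction with the first input, which is exactly the kind of insight scatter plots or one-at-a-time methods can miss.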
Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code
NASA Astrophysics Data System (ADS)
Cabellos, O.; Sanz, J.; Rodríguez, A.; González, E.; Embid, M.; Alvarez, F.; Reyes, S.
2005-05-01
Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.
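The Monte Carlo propagation idea can be sketched generically. The toy model below (hypothetical one-group numbers, not the ACAB code or a real ADS) samples an uncertain activation cross section and propagates it through a single-nuclide depletion law to obtain a burnup uncertainty estimate:

```python
import numpy as np

# Monte Carlo propagation of cross-section (XS) uncertainty through a
# one-nuclide depletion model dN/dt = -sigma*phi*N. All numbers invented.
rng = np.random.default_rng(1)
sigma0 = 1.0e-24      # cm^2, nominal one-group XS (assumed)
rel_unc = 0.10        # 10% relative XS uncertainty (assumed)
phi = 1.0e15          # n/cm^2/s, constant flux (assumed)
t = 3.15e7            # s, roughly one year of irradiation

sigma = rng.normal(sigma0, rel_unc * sigma0, 50_000)   # sampled XS values
burnup_frac = 1.0 - np.exp(-sigma * phi * t)           # fraction transmuted

print(burnup_frac.mean(), burnup_frac.std())
```

The spread of `burnup_frac` is the burnup uncertainty induced by the XS uncertainty; with many nuclides and correlated XSs the same sampling loop runs over a full covariance matrix instead of a single scalar.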
A global optimization approach to multi-polarity sentiment analysis.
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From…
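The particle swarm step at the heart of approaches like PSOGO-Senti can be sketched generically. Below, a hand-rolled PSO searches a two-dimensional hyperparameter space (think SVM's log C and log gamma); a smooth surrogate with an invented optimum stands in for the cross-validated classification error, since the paper's actual objective and datasets are not available here.

```python
import numpy as np

# Minimal particle swarm optimization (PSO) over two hyperparameters.
# The objective is a hypothetical stand-in for cross-validated error,
# with its optimum placed at (1.0, -2.0) for illustration.
def objective(p):
    return (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2

rng = np.random.default_rng(0)
n_particles, n_iter, dim = 20, 100, 2
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()                                   # per-particle best
pbest_val = np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()             # swarm-wide best

w, c1, c2 = 0.7, 1.5, 1.5    # inertia and acceleration coefficients
for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(gbest)   # converges toward the objective's optimum
```

In the real approach each `objective` evaluation would train and cross-validate an SVM on the IG-selected feature subset, which is where the computational cost lies.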
Redox Sensitivities of Global Cellular Cysteine Residues under Reductive and Oxidative Stress.
Araki, Kazutaka; Kusano, Hidewo; Sasaki, Naoyuki; Tanaka, Riko; Hatta, Tomohisa; Fukui, Kazuhiko; Natsume, Tohru
2016-08-01
The protein cysteine residue is one of the amino acids most susceptible to oxidative modifications, frequently caused by oxidative stress. Several applications have enabled cysteine-targeted proteomics analysis with simultaneous detection and quantitation. In this study, we employed a quantitative approach using a set of iodoacetyl-based cysteine reactive isobaric tags (iodoTMT) and evaluated the transient cellular oxidation ratio of free and reversibly modified cysteine thiols under DTT and hydrogen peroxide (H2O2) treatments. DTT treatment (1 mM for 5 min) reduced most cysteine thiols, irrespective of their cellular localizations. It also caused some unique oxidative shifts, including for peroxiredoxin 2 (PRDX2), uroporphyrinogen decarboxylase (UROD), and thioredoxin (TXN), proteins reportedly affected by cellular reactive oxygen species production. Modest H2O2 treatment (50 μM for 5 min) did not cause global oxidations but instead had apparently reductive effects. Moreover, with H2O2, significant oxidative shifts were observed only in redox active proteins, like PRDX2, peroxiredoxin 1 (PRDX1), TXN, and glyceraldehyde 3-phosphate dehydrogenase (GAPDH). Overall, our quantitative data illustrated both H2O2- and reduction-mediated cellular responses, whereby while redox homeostasis is maintained, highly reactive thiols can potentiate the specific, rapid cellular signaling to counteract acute redox stress.
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
Leek, E Charles; Roberts, Mark; Oliver, Zoe J; Cristino, Filipe; Pegna, Alan J
2016-08-01
Here we investigated the time course underlying differential processing of local and global shape information during the perception of complex three-dimensional (3D) objects. Observers made shape matching judgments about pairs of sequentially presented multi-part novel objects. Event-related potentials (ERPs) were used to measure perceptual sensitivity to 3D shape differences in terms of local part structure and global shape configuration - based on predictions derived from hierarchical structural description models of object recognition. There were three types of different object trials in which stimulus pairs (1) shared local parts but differed in global shape configuration; (2) contained different local parts but shared global configuration or (3) shared neither local parts nor global configuration. Analyses of the ERP data showed differential amplitude modulation as a function of shape similarity as early as the N1 component between 146-215ms post-stimulus onset. These negative amplitude deflections were more similar between objects sharing global shape configuration than local part structure. Differentiation among all stimulus types was reflected in N2 amplitude modulations between 276-330ms. sLORETA inverse solutions showed stronger involvement of left occipitotemporal areas during the N1 for object discrimination weighted towards local part structure. The results suggest that the perception of 3D object shape involves parallel processing of information at local and global scales. This processing is characterised by relatively slow derivation of 'fine-grained' local shape structure, and fast derivation of 'coarse-grained' global shape configuration. We propose that the rapid early derivation of global shape attributes underlies the observed patterns of N1 amplitude modulations.
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
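The analytic eigenvalue sensitivity used in the second and third methods above is standard and easy to verify against finite differences. A sketch with an invented symmetric matrix follows; for a general complex matrix the formula uses left and right eigenvectors, d(lambda)/dp = y^H (dA/dp) x / (y^H x), which reduces to the form below when A is symmetric.

```python
import numpy as np

# Analytic sensitivity of the largest eigenvalue of a symmetric A(p):
# d(lambda)/dp = x^T (dA/dp) x with x the unit eigenvector.
# Illustrative matrices only, not the paper's aeroelastic system.
def max_eig_sensitivity(A, dA):
    w, V = np.linalg.eigh(A)     # symmetric: orthonormal eigenvectors
    x = V[:, -1]                 # eigenvector of the largest eigenvalue
    return x @ dA @ x

p, h = 2.0, 1e-6
A = lambda p: np.array([[2.0 + p, 1.0], [1.0, 3.0]])
dA = np.array([[1.0, 0.0], [0.0, 0.0]])   # dA/dp

analytic = max_eig_sensitivity(A(p), dA)
fd = (np.linalg.eigvalsh(A(p + h))[-1] - np.linalg.eigvalsh(A(p - h))[-1]) / (2 * h)
print(analytic, fd)   # the two estimates agree
```

This is exactly the pattern the abstract describes: a purely finite-difference baseline, and analytic expressions whose matrix derivatives may themselves be exact or finite-differenced.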
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
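The variance propagation SENSIT performs from sensitivity profiles and covariance matrices follows the first-order "sandwich rule" of generalized perturbation theory. A sketch with invented three-group numbers:

```python
import numpy as np

# Sandwich rule: the relative variance of an integral response R due to
# cross-section uncertainties is S^T C S, where S holds relative
# sensitivity coefficients (dR/R per dX/X) and C is the relative
# covariance matrix of the multigroup cross sections. Numbers invented.
S = np.array([0.8, -0.3, 0.1])             # sensitivity profile, 3 groups
C = np.array([[0.010, 0.004, 0.000],
              [0.004, 0.020, 0.002],
              [0.000, 0.002, 0.050]])      # relative covariance matrix

rel_var = S @ C @ S
rel_std = np.sqrt(rel_var)
print(rel_std)   # relative standard deviation of the response
```

The off-diagonal covariance terms matter: with an uncorrelated C (zero off-diagonals) the same S would give a different response uncertainty, which is why SENSIT accepts full covariance matrices for pairs of reaction cross sections.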
Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2016-01-01
This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with control strategies and the chances of exposed humans, infectious humans and the vector population becoming infected. Numerical simulations are carried out with the help of a fourth-order Runge-Kutta procedure. PMID:27505634
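Sensitivity indices of R0 of the kind reported above are usually the normalized forward indices (dR0/dp)(p/R0), which are dimensionless and directly comparable across parameters. A sketch with a toy R0 expression (a hypothetical form, not the paper's actual formula):

```python
# Normalized forward sensitivity index of R0 with respect to parameter p:
# Upsilon_p = (dR0/dp) * (p / R0). Toy R0 = beta*c/(mu + gamma), with all
# parameter names and values invented for illustration.
def R0(beta, c, mu, gamma):
    return beta * c / (mu + gamma)

def sensitivity_index(param, base, h=1e-6):
    p0 = base[param]
    hi = dict(base, **{param: p0 + h})      # central finite difference
    lo = dict(base, **{param: p0 - h})
    dR0 = (R0(**hi) - R0(**lo)) / (2 * h)
    return dR0 * p0 / R0(**base)

base = dict(beta=0.3, c=5.0, mu=0.02, gamma=0.1)
for p in base:
    print(p, round(sensitivity_index(p, base), 3))
```

Here the index for beta is exactly 1 (R0 scales linearly with it), so a 10% reduction in beta yields a 10% reduction in R0; that is the reasoning that singles out high-index parameters as control targets.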
Sensitivity analysis for improving nanomechanical photonic transducers biosensors
NASA Astrophysics Data System (ADS)
Fariña, D.; Álvarez, M.; Márquez, S.; Dominguez, C.; Lechuga, L. M.
2015-08-01
The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness in the range of 150 nm to 400 nm would both increase microcantilever bending during the biorecognition process and raise the optical sensitivity to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than that of other similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could corrupt the readout signal.
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Sensitivity analysis of transport modeling in a fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Merkel, Broder J.
2015-03-01
Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool to investigate fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells. At the site, a tracer test with NaCl was performed under natural gradient conditions. Observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics are employed for sensitivity analysis. The sensitivity analysis results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve. The results showed that the calibrated model fits the observed data set well.
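The inverse-modeling step (done above with UCODE2013) amounts to nonlinear least squares on the breakthrough curve. A self-contained sketch with a hypothetical one-dimensional advection-dispersion solution and synthetic "observed" data, standing in for the site's MODFLOW/MT3DMS model:

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import least_squares

# Hypothetical 1D advection-dispersion breakthrough curve (leading term of
# the Ogata-Banks solution); calibrate velocity v and dispersion D by
# fitting synthetic noisy observations. All numbers invented.
def btc(t, v, D, x=10.0):
    return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

t = np.linspace(0.5, 40.0, 80)
rng = np.random.default_rng(0)
obs = btc(t, v=0.5, D=0.8) + rng.normal(0, 0.01, t.size)   # synthetic data

res = least_squares(lambda p: btc(t, *p) - obs, x0=[0.3, 0.3],
                    bounds=([1e-3, 1e-3], [5.0, 5.0]))
print(res.x)   # recovered (v, D), close to the true (0.5, 0.8)
```

Sensitivity statistics of the kind the abstract mentions fall out of the same fit: the Jacobian of the residuals at the optimum shows which parameters (here v versus D) the breakthrough data actually constrain.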
Control of a mechanical aeration process via topological sensitivity analysis
NASA Astrophysics Data System (ADS)
Abdelwahed, M.; Hassine, M.; Masmoudi, M.
2009-06-01
The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.
Global/local finite element analysis for textile composites
NASA Technical Reports Server (NTRS)
Woo, Kyeongsik; Whitcomb, John
1993-01-01
Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy.
Global Analysis of Helicity PDFs: past - present - future
de Florian, D.; Stratmann, M.; Sassot, R.; Vogelsang, W.
2011-04-11
We discuss the current status of the DSSV global analysis of helicity-dependent parton densities. A comparison with recent semi-inclusive DIS data from COMPASS is presented, and constraints on the polarized strangeness density are examined in some detail.
Ecological network analysis on global virtual water trade.
Yang, Zhifeng; Mao, Xufeng; Zhao, Xu; Chen, Bin
2012-02-01
Global water interdependencies are likely to increase with growing virtual water trade. To address the issues of the indirect effects of water trade through the global economic circulation, we use ecological network analysis (ENA) to shed insight into the complicated system interactions. A global model of virtual water flow among agriculture and livestock production trade in 1995-1999 is also built as the basis for network analysis. Control analysis is used to identify the quantitative control or dependency relations. The utility analysis provides more indicators for describing the mutual relationship between two regions/countries by imitating the interactions in the ecosystem and distinguishes the beneficiary and the contributor of virtual water trade system. Results show control and utility relations can well depict the mutual relation in trade system, and direct observable relations differ from integral ones with indirect interactions considered. This paper offers a new way to depict the interrelations between trade components and can serve as a meaningful start as we continue to use ENA in providing more valuable implications for freshwater study on a global scale. PMID:22243129
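The control and utility analyses described above build on an "integral flow" matrix that folds all indirect trade paths into a Leontief-type inverse. A sketch with an invented three-region virtual water flow matrix:

```python
import numpy as np

# Ecological network analysis sketch: from a direct flow matrix F
# (virtual water traded between regions, hypothetical units) build the
# dimensionless direct matrix G and the integral matrix
# N = (I - G)^{-1} = I + G + G^2 + ..., which includes every indirect path.
F = np.array([[ 0.0, 30.0, 10.0],
              [ 5.0,  0.0, 20.0],
              [15.0,  5.0,  0.0]])                 # F[i, j]: flow from i to j
boundary_in = np.array([10.0, 20.0, 15.0])         # external (boundary) inputs
T = F.sum(axis=1) + boundary_in                    # region throughflows

G = F / T[:, None]                                 # throughflow-normalized direct flows
N = np.linalg.inv(np.eye(3) - G)                   # integral (direct + indirect) flows
print(np.round(N, 3))
```

The gap between `G` and `N` is precisely the point of the paper: direct observable trade relations differ from the integral ones once indirect interactions through the global circulation are counted.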
Global Analysis of Horizontal Gene Transfer in Fusarium verticillioides
Technology Transfer Automated Retrieval System (TEKTRAN)
The co-occurrence of microbes within plants and other specialized niches may facilitate horizontal gene transfer (HGT) affecting host-pathogen interactions. We recently identified fungal-to-fungal HGTs involving metabolic gene clusters. For a global analysis of HGTs in the maize pathogen Fusarium ve...
Globalization and International Student Mobility: A Network Analysis
ERIC Educational Resources Information Center
Shields, Robin
2013-01-01
This article analyzes changes to the network of international student mobility in higher education over a 10-year period (1999-2008). International student flows have increased rapidly, exceeding 3 million in 2009, and extensive data on mobility provide unique insight into global educational processes. The analysis is informed by three theoretical…
Teaching Reading: Mexico's Global Method of Structural Analysis.
ERIC Educational Resources Information Center
Orozco, Cecilio
In 1985, the Global Method of Structural Analysis (GMSA) for teaching reading was introduced to first and second graders in Mexico. Breaking away from the more traditional educational methods, it established a basis for more flexible education and effectively utilized critical thinking skills. The preparation stage (reading readiness) begins in…
Objective analysis of the ARM IOP data: method and sensitivity
Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H
1999-04-01
Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes, and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular-grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to upper-air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper-air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken during the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
Global land cover mapping: a review and uncertainty analysis
Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu
2014-01-01
Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered due to considerable amounts of uncertainties and inconsistencies. A thorough review of these global land cover projects including evaluating the sources of error and uncertainty is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed type classes for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for the future global mapping projects including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
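The eigenvalue derivatives this abstract is concerned with rest on a standard identity: for a simple eigenvalue of a non-hermitian matrix, dλ/dp = yᴴ(dA/dp)x / (yᴴx), where x and y are the right and left eigenvectors of that eigenvalue. A minimal sketch of this formula (the function name and the finite-difference check are illustrative, not the authors' code):

```python
import numpy as np

def eigval_sensitivity(A, dA):
    """First-order eigenvalue sensitivities dlambda/dp of a (possibly
    non-hermitian) matrix A, given dA = dA/dp.  Uses the classical
    formula dlambda = y^H dA x / (y^H x), with x and y the right and
    left eigenvectors of the same simple eigenvalue."""
    lam, X = np.linalg.eig(A)
    lamY, Y = np.linalg.eig(A.conj().T)   # right eigvecs of A^H = left eigvecs of A
    # pair each left eigenvector with its eigenvalue (eig() orders them freely)
    order = [int(np.argmin(np.abs(lamY - np.conj(l)))) for l in lam]
    Y = Y[:, order]
    sens = np.array([(Y[:, k].conj() @ dA @ X[:, k]) / (Y[:, k].conj() @ X[:, k])
                     for k in range(len(lam))])
    return lam, sens
```

A quick way to validate such a routine is to perturb A by h·dA and compare the eigenvalue shift against h times the computed sensitivity.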
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and handling incomplete data in effectiveness studies is now widely emphasized. However, most publications focus on randomized clinical trials (RCTs). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption that data are missing at random (MAR), a sensitivity analysis testing robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique offers with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random (MNAR) data in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations of the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's versions of the HAQ could significantly improve the predictive value of routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful for improving the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989
NASA Technical Reports Server (NTRS)
McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.
2012-01-01
This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and center-of-gravity location, and requirements on adaptor stiffnesses, while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis in which payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling-based techniques. For contrast, some MPP-based approaches are also examined.
Global Analysis, Interpretation, and Modelling: First Science Conference
NASA Technical Reports Server (NTRS)
Sahagian, Dork
1995-01-01
Topics considered include: Biomass of termites and their emissions of methane and carbon dioxide - A global database; Carbon isotope discrimination during photosynthesis and the isotope ratio of respired CO2 in boreal forest ecosystems; Estimation of methane emission from rice paddies in mainland China; Climate and nitrogen controls on the geography and timescales of terrestrial biogeochemical cycling; Potential role of vegetation feedback in the climate sensitivity of high-latitude regions - A case study at 6000 years B.P.; Interannual variation of carbon exchange fluxes in terrestrial ecosystems; and Variations in modeled atmospheric transport of carbon dioxide and the consequences for CO2 inversions.
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
NASA Technical Reports Server (NTRS)
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields-of-view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometer region has, to date, been selected for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior-estimate component and developed a channel ranking system to optimize the channels and the number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 retrievals as well as other retrievals, and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method with the new channel ranking system on CO2 retrieval.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
NASA Astrophysics Data System (ADS)
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
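The core idea of distributing local sensitivity analysis across the parameter space can be sketched generically: at each sampled parameter set, finite-difference gradients are combined with prior parameter variances into a local, variance-normalized first-order index per parameter. The toy two-parameter model below stands in for the hydrologic models of the paper; all names are illustrative:

```python
import numpy as np

def delsa_indices(model, samples, prior_var, h=1e-6):
    """Sketch of DELSA-style local sensitivity indices: at each sampled
    parameter set, central finite-difference gradients are weighted by
    prior parameter variances and normalized so each row sums to one,
    giving the local share of output variance per parameter.
    `model` maps a 1-D parameter vector to a scalar output."""
    samples = np.asarray(samples, float)
    n, p = samples.shape
    S = np.empty((n, p))
    for i, theta in enumerate(samples):
        g = np.empty(p)
        for j in range(p):
            step = np.zeros(p)
            step[j] = h
            g[j] = (model(theta + step) - model(theta - step)) / (2 * h)
        contrib = g ** 2 * prior_var
        S[i] = contrib / contrib.sum()
    return S
```

Evaluating such indices over many sampled parameter sets yields the distribution of parameter importance that distinguishes this approach from a single global summary.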
Global analysis of large-scale chemical and biological experiments
Root, David E; Kelley, Brian P
2005-01-01
Research in the life sciences is increasingly dominated by high-throughput data collection methods that benefit from a global approach to data analysis. Recent innovations that facilitate such comprehensive analyses are highlighted. Several developments enable the study of the relationships between newly derived experimental information, such as biological activity in chemical screens or gene expression studies, and prior information, such as physical descriptors for small molecules or functional annotation for genes. The way in which global analyses can be applied to both chemical screens and transcription profiling experiments using a set of common machine learning tools is discussed. PMID:12058610
Sensitivity analysis of fine sediment models using heterogeneous data
NASA Astrophysics Data System (ADS)
Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.
2012-04-01
Sediments play an important role in many aquatic systems. Their transportation and deposition has significant implication on morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important in drawing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities do affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. Accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the
Floquet theoretic approach to sensitivity analysis for periodic systems
NASA Astrophysics Data System (ADS)
Larter, Raima
1986-12-01
The mathematical relationship between sensitivity analysis and Floquet theory is explored. The former technique has been used in recent years to study the parameter sensitivity of numerical models in chemical kinetics, scattering theory, and other problems in chemistry. In the present work, we derive analytical expressions for the sensitivity coefficients for models of oscillating chemical reactions. These reactions have been the subject of increased interest in recent years because of their relationship to fundamental biological problems, such as development, and because of their similarity to related phenomena in fields such as hydrodynamics, plasma physics, meteorology, geology, etc. The analytical form of the sensitivity coefficients derived here can be used to determine the explicit time dependence of the initial transient and any secular term. The method is applicable to unstable as well as stable oscillations and is illustrated by application to the Brusselator and to a three variable model due to Hassard, Kazarinoff, and Wan. It is shown that our results reduce to those previously derived by Edelson, Rabitz, and others in certain limits. The range of validity of these formerly derived expressions is thus elucidated.
Species sensitivity analysis of heavy metals to freshwater organisms.
Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou
2015-10-01
Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distribution (SSD) curves for vertebrates and invertebrates were constructed separately with a log-logistic model. Comprehensive comparisons of the sensitivities of species at different trophic levels to the six heavy metals were performed. The results indicated that invertebrate taxa exhibited higher sensitivity to each heavy metal than vertebrates. With respect to the same taxa, however, Cu had the most adverse effect on vertebrates, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrates and fish were complicated, as the SSD curves of Pb intersected those of other heavy metals, while the SSD curves of Pb constructed from all species no longer crossed the others. The hazardous concentrations for 5% of species (HC5) were derived to determine the concentration protecting 95% of species. The HC5 values of the six heavy metals were in the descending order Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in the opposite order. Moreover, potential affected fractions were calculated to assess the ecological risks of the heavy metals at selected concentrations. Evaluating the sensitivities of species at various trophic levels and analyzing the toxicity of heavy metals are necessary steps prior to the derivation of water quality criteria and further environmental protection.
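The SSD-to-HC5 computation follows a standard recipe: fit a log-logistic distribution to species-level toxicity values (equivalently, a logistic fit on log10-transformed concentrations) and take its 5th percentile. A hedged sketch with made-up LC50 values, not the paper's dataset:

```python
import numpy as np
from scipy import stats

def hc5_loglogistic(lc50s):
    """HC5 from a log-logistic species sensitivity distribution:
    fit a logistic distribution to log10-transformed toxicity values,
    then back-transform its 5th percentile to concentration units."""
    x = np.log10(np.asarray(lc50s, float))
    loc, scale = stats.logistic.fit(x)
    return 10 ** stats.logistic.ppf(0.05, loc=loc, scale=scale)
```

By construction the HC5 sits in the lower tail of the fitted distribution, below the bulk of the species' toxicity values.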
Sensitivity Analysis of a Pharmacokinetic Model of Vaginal Anti-HIV Microbicide Drug Delivery.
Jarrett, Angela M; Gao, Yajing; Hussaini, M Yousuff; Cogan, Nicholas G; Katz, David F
2016-05-01
Uncertainties in parameter values in microbicide pharmacokinetics (PK) models confound the models' use in understanding the determinants of drug delivery and in designing and interpreting dosing and sampling in PK studies. A global sensitivity analysis (Sobol' indices) was performed for a compartmental model of the pharmacokinetics of gel delivery of tenofovir to the vaginal mucosa. The model's parameter space was explored to quantify model output sensitivities to parameters characterizing properties for the gel-drug product (volume, drug transport, initial loading) and host environment (thicknesses of the mucosal epithelium and stroma and the role of ambient vaginal fluid in diluting gel). Greatest sensitivities overall were to the initial drug concentration in gel, gel-epithelium partition coefficient for drug, and rate constant for gel dilution by vaginal fluid. Sensitivities for 3 PK measures of drug concentration values were somewhat different than those for the kinetic PK measure. Sensitivities in the stromal compartment (where tenofovir acts against host cells) and a simulated biopsy also depended on thicknesses of epithelium and stroma. This methodology and results here contribute an approach to help interpret uncertainties in measures of vaginal microbicide gel properties and their host environment. In turn, this will inform rational gel design and optimization. PMID:27012224
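The Sobol' indices used in this analysis can be estimated by pick-freeze Monte Carlo sampling. A minimal sketch of a Saltelli-style first-order estimator with inputs uniform on [0, 1] (a generic illustration, not the paper's pharmacokinetic model; names are illustrative):

```python
import numpy as np

def sobol_first_order(f, d, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of Sobol' first-order indices
    for a model f with d independent inputs uniform on [0, 1].
    f maps an (n, d) array of input samples to n scalar outputs."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = f(A), f(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # resample only input i
        S[i] = np.mean(yB * (f(ABi) - yA)) / var   # Saltelli (2010) estimator
    return S
```

For a linear model with independent inputs the indices have a closed form, which makes a convenient sanity check.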
Low global sensitivity of metabolic rate to temperature in calcified marine invertebrates.
Watson, Sue-Ann; Morley, Simon A; Bates, Amanda E; Clark, Melody S; Day, Robert W; Lamare, Miles; Martin, Stephanie M; Southgate, Paul C; Tan, Koh Siang; Tyler, Paul A; Peck, Lloyd S
2014-01-01
Metabolic rate is a key component of energy budgets that scales with body size and varies with large-scale environmental geographical patterns. Here we conduct an analysis of standard metabolic rates (SMR) of marine ectotherms across a 70° latitudinal gradient in both hemispheres that spanned collection temperatures of 0-30 °C. To account for latitudinal differences in the size and skeletal composition between species, SMR was mass normalized to that of a standard-sized (223 mg) ash-free dry mass individual. SMR was measured for 17 species of calcified invertebrates (bivalves, gastropods, urchins and brachiopods), using a single consistent methodology, including 11 species whose SMR was described for the first time. SMR of 15 out of 17 species had a mass-scaling exponent between 2/3 and 1, with no greater support for a 3/4 rather than a 2/3 scaling exponent. After accounting for taxonomy and variability in parameter estimates among species using variance-weighted linear mixed effects modelling, temperature sensitivity of SMR had an activation energy (Ea) of 0.16 eV for both Northern and Southern Hemisphere species, which was lower than predicted under the metabolic theory of ecology (Ea 0.2-1.2 eV). Northern Hemisphere species, however, had a higher SMR at each habitat temperature, but a lower mass-scaling exponent relative to SMR. Evolutionary trade-offs that may be driving differences in metabolic rate (such as metabolic cold adaptation of Northern Hemisphere species) will have important impacts on species' abilities to respond to changing environments. PMID:24036933
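The activation energy quoted here comes from an Arrhenius-style relationship: regressing ln(rate) on 1/(kT) yields a slope of −Ea. A toy sketch with synthetic data, not the study's SMR measurements:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(temps_c, rates):
    """Arrhenius-style fit behind an activation-energy estimate:
    regress ln(rate) on 1/(kT) and return Ea in eV (minus the slope)."""
    T = np.asarray(temps_c, float) + 273.15
    slope, _ = np.polyfit(1.0 / (K_B * T), np.log(rates), 1)
    return -slope
```

Applied to rates generated with a known Ea, the fit recovers that value exactly, since the relationship is linear in the transformed variables.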
Stability investigations of airfoil flow by global analysis
NASA Technical Reports Server (NTRS)
Morzynski, Marek; Thiele, Frank
1992-01-01
As the result of a global, non-parallel flow stability analysis, a single value of the disturbance growth rate and its corresponding frequency is obtained. This complex value characterizes the stability of the whole flow configuration and is not referred to any particular flow pattern. The global analysis ensures that all the flow elements (wake, boundary layer and shear layer) are taken into account. The physical phenomena connected with the wake instability are properly reproduced by the global analysis. This enhances the investigation of the instability of any 2-D flow, including those in which boundary layer instability effects are known to be of dominating importance. Assuming a fully 2-D disturbance form, the global linear stability problem is formulated. The system of partial differential equations is solved for the eigenvalues and eigenvectors. The equations, written in the pure stream function formulation, are discretized via FDM using a curvilinear coordinate system. The complex eigenvalues and corresponding eigenvectors are evaluated by an iterative method. The investigations performed for various Reynolds numbers emphasize that the wake instability develops into the Karman vortex street. This phenomenon is shown to be connected with the first mode obtained from the non-parallel flow stability analysis. The higher modes reflect different physical phenomena, for example Tollmien-Schlichting waves, which originate in the boundary layer and tend to emerge as instabilities at growing Reynolds numbers. The investigations are carried out for a circular cylinder, an oblong ellipse and an airfoil. It is shown that the onset of the wake instability, the waves in the boundary layer, and the shear layer instability are different solutions of the same eigenvalue problem, formulated using the non-parallel theory. The analysis offers considerable potential as a generalization of the methods used until now for stability analysis.
Sensitivity analysis of the GNSS derived Victoria plate motion
NASA Astrophysics Data System (ADS)
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
estimated trend (Williams 2003, Langbein 2012). Finally, our preferable angular velocity estimation is used to evaluate the consequences on the kinematics of the Victoria block, namely the magnitude and azimuth of the relative motions with respect to the Nubia and Somalia plates and their tectonic implications. References Agnew, D. C. (2013). Realistic simulations of geodetic network data: The Fakenet package, Seismol. Res. Lett., 84 , 426-432, doi:10.1785/0220120185. Blewitt, G. & Lavallee, D., (2002). Effect of annual signals on geodetic velocity, J. geophys. Res., 107(B7), doi:10.1029/2001JB000570. Bos, M.S., R.M.S. Fernandes, S. Williams, L. Bastos (2012) Fast Error Analysis of Continuous GNSS Observations with Missing Data, Journal of Geodesy, doi: 10.1007/s00190-012-0605-0. Bos, M.S., L. Bastos, R.M.S. Fernandes, (2009). The influence of seasonal signals on the estimation of the tectonic motion in short continuous GPS time-series, J. of Geodynamics, j.jog.2009.10.005. Fernandes, R.M.S., J. M. Miranda, D. Delvaux, D. S. Stamps and E. Saria (2013). Re-evaluation of the kinematics of Victoria Block using continuous GNSS data, Geophysical Journal International, doi:10.1093/gji/ggs071. Langbein, J. (2012). Estimating rate uncertainty with maximum likelihood: differences between power-law and flicker-random-walk models, Journal of Geodesy, Volume 86, Issue 9, pp 775-783, Williams, S. D. P. (2003). Offsets in Global Positioning System time series, J. Geophys. Res., 108, 2310, doi:10.1029/2002JB002156, B6.
Sensitivity-analysis techniques: self-teaching curriculum
Iman, R.L.; Conover, W.J.
1982-06-01
This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) use of the Stepwise Regression Program (Iman et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
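The first module of the curriculum, Latin hypercube sampling, stratifies each input into n equal-probability bins and draws one value per bin, with bins paired across dimensions by independent random permutations. A minimal sketch in modern NumPy, not the 1980 SAND code:

```python
import numpy as np

def latin_hypercube(n, d, rng=None):
    """Minimal Latin hypercube sampler on [0, 1]^d: each of the n rows
    falls in a distinct one-of-n stratum in every dimension."""
    rng = np.random.default_rng(rng)
    strata = np.array([rng.permutation(n) for _ in range(d)]).T  # (n, d) bin indices
    return (rng.random((n, d)) + strata) / n     # one uniform draw inside each bin
```

The stratification guarantees that every marginal is covered evenly even for small n, which is the property that makes LHS attractive for expensive sensitivity studies.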
Analysis of frequency characteristics and sensitivity of compliant mechanisms
NASA Astrophysics Data System (ADS)
Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua
2016-07-01
Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of large-deformation compliant mechanisms are studied. First, the pseudo-rigid-body model under static and kinetic conditions is modified to make it more suitable for the dynamic analysis of compliant mechanisms. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of the ordinary compliant four-bar mechanism are established using analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of compliant mechanisms are analyzed, taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. The simulation results show that the dynamic characteristics of compliant mechanisms are relatively sensitive to structural dimensions, cross-section parameters, and material properties. The results are of theoretical significance and practical value for the structural optimization of compliant mechanisms, the improvement of their dynamic properties, and the expansion of their application range.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
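The stiff-ODE treatment that LSENS delegates to LSODE can be sketched with SciPy's BDF integrator on Robertson's classic stiff kinetics problem. This is a textbook example chosen for illustration, not one of LSENS's reaction models:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Robertson's stiff kinetics problem: three species whose rate
# constants span nine orders of magnitude, a standard stress test
# for implicit (BDF-type) integrators such as LSODE.
def robertson(t, y):
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
            0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
            3.0e7 * y2 ** 2]

sol = solve_ivp(robertson, (0.0, 1.0e5), [1.0, 0.0, 0.0],
                method="BDF", rtol=1e-8, atol=1e-10)

# Mass is conserved, so the mole fractions always sum to one.
print(sol.y[:, -1].sum())
```

An explicit method (e.g. `method="RK45"`) takes orders of magnitude more steps on this problem, which is why stiff chemical kinetics codes are built around implicit solvers.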
Life cycle assessment on biogas production from straw and its sensitivity analysis.
Wang, Qiao-Li; Li, Wei; Gao, Xiang; Li, Su-Jing
2016-02-01
This study investigates the overall environmental impacts and Global Warming Potentials (GWPs) of a straw-based biogas production process via the cradle-to-gate life cycle assessment (LCA) technique. Eco-indicator 99 (H) and IPCC 2007 GWP with three time horizons are utilized. The results indicate that the biogas production process is beneficial for the overall environment but harmful in terms of GWPs, and its harmful effect on GWPs strengthens with time. Use of gas-fired power that burns the self-produced natural gas (NG) can create a more sustainable process. Moreover, the sensitivity analysis indicated that total electricity consumption and the CO2 absorbents in the purification unit have the largest sensitivity to the environment. Hence, more effort should be directed at more efficient use of electricity and wiser selection of the CO2 absorbent. PMID:26649899
Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.
Wu, Yue; Nan, Bo; Chen, Liang
2014-01-01
3D braided composite joints are important components in CFRP trusses, with a significant influence on the reliability and lightweight design of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established, which accounts for strength failure. Finally, the effect of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints is analyzed by a global sensitivity analysis method. The results show that the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α are sensitive to the ultimate bearing capacity N. PMID:25121121
Gao, Kuikui; Zhang, Xu; Feng, Enmin; Xiu, Zhilong
2014-04-21
In this paper, we establish a modified fourteen-dimensional nonlinear hybrid dynamic system with genetic regulation to describe microbial continuous culture, in which we consider three possible ways for glycerol to cross the cell membrane and one for 1,3-PD (passive diffusion and active transport). We then discuss the existence, uniqueness and continuous dependence of solutions, and the compactness of the solution set. We construct a global sensitivity analysis approach to reduce the number of kinetic parameters. In order to infer the most reasonable transport mechanism of glycerol, we propose a parameter identification model aimed at identifying the higher-sensitivity parameters and the transport mechanism of glycerol, which takes the robustness index of the intracellular substance together with the relative error between the experimental data and the computed values of the extracellular substance as its performance index. Finally, a parallel algorithm is applied to find the optimal transport mechanism of glycerol and the optimal parameters. PMID:24406809
Sensitivity Analysis of Hardwired Parameters in GALE Codes
Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.
2008-12-01
The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.
A sensitivity analysis of regional and small watershed hydrologic models
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.
1975-01-01
Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
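The kind of input-perturbation study described above can be sketched as a one-at-a-time (OAT) analysis. The toy water-balance model and all numbers below are invented for illustration; they are not the watershed models or the 46 inputs of the study:

```python
# One-at-a-time sensitivity: perturb each input by +10% and record the
# relative change in simulated streamflow. The model and its parameter
# values are hypothetical stand-ins for a continuous watershed model.

def runoff(precip, infil_cap, evap):
    """Toy water balance: flow = precipitation minus losses (mm)."""
    return max(precip - infil_cap - evap, 0.0)

base = {"precip": 100.0, "infil_cap": 30.0, "evap": 20.0}
base_flow = runoff(**base)

sensitivities = {}
for name in base:
    perturbed = dict(base)
    perturbed[name] *= 1.10          # +10% perturbation of one input
    sensitivities[name] = (runoff(**perturbed) - base_flow) / base_flow

print(sensitivities)
```

Ranking the inputs by the magnitude of these relative changes indicates which remotely sensed quantities must be measured most accurately.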
High derivatives for fast sensitivity analysis in linear magnetodynamics
Petin, P.; Coulomb, J.L.; Conraux, P.
1997-03-01
In this article, the authors present a method of sensitivity analysis using high derivatives and Taylor development. The principle is to find a polynomial approximation of the finite elements solution towards the sensitivity parameters. While presenting the method, they explain why this method is applicable with special parameters only. They applied it on a magnetodynamic problem, simple enough to be able to find the analytical solution with a formal calculus tool. They then present the implementation and the good results obtained with the polynomial, first by comparing the derivatives themselves, then by comparing the approximate solution with the theoretical one. After this validation, the authors present results on a real 2D application and they underline the possibilities of reuse in other fields of physics.
SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL
Crawford, C.; Edwards, Tommy; Wilmarth, Bill
2006-08-01
A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values for the influential factors was conducted; these intervals bound the levels of the factors expected during Tank 50 aggregations. The results of the sensitivity analysis were used to identify settings of the influential factors that yielded the largest predicted TPB degradation rate. These factor settings are therefore considered the ''worst-case'' scenario for the TPB degradation rate during Tank 50 aggregation, and as such they define the test conditions that should be studied in a waste qualification program whose dual purpose would be to investigate the introduction of Tank 48 material for aggregation in Tank 50 and to bound TPB degradation rates for such aggregations.
Biosphere dose conversion Factor Importance and Sensitivity Analysis
M. Wasiolek
2004-10-15
This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
NASA Astrophysics Data System (ADS)
Ogle, K.; Ryan, E.; Pendall, E. G.
2013-12-01
The loss of carbon from ecosystems (ecosystem respiration, Reco) is a dynamic process and temperature is a key factor governing short- and long-term Reco dynamics. The goal of this study is to evaluate the temperature sensitivity of Reco and to learn how it adjusts to environmental conditions such as temperature and experimental warming. To do this, we synthesized 5 years of Reco data generated by the Prairie Heating and CO2 Enrichment (PHACE) study that was conducted in a semi-arid grassland in southeastern Wyoming. The PHACE experiment consists of 6 treatments involving atmospheric CO2 (ambient, elevated), temperature (ambient, warmed), and soil water (ambient, shallow irrigation, deep irrigation). Thus, PHACE provided a unique opportunity to explore how the temperature response of Reco over daily, weekly, and longer time scales is governed by the experimental treatments and co-occurring environmental variation. We synthesized the Reco data using a semi-mechanistic temperature response model that involves a temperature sensitivity term (Eo) analogous to an energy of activation. We explored how Eo varies in response to the experimental treatments and current and antecedent soil water and temperature within a Bayesian framework that allowed us to incorporate a novel stochastic model for the antecedent variables. For example, antecedent temperature is modeled as a weighted average of past daily temperatures, and we estimated the unknown daily weights that quantify potential lag responses and time-scales over which past temperatures affect Eo. Thus, changes in Eo describe instantaneous responses of Reco to temperature, the antecedent temperature effects describe relatively short-term (days/weeks) acclimatization, and the effects of experimental warming describe longer term (months/years) acclimatization potential of Reco. Our analysis predicted that Eo can vary by a factor of 2 or more over the growing season. Although not significant, the trend was for Eo to be
A Global Optimization Approach to Multi-Polarity Sentiment Analysis
Li, Xinmiao; Li, Jing; Wu, Yukeng
2015-01-01
Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From
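A particle swarm optimizer of the kind PSOGO-Senti builds on can be sketched in a few lines. Here it minimizes a stand-in objective (the sphere function) rather than an SVM cross-validation error over feature dimensions and kernel parameters; all hyperparameter values are illustrative:

```python
import random

random.seed(0)

def sphere(x):
    """Stand-in objective; in PSOGO-Senti this would be the SVM
    cross-validation error for a given feature/parameter combination."""
    return sum(xi * xi for xi in x)

def pso(f, dim=2, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    # Initialize particle positions and velocities.
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

best, best_val = pso(sphere)
print(best_val)  # near zero for the sphere function
```

Swapping the objective for a cross-validated classifier score, and encoding feature-subset size and SVM parameters in the particle position, turns this sketch into the global-optimization loop the abstract describes.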
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
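The correlation-based part of such a sampling study can be sketched as follows. The model here is a made-up algebraic stand-in, not the BISON benchmark; it is chosen so one input acts linearly and one acts nonlinearly but monotonically, which is the case where Spearman (rank) correlation complements Pearson:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(42)

# Hypothetical stand-in for the fuels-performance model: the response
# depends linearly on x1 and nonlinearly (but monotonically) on x2.
n = 300                                  # same sample count as the study
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = 2.0 * x1 + np.exp(3.0 * x2)

for name, x in (("x1", x1), ("x2", x2)):
    p, _ = pearsonr(x, y)                # linear association
    s, _ = spearmanr(x, y)               # monotonic (rank) association
    print(f"{name}: Pearson={p:.2f}, Spearman={s:.2f}")
```

When an input drives the response through a strong nonlinearity, its Spearman coefficient stays high while the Pearson coefficient understates the dependence, which is why sampling-based studies typically report both.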
Kefford, Ben J.; Hickey, Graeme L.; Gasith, Avital; Ben-David, Elad; Dunlop, Jason E.; Palmer, Carolyn G.; Allan, Kaylene; Choy, Satish C.; Piscart, Christophe
2012-01-01
Salinity is a key abiotic property of inland waters; it has a major influence on biotic communities and is affected by many natural and anthropogenic processes. Salinity of inland waters tends to increase with aridity, and biota of inland waters may have evolved greater salt tolerance in more arid regions. Here we compare the sensitivity of stream macroinvertebrate species to salinity in a relatively wet region of France (Lorraine and Brittany) with that in three relatively arid regions: eastern Australia (Victoria, Queensland and Tasmania), South Africa (south-east of the Eastern Cape Province) and Israel, using an identical experimental method in all locations. The species whose salinity tolerance was tested were somewhat more salt tolerant in eastern Australia and South Africa than in France, with those in Israel being intermediate. However, by far the greatest source of variation in species sensitivity was between taxonomic groups (Order and Class) and not between the regions. We used a Bayesian statistical model to estimate the species sensitivity distributions (SSDs) for salinity in eastern Australia and France, adjusting for the assemblages of species in these regions. The assemblage in France was slightly more salinity sensitive than that in eastern Australia. We therefore suggest that regional salinity sensitivity is likely to depend most on the taxonomic composition of the respective macroinvertebrate assemblages. On this basis it would be possible to screen rivers globally for risk from salinisation. PMID:22567097
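As a simplified, non-Bayesian sketch of the SSD idea, one can fit a lognormal distribution to species tolerance values and read off the HC5, the concentration expected to affect 5% of species. The LC50-style salinity values below are invented for illustration, not the study's data:

```python
import math
import statistics

# Hypothetical salinity tolerance values (e.g. mS/cm) for eight taxa.
# These numbers are made up for the sketch.
lc50 = [5.2, 8.1, 12.4, 15.0, 21.3, 28.7, 35.2, 41.0]

# Fit a lognormal SSD: normal distribution on the log scale.
logs = [math.log(x) for x in lc50]
mu = statistics.mean(logs)
sigma = statistics.stdev(logs)

# HC5 = 5th percentile of the fitted SSD; z(0.05) = -1.6449 for the
# standard normal distribution.
hc5 = math.exp(mu - 1.6449 * sigma)
print(round(hc5, 2))
```

A river whose salinity exceeds this HC5 would be flagged as at risk for the assemblage the SSD was fitted to, which is the screening use the abstract suggests.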
Coral reefs are highly valued ecosystems that are currently imperiled. Although the value of coral reefs to human societies is only just being investigated and better understood, for many local and global economies coral reefs are important providers of ecosystem services that su...
Sensitivity of Simulated Global Ocean Carbon Flux Estimates to Forcing by Reanalysis Products
NASA Technical Reports Server (NTRS)
Gregg, Watson W.; Casey, Nancy W.; Rousseaux, Cecile S.
2015-01-01
Reanalysis products from MERRA, NCEP2, NCEP1, and ECMWF were used to force an established ocean biogeochemical model to estimate air-sea carbon fluxes (FCO2) and partial pressure of carbon dioxide (pCO2) in the global oceans. Global air-sea carbon fluxes and pCO2 were relatively insensitive to the choice of forcing reanalysis. All global FCO2 estimates from the model forced by the four different reanalyses were within 20% of in situ estimates (MERRA and NCEP1 were within 7%), and all models exhibited statistically significant positive correlations with in situ estimates across the 12 major oceanographic basins. Global pCO2 estimates were within 1% of in situ estimates with ECMWF being the outlier at 0.6%. Basin correlations were similar to FCO2. There were, however, substantial departures among basin estimates from the different reanalysis forcings. The high latitudes and tropics had the largest ranges in estimated fluxes among the reanalyses. Regional pCO2 differences among the reanalysis forcings were muted relative to the FCO2 results. No individual reanalysis was uniformly better or worse in the major oceanographic basins. The results provide information on the characterization of uncertainty in ocean carbon models due to choice of reanalysis forcing.
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs through time and in determining flow balances for the SDUs.
Sensitivity analysis of discrete structural systems: A survey
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.
1984-01-01
Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
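The adjoint variable technique named above can be illustrated on a toy linear static system K(p)u = f with scalar output J = c·u: one adjoint solve gives the sensitivity of J to any number of parameters. The 2-DOF spring-chain system below is invented for illustration:

```python
import numpy as np

# Adjoint sensitivity for K(p) u = f with output J = c . u.
# Differentiating K u = f gives dJ/dp = -lambda^T (dK/dp) u, where the
# adjoint vector solves K^T lambda = c.
def K(p):
    # Toy 2-DOF spring chain; p is the first spring stiffness (assumed).
    return np.array([[p + 2.0, -2.0],
                     [-2.0, 2.0]])

f = np.array([0.0, 1.0])      # load at the tip
c = np.array([0.0, 1.0])      # J = tip displacement
p = 3.0

u = np.linalg.solve(K(p), f)          # forward solve
lam = np.linalg.solve(K(p).T, c)      # single adjoint solve
dK_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
dJ_dp = -lam @ dK_dp @ u              # sensitivity of J to p

# Sanity check against a forward finite difference.
h = 1e-6
fd = (np.linalg.solve(K(p + h), f) @ c - u @ c) / h
print(dJ_dp, fd)
```

The payoff is in the scaling: with many design parameters and few outputs, one adjoint solve replaces one perturbed forward solve per parameter.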
Path-sensitive analysis for reducing rollback overheads
O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong
2014-07-22
A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.
Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations
NASA Astrophysics Data System (ADS)
Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej
2010-06-01
Modeling of blood flow with respect to the rheological parameters of blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed using the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs that control the blood flow to a significant extent. The paper is part of work on the identification of parameters controlling the process of clotting.
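A minimal sketch of Morris-style elementary effects follows. It is simplified relative to the full method (independent random base points with a fixed Δ rather than proper Morris trajectories), and the screened model is a toy linear function, not the Casson flow model:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy stand-in for one flow-simulation output on [0, 1]^3:
    x[1] matters most, x[2] not at all."""
    return x[0] + 5.0 * x[1] + 0.0 * x[2]

def elementary_effects(f, dim=3, trajectories=20, delta=0.25):
    """Simplified Morris screening: at each random base point, perturb
    one factor at a time by delta and record the absolute elementary
    effect; the mean per factor approximates the mu* statistic."""
    effects = [[] for _ in range(dim)]
    for _ in range(trajectories):
        x = rng.uniform(0.0, 1.0 - delta, dim)
        fx = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            effects[i].append(abs(f(xp) - fx) / delta)
    return [float(np.mean(e)) for e in effects]

mu_star = elementary_effects(model)
print(mu_star)
```

For this linear model the mean effects recover the coefficients, so the screening correctly ranks the second factor as dominant and flags the third as inactive.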
Predictability of global surface temperature by means of nonlinear analysis
NASA Astrophysics Data System (ADS)
Gimeno, L.; García, R.; Pacheco, J. M.; Hernández, E.; Ribera, P.
2001-01-01
The time series of annually averaged global surface temperature anomalies for the years 1856-1998 is studied through nonlinear time series analysis with the aim of estimating the predictability time. Detection of chaotic behaviour in the data indicates that there is some internal structure in the data; the data may be considered to be governed by a deterministic process and some predictability is expected. Several tests are performed on the series, with results indicating possible chaotic behaviour.
Transversity and Collins Fragmentation Functions: Towards a New Global Analysis
Anselmino, M.; Boglione, M.; Melis, S.; Prokudin, A.; D'Alesio, U.; Kotzinian, A.; Murgia, F.
2009-08-04
We present an update of a previous global analysis of the experimental data on azimuthal asymmetries in semi-inclusive deep inelastic scattering (SIDIS), from the HERMES and COMPASS Collaborations, and in e⁺e⁻ → h₁h₂X processes, from the Belle Collaboration. Compared to the first extraction, a more precise determination of the Collins fragmentation function and the transversity distribution function for u and d quarks is obtained.
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
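The first-order propagation underlying such an analysis, sigma_y^2 ≈ sum over i of (dy/dx_i)^2 sigma_i^2 with derivatives taken numerically, can be sketched on a made-up algebraic burn-velocity function. This is a hypothetical stand-in, not the report's finite-element model or its 25 parameters:

```python
import math

# Hypothetical algebraic burn-velocity model; the functional form and
# all nominal values / standard deviations below are invented.
def burn_velocity(emissivity, conductivity, density):
    return 0.8 * emissivity ** 2 / (conductivity * density)

nominal = {"emissivity": 0.9, "conductivity": 0.05, "density": 30.0}
sigma = {"emissivity": 0.05, "conductivity": 0.005, "density": 1.5}

# First-order (linearized) variance propagation with central
# finite-difference partial derivatives.
var = 0.0
for name in nominal:
    h = 1e-6 * nominal[name]
    up, dn = dict(nominal), dict(nominal)
    up[name] += h
    dn[name] -= h
    deriv = (burn_velocity(**up) - burn_velocity(**dn)) / (2 * h)
    var += (deriv * sigma[name]) ** 2

print(math.sqrt(var))   # standard deviation of the burn velocity
```

The per-parameter terms in the sum also rank the inputs, which is how a propagation study identifies a primary effect variable such as the foam emissivity.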
Sensitivity analysis for texture models applied to rust steel classification
NASA Astrophysics Data System (ADS)
Trujillo, Maite; Sadki, Mustapha
2004-05-01
The exposure of metallic structures to rust degradation during their operational life is a known problem, affecting storage tanks, steel bridges, ships, etc. In order to prevent this degradation and the potential related catastrophes, the surfaces have to be assessed and the appropriate surface treatment and coating applied according to the corrosion time of the steel. We previously investigated the potential of image processing techniques to tackle this problem; several mathematical methods were analyzed and evaluated on a database of 500 images. In this paper, we extend our previous research and provide a further analysis of textural mathematical methods for automatic detection of rust time in steel. Statistical descriptors are provided to evaluate the sensitivity of the results as well as the advantages and limitations of the different methods. Finally, a selector of classifier algorithms is introduced, and the ratio between sensitivity of the results and time response (execution time) is analyzed to balance good classification results (high sensitivity) against an acceptable response time for the automation of the system.
An, Xiangbo; Wang, Jingjing; Li, Hao; Lu, Zhizhen; Bai, Yan; Xiao, Han; Zhang, Youyi; Song, Yao
2016-01-01
Cardiac hypertrophy is a key pathological process in many cardiac diseases. However, early detection of cardiac hypertrophy is difficult with currently used non-invasive methods, and new approaches are in urgent need for efficient diagnosis of cardiac malfunction. Here we report that speckle tracking-based strain analysis is more sensitive than conventional echocardiography for early detection of pathological cardiac hypertrophy in the isoproterenol (ISO) mouse model. Pathological hypertrophy was induced by a single subcutaneous injection of ISO. Physiological cardiac hypertrophy was established by daily treadmill exercise for six weeks. Strain analysis, including radial strain (RS), radial strain rate (RSR) and longitudinal strain (LS), showed a marked decrease as early as 3 days after ISO injection. Moreover, unlike the regional changes in cardiac infarction, strain analysis revealed global cardiac dysfunction affecting the entire heart in ISO-induced hypertrophy. In contrast, conventional echocardiography detected altered E/E', an index reflecting cardiac diastolic function, only at 7 days after ISO injection. No change was detected in fractional shortening (FS), E/A or E'/A' at 3 days or 7 days after ISO injection. Interestingly, strain analysis revealed cardiac dysfunction only in ISO-induced pathological hypertrophy and not in the physiological hypertrophy induced by exercise. Taken together, our study indicates that strain analysis offers a more sensitive approach for early detection of cardiac dysfunction than conventional echocardiography. Moreover, multiple strain readouts distinguish pathological cardiac hypertrophy from physiological hypertrophy. PMID:26871457
Acid rain: Some preliminary results from global data analysis
NASA Astrophysics Data System (ADS)
Sequeira, R.
1981-02-01
Preliminary results of an analysis of global precipitation data from WMO (World Meteorological Organization) stations suggest that even remote maritime baseline stations, far removed from major continents, could become predisposed to acid rain if there is a deficiency of non-marine calcium relative to non-marine sulfate. The regional stations show greater complexity than the baseline stations in their precipitation chemistry. The overall results of this analysis suggest that not all non-marine sulfate and nitrate in precipitation could be present as acid.
Simplifying multivariate survival analysis using global score test methodology
NASA Astrophysics Data System (ADS)
Zain, Zakiyah; Aziz, Nazrina; Ahmad, Yuhaniz
2015-12-01
In clinical trials, the main purpose is often to compare efficacy between experimental and control treatments. Treatment comparisons often involve multiple endpoints, which further complicates the analysis of survival data. In the case of tumor patients, endpoints concerning survival times include the times from tumor removal until the first, second and third tumor recurrences, and the time to death. For each patient, these endpoints are correlated, and the estimation of the correlation between two score statistics is fundamental to the derivation of the overall treatment advantage. In this paper, the bivariate survival analysis method using the global score test methodology is extended to the multivariate setting.
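As a hypothetical illustration of the kind of aggregation a global score test performs, the sketch below (not taken from the paper) combines K correlated standardized score statistics into a single global statistic whose null variance accounts for their correlation. The Stouffer-type form, the numbers, and the function name are assumptions for demonstration only.

```python
import numpy as np

def global_score_statistic(z, R):
    """Stouffer-type global statistic accounting for correlation:
    Z = (1'z) / sqrt(1'R1), approximately N(0,1) under the null."""
    z = np.asarray(z, dtype=float)
    ones = np.ones_like(z)
    return float(ones @ z / np.sqrt(ones @ R @ ones))

# Hypothetical standardized scores for 3 correlated survival endpoints
z = np.array([1.8, 2.1, 1.5])
# Hypothetical estimated correlation matrix of the score statistics
R = np.array([[1.0, 0.4, 0.3],
              [0.4, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

Z = global_score_statistic(z, R)   # one overall test statistic
```

Ignoring the correlation (treating R as the identity) would overstate the evidence; the off-diagonal terms inflate the null variance of the summed scores.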
NASA Astrophysics Data System (ADS)
Fujie, Kentarou; Senba, Takasi
2016-08-01
This paper deals with positive radially symmetric solutions of the Neumann boundary value problem for the fully parabolic chemotaxis system

$$u_t = \Delta u - \nabla \cdot (u \nabla \chi(v)), \qquad \tau v_t = \Delta v - v + u \qquad \text{in } \Omega \times (0, \infty),$$

in a ball $\Omega \subset \mathbb{R}^2$, with a general sensitivity function $\chi(v)$ satisfying $\chi' > 0$ and the decay property $\chi'(s) \to 0$ as $s \to \infty$, parameter $\tau \in (0, 1]$, and nonnegative radially symmetric initial data. It is shown that if $\tau \in (0, 1]$ is sufficiently small, then the problem has a unique classical radially symmetric solution, which exists globally and remains uniformly bounded in time. In particular, this result establishes global existence of solutions in the case $\chi(v) = \chi_0 \log v$ for all $\chi_0 > 0$, which had been left as an open problem.
Analysis of Transition-Sensitized Turbulent Transport Equations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.
2005-01-01
The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.
Global Atmospheric Chemistry/Transport Modeling and Data-Analysis
NASA Technical Reports Server (NTRS)
Prinn, Ronald G.
1999-01-01
This grant supported a global atmospheric chemistry/transport modeling and data-analysis project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for trace gases; (b) utilization of these inverse methods, which use either the Model for Atmospheric Chemistry and Transport (MATCH), based on analyzed observed winds, or back-trajectories calculated from these same winds, for determining regional and global source and sink strengths for long-lived trace gases important in ozone depletion and the greenhouse effect; (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple "titrating" gases; and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important ultimate goals included determination of regional source strengths of important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements, and of hydrohalocarbons now used as alternatives to the above restricted halocarbons.
GLobal Ocean Data Analysis Project (GLODAP): Data and Analyses
Sabine, C. L.; Key, R. M.; Feely, R. A.; Bullister, J. L.; Millero, F. J.; Wanninkhof, R.; Peng, T. H.; Kozyr, A.
The GLobal Ocean Data Analysis Project (GLODAP) is a cooperative effort to coordinate global synthesis projects funded through NOAA, DOE, and NSF as part of the Joint Global Ocean Flux Study - Synthesis and Modeling Project (JGOFS-SMP). Cruises conducted as part of the World Ocean Circulation Experiment (WOCE), JGOFS, and the NOAA Ocean-Atmosphere Exchange Study (OACES) over the decade of the 1990s have created an important oceanographic database for the scientific community investigating carbon cycling in the oceans. The unified data help to determine the global distributions of both natural and anthropogenic inorganic carbon, including radiocarbon. These estimates provide an important benchmark against which future observational studies will be compared. They also provide tools for the direct evaluation of numerical ocean carbon models. GLODAP information available through CDIAC includes gridded and bottle data, a live server, an interactive atlas that provides access to data plots, and other tools for viewing and interacting with the data. [from http://cdiac.esd.ornl.gov/oceans/glodap/Glopintrod.htm]
Diffenbaugh, N.S.; Sloan, L.C.; Snyder, M.A.; Bell, J.L.; Kaplan, J.; Shafer, S.L.; Bartlein, P.J.
2003-01-01
Anthropogenic increases in atmospheric carbon dioxide (CO2) concentrations may affect vegetation distribution both directly through changes in photosynthesis and water-use efficiency, and indirectly through CO2-induced climate change. Using an equilibrium vegetation model (BIOME4) driven by a regional climate model (RegCM2.5), we tested the sensitivity of vegetation in the western United States, a topographically complex region, to the direct, indirect, and combined effects of doubled preindustrial atmospheric CO2 concentrations. Those sensitivities were quantified using the kappa statistic. Simulated vegetation in the western United States was sensitive to changes in atmospheric CO2 concentrations, with woody biome types replacing less woody types throughout the domain. The simulated vegetation was also sensitive to climatic effects, particularly at high elevations, due to both warming throughout the domain and decreased precipitation in key mountain regions such as the Sierra Nevada of California and the Cascade and Blue Mountains of Oregon. Significantly, when the direct effects of CO2 on vegetation were tested in combination with the indirect effects of CO2-induced climate change, new vegetation patterns were created that were not seen in either of the individual cases. This result indicates that climatic and nonclimatic effects must be considered in tandem when assessing the potential impacts of elevated CO2 levels.
Comparative Analysis of State Fish Consumption Advisories Targeting Sensitive Populations
Scherer, Alison C.; Tsuchiya, Ami; Younglove, Lisa R.; Burbacher, Thomas M.; Faustman, Elaine M.
2008-01-01
Objective: Fish consumption advisories are issued to warn the public of possible toxicological threats from consuming certain fish species. Although developing fetuses and children are particularly susceptible to toxicants in fish, fish also contain valuable nutrients. Hence, formulating advice for sensitive populations poses challenges. We conducted a comparative analysis of advisory Web sites issued by states to assess health messages that sensitive populations might access. Data sources: We evaluated state advisories accessed via the National Listing of Fish Advisories issued by the U.S. Environmental Protection Agency. Data extraction: We created criteria to evaluate advisory attributes such as risk and benefit message clarity. Data synthesis: All 48 state advisories issued at the time of this analysis targeted children, 90% (43) targeted pregnant women, and 58% (28) targeted women of childbearing age. Only six advisories addressed single contaminants, while the remainder based advice on 2-12 contaminants. Results revealed that advisories associated a dozen contaminants with specific adverse health effects. Beneficial health effects of any kind were specifically associated only with omega-3 fatty acids found in fish. Conclusions: These findings highlight the complexity of assessing and communicating information about multiple contaminant exposure from fish consumption. Communication regarding potential health benefits conferred by specific fish nutrients was minimal and focused primarily on omega-3 fatty acids. This overview suggests some lessons learned and highlights a lack of both clarity and consistency in providing the breadth of information that sensitive populations such as pregnant women need to make public health decisions about fish consumption during pregnancy. PMID:19079708
Towards a controlled sensitivity analysis of model development decisions
NASA Astrophysics Data System (ADS)
Clark, Martyn; Nijssen, Bart
2016-04-01
The current generation of hydrologic models have followed a myriad of different development paths, making it difficult for the community to test underlying hypotheses and identify a clear path to model improvement. Model comparison studies have been undertaken to explore model differences, but these studies have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than a systematic analysis of model shortcomings. This presentation will discuss a unified approach to process-based hydrologic modeling to enable controlled and systematic analysis of multiple model representations (hypotheses) of hydrologic processes and scaling behavior. Our approach, which we term the Structure for Unifying Multiple Modeling Alternatives (SUMMA), formulates a general set of conservation equations, providing the flexibility to experiment with different spatial representations, different flux parameterizations, different model parameter values, and different time stepping schemes. We will discuss the use of SUMMA to systematically analyze different model development decisions, focusing on both analysis of simulations for intensively instrumented research watersheds as well as simulations across a global dataset of FLUXNET sites. The intent of the presentation is to demonstrate how the systematic analysis of model shortcomings can help identify model weaknesses and inform future model development priorities.
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. Global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. Landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacement values and one of high values.
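The "basis set expansion" step can be sketched under toy assumptions: a synthetic ensemble whose displacement curves depend on a single hypothetical input `theta`, reduced by principal component analysis via SVD. This is only the dimensionality-reduction stage, not the paper's full meta-modelling chain.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_times = 50, 200
t = np.linspace(0.0, 1.0, n_times)

# Toy ensemble: each run's displacement time series scales with one
# hypothetical input parameter theta (e.g. a slip-surface property)
theta = rng.uniform(0.5, 2.0, n_runs)
Y = np.outer(theta, t) + 0.01 * rng.standard_normal((n_runs, n_times))

# Principal component analysis via SVD of the centered ensemble
Yc = Y - Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
scores = U * s                      # per-run score on each temporal mode
explained = s**2 / np.sum(s**2)     # fraction of variance per mode

# In this toy case one mode dominates, and its score tracks theta,
# so sensitivity analysis can focus on a single scalar per run
corr = np.corrcoef(scores[:, 0], theta)[0, 1]
```

In the full methodology, each retained component score would then be emulated by a cheap meta-model on which the Sobol' indices are computed.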
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors interact dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, along with a summary and physics discussion of the EFT-1 driving factors that the tool found.
Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis
NASA Technical Reports Server (NTRS)
Burgreen, Gregory W.
1995-01-01
An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a significant factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.
Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)
Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.
2012-10-01
No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.
Thompson, S.L.; Pollard, D.
1995-05-01
The sensitivity of the equilibrium climate to doubled atmospheric CO2 is investigated using the GENESIS global climate model version 1.02. The atmospheric general circulation model is a heavily modified version of the NCAR CCM1 and is coupled to a multicanopy land-surface model (LSX); multilayer models of soil, snow, and sea ice; and a slab ocean mixed layer. Features that are relatively new in CO2 sensitivity studies include explicit subgrid convective plumes, PBL mixing, a diurnal cycle, a complex land-surface model, sea ice dynamics, and semi-Lagrangian transport of water vapor. The global annual surface-air warming in the model is 2.1°C, with global precipitation increasing by 3.3%. Over most land areas, most of the changes in precipitation are insignificant at the 5% level compared to interannual variability. Decreases in soil moisture in summer are not as large as in most previous models and only occur poleward of ~55° in Siberia, northern Canada, and Alaska. Sea ice area in September recedes by 62% in the Arctic and by 43% in the Antarctic. The area of Northern Hemispheric permafrost decreases by 48%, while the total area of Northern Hemispheric snowcover in January decreases by only 13%. The effects of several modifications to the model physics are described. Replacing LSX and the multilayer soil with a single-layer bucket model causes little change to CO2 sensitivities on global scales, and the regions of summer drying in northern high latitudes are reproduced, although with somewhat greater amplitude. Compared to convective adjustment, penetrative plume convection increases the tropical Hadley Cell response but decreases the global warming slightly, by 0.1° to 0.3°, contrary to several previous GCM studies in which penetrative convection was associated with greater CO2 warming. 60 refs., 20 figs., 3 tabs.
Global analysis of population growth and river water quality
NASA Astrophysics Data System (ADS)
Wen, Yingrong; Schoups, Gerrit; van de Giesen, Nick
2014-05-01
Human-related pressures on river water quality are a concern of global proportions. However, little is known about the specific impact of increasing population on river water quality, even though such knowledge provides a vital environmental reference for water management. Combining global gridded data on population and river discharge with digitized river networks, we conduct numerical simulations to demonstrate the direct impact of population growth on river water quality. Our model traces the transport, dilution, and degradation of anthropogenic organic matter (BOD) emissions into rivers. Spanning the period from the early 20th century to the present, our analysis indicates that the pressure on downstream river networks has markedly increased since the population explosion starting in 1950, especially in developing countries. The ratio of population to river discharge reveals the link between impact severity and dilution capacity. In addition, a denser population is found to be correlated with higher impact severity. The direct influence of population on global river water quality becomes less dominant as society develops, and it should be studied as a fundamental reference for human-related river water management. Keywords: Population growth, River water quality, Space-time analysis, Human activities, Water management
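The dilution-and-degradation idea behind routing BOD emissions through a river network can be illustrated with a toy calculation (an assumption-laden sketch, not the paper's model): a daily BOD load is diluted by river discharge, then decays first-order during downstream travel. The function name, units, and rate constant are hypothetical.

```python
import math

def downstream_bod(load_kg_day, discharge_m3_s, k_per_day, travel_days):
    """In-stream BOD concentration (mg/L) after dilution by river
    discharge and first-order decay during downstream travel."""
    discharge_m3_day = discharge_m3_s * 86400.0
    c0 = load_kg_day / discharge_m3_day * 1e3   # kg/m3 -> mg/L
    return c0 * math.exp(-k_per_day * travel_days)

# Hypothetical city: 1000 kg BOD/day into a river of 10 m3/s,
# decay rate 0.23/day, two days of downstream travel
c = downstream_bod(1000.0, 10.0, 0.23, 2.0)     # roughly 0.73 mg/L
```

The population-to-discharge ratio discussed above maps directly onto this picture: for a fixed per-capita load, concentration scales with population divided by dilution capacity.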
The adjoint sensitivity method of global electromagnetic induction for CHAMP magnetic data
NASA Astrophysics Data System (ADS)
Martinec, Zdeněk; Velímský, Jakub
2009-12-01
An existing time-domain spectral-finite element approach for the forward modelling of electromagnetic induction vector data as measured by the CHAMP satellite is, in this paper, supplemented by a new method of computing the sensitivity of the CHAMP electromagnetic induction data to the Earth's mantle electrical conductivity, which we term the adjoint sensitivity method. The forward and adjoint initial boundary-value problems, both solved in the time domain, are identical, except for the specification of prescribed boundary conditions. The respective boundary-value data at the satellite's altitude are the X magnetic component measured by the CHAMP vector magnetometer along the satellite track for the forward method, and the difference between the measured and predicted Z magnetic component for the adjoint method. The squares of these differences, summed over all CHAMP tracks, determine the misfit. The sensitivities of the CHAMP data, that is, the partial derivatives of the misfit with respect to mantle conductivity parameters, are then obtained by the scalar product of the forward and adjoint solutions, multiplied by the gradient of the conductivity and integrated over all CHAMP tracks. Such exactly determined sensitivities are checked against numerical differentiation of the misfit, and good agreement is obtained. The attractiveness of the adjoint method lies in the fact that the adjoint sensitivities are calculated for the price of only an additional forward calculation, regardless of the number of conductivity parameters. However, since the adjoint solution proceeds backwards in time, the forward solution must be stored at each time step, leading to memory requirements that are linear with respect to the number of steps undertaken. Having determined the sensitivities, we apply the conjugate gradient method to infer 1-D and 2-D conductivity structures of the Earth based on the CHAMP residual time series (after the subtraction of static field and secular variations
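The consistency check described above (adjoint-computed sensitivities versus numerical differentiation of the misfit) can be sketched on a toy linear forward problem. Here `G` is a hypothetical stand-in operator, not the electromagnetic induction solver; for a least-squares misfit the gradient 2 Gᵀ(Gm − d) plays the role of the adjoint sensitivity.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.standard_normal((20, 5))    # stand-in forward operator
m = rng.standard_normal(5)          # conductivity-like parameters
d_obs = rng.standard_normal(20)     # "observed" data

def misfit(m):
    r = G @ m - d_obs
    return float(r @ r)

# Adjoint-style gradient: one extra forward-like computation yields
# all partial derivatives at once, regardless of parameter count
grad_adjoint = 2.0 * G.T @ (G @ m - d_obs)

# Finite-difference gradient for verification: one misfit pair per parameter
eps = 1e-6
grad_fd = np.array([(misfit(m + eps * e) - misfit(m - eps * e)) / (2 * eps)
                    for e in np.eye(5)])
```

The cost contrast is the point: finite differences need two extra solves per parameter, while the adjoint route needs one extra solve in total.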
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and "shallow" sources as compared to "deep" sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
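A Sobol first-order index estimator of the pick-and-freeze (Saltelli) type, the kind of global sensitivity analysis applied here, can be sketched on a toy additive model; the stand-in model, sample size, and estimator variant are assumptions, not the study's actual J&E setup.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples, rng):
    """Estimate first-order Sobol indices with the Saltelli (2010)
    pick-and-freeze estimator: S_i = E[fB * (f(AB_i) - fA)] / Var(f)."""
    A = rng.random((n_samples, n_inputs))
    B = rng.random((n_samples, n_inputs))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "freeze" column i from B into A
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

def toy_model(X):                    # stand-in for the J&E model
    return X[:, 0] + 2.0 * X[:, 1]   # analytic indices: S = (0.2, 0.8)

rng = np.random.default_rng(42)
S = first_order_sobol(toy_model, 2, 1 << 15, rng)
```

For this additive toy model the indices sum to one; interactions in a real model like J&E would show up as a gap between the first-order and total-order indices.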
2013-01-01
Background: Millions of dollars are invested annually under the umbrella of national health systems strengthening. Global health initiatives provide funding for low- and middle-income countries through disease-oriented programmes while maintaining that the interventions simultaneously strengthen systems. However, it is as yet unclear which, and to what extent, system-level interventions are being funded by these initiatives, nor is it clear how much funding they allocate to disease-specific activities through the conventional 'vertical-programming' approach. Such funding can be channelled to one or more of the health system building blocks while targeting disease(s), or explicitly to system-wide activities. Methods: We operationalized the World Health Organization health system framework of the six building blocks to conduct a detailed assessment of Global Fund health system investments. Our application of this framework provides a comprehensive quantification of system-level interventions. We applied it systematically to a random subset of 52 of the 139 grants funded in Round 8 of the Global Fund to Fight AIDS, Tuberculosis and Malaria (totalling approximately US$1 billion). Results: According to the analysis, 37% (US$362 million) of the Global Fund Round 8 funding was allocated to health systems strengthening. Of that, 38% (US$139 million) was for generic system-level interventions, rather than disease-specific system support. Around 82% of health systems strengthening funding (US$296 million) was allocated to service delivery, human resources, and medicines & technology, and within each of these to two to three interventions. The governance, financing, and information building blocks received relatively low funding. Conclusions: This study shows that a substantial portion of the Global Fund's Round 8 funds was devoted to health systems strengthening. Dramatic skewing among the health system building blocks suggests opportunities for more balanced
Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.
2014-02-01
This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.
GPU-based Integration with Application in Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar
2010-05-01
The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modelling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is the Monte Carlo based class, first proposed by Sobol and later developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10⁹ equations at every time-step, and the number of time-steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but for parallel implementations. Good scrambling is
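The quasirandom integration step can be illustrated on the CPU with SciPy's scrambled Sobol generator (a sketch assuming SciPy ≥ 1.7; it is not the GPU implementation described above, and the integrand is a smooth toy function, not the DEM output):

```python
import numpy as np
from scipy.stats import qmc

# Smooth test integrand on [0,1]^2 with known integral (e - 1)^2.
f = lambda x: np.exp(x[:, 0] + x[:, 1])
exact = (np.e - 1.0) ** 2

# Scrambled Sobol sequence from scipy.stats.qmc.
sampler = qmc.Sobol(d=2, scramble=True, seed=42)
pts = sampler.random_base2(m=12)        # 2^12 = 4096 points
qmc_est = f(pts).mean()

# Plain Monte Carlo with the same sample budget, for comparison.
rng = np.random.default_rng(42)
mc_est = f(rng.random((4096, 2))).mean()

qmc_err, mc_err = abs(qmc_est - exact), abs(mc_est - exact)
```

For smooth integrands the scrambled quasirandom estimate typically converges far faster than plain Monte Carlo at the same budget, which is the motivation for the GPU generator described in the abstract.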
Sensitivity analysis of an urban stormwater microorganism model.
McCarthy, D T; Deletic, A; Mitchell, V G; Diaper, C
2010-01-01
This paper presents the sensitivity analysis of a newly developed model which predicts microorganism concentrations in urban stormwater (MOPUS--MicroOrganism Prediction in Urban Stormwater). The analysis used Escherichia coli data collected from four urban catchments in Melbourne, Australia. The MICA program (Model Independent Markov Chain Monte Carlo Analysis), used to conduct this analysis, applies a carefully constructed Markov Chain Monte Carlo procedure, based on the Metropolis-Hastings algorithm, to explore the model's posterior parameter distribution. It was determined that the majority of parameters in the MOPUS model were well defined, with the data from the MCMC procedure indicating that the parameters were largely independent. However, a sporadic correlation found between two parameters indicates that some improvements may be possible in the MOPUS model. This paper identifies the parameters which are the most important during model calibration; it was shown, for example, that parameters associated with the deposition of microorganisms in the catchment were more influential than those related to microorganism survival processes. These findings will help users calibrate the MOPUS model, and will help the model developer to improve the model, with efforts currently being made to reduce the number of model parameters, whilst also reducing the slight interaction identified.
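Markov Chain Monte Carlo exploration of a posterior, as the MICA program performs, can be illustrated with a minimal random-walk Metropolis sampler (the special case of Metropolis-Hastings with a symmetric proposal). The standard-normal target below is purely illustrative; the MOPUS likelihood is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_target(theta):
    # Illustrative log-posterior: standard normal (stand-in for a real model).
    return -0.5 * theta ** 2

def metropolis(log_target, theta0, n_steps, step=1.0):
    # Random-walk Metropolis: propose theta' = theta + step * N(0, 1),
    # accept with probability min(1, pi(theta') / pi(theta)).
    chain = np.empty(n_steps)
    theta, lp = theta0, log_target(theta0)
    for i in range(n_steps):
        prop = theta + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(log_target, theta0=0.0, n_steps=20_000)
burned = chain[2_000:]   # discard burn-in before summarizing
```

After burn-in, the chain's sample moments approximate the posterior moments; parameter correlations of the kind reported for MOPUS would show up as off-diagonal structure in a multivariate chain.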
A global low order spectral model designed for climate sensitivity studies
NASA Technical Reports Server (NTRS)
Hanna, A. F.; Stevens, D. E.
1984-01-01
A two level, global, spectral model using pressure as a vertical coordinate is developed. The system of equations describing the model is nonlinear and quasi-geostrophic. A moisture budget is calculated in the lower layer only with moist convective adjustment between the two layers. The mechanical forcing of topography is introduced as a lower boundary vertical velocity. Solar forcing is specified assuming a daily mean zenith angle. On land and sea ice surfaces a steady state thermal energy equation is solved to calculate the surface temperature. Over the oceans the sea surface temperatures are prescribed from the climatological average of January. The model is integrated to simulate the January climate.
Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav
2015-01-01
Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290
Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.
2010-09-01
Arctic sea ice is an important component of the global climate system and, due to feedback effects, the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters' influence on the solution.
A statistical bias correction for climate model data: parameter sensitivity analysis.
NASA Astrophysics Data System (ADS)
Piani, C.; Coppola, E.; Mariotti, L.; Haerter, J.; Hagemann, S.
2009-04-01
Water management adaptation strategies depend crucially on high quality projections of the hydrological cycle in view of anthropogenic climate change. The quality of hydrological cycle projections depends, in turn, on the successful coupling of hydrological models to global (GCMs) or regional climate models (RCMs). It is well known within the climate modelling community that hydrological forcing output from climate models, in particular precipitation, is in part affected by large biases. The bias affects all aspects of the statistics, that is, the mean, standard deviation (variability), skewness (drizzle versus intense events, dry days) etc. The state-of-the-art approach to bias correction is based on histogram equalization techniques. Such techniques intrinsically correct all moments of the statistical intensity distribution. However, these methods are applicable to hydrological projections only to the extent that the correction itself is robust, that is, defined by few parameters that are well constrained by available data and constant in time. Here we present details of the statistical bias correction methodology developed within the European project "Water and Global Change" (WATCH). We suggest different versions of the method that allow it to be tailored to differently structured biases from different RCMs. Crucially, application of the methodology also allows for a sensitivity analysis of the correction parameters with respect to other gridded variables such as orography and land use. Here we explore some of these sensitivities as well.
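Histogram equalization (empirical quantile mapping) can be sketched in its simplest non-parametric form. The gamma-distributed "model" and "observed" precipitation samples below are synthetic, and the WATCH method's few-parameter transfer function is not reproduced here; this is only the underlying idea.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic samples: the "model" output is biased in mean and spread
# relative to the "observations" (both illustrative gamma draws).
obs = rng.gamma(shape=2.0, scale=1.0, size=20_000)
mod = 0.5 * rng.gamma(shape=2.0, scale=1.0, size=20_000) + 1.0

def quantile_map(x, mod_ref, obs_ref):
    # Empirical histogram equalization: send each value through the model
    # CDF, then through the inverse observed CDF, via matched quantiles.
    qs = np.linspace(0.0, 1.0, 1001)
    mod_q = np.quantile(mod_ref, qs)
    obs_q = np.quantile(obs_ref, qs)
    return np.interp(x, mod_q, obs_q)

corrected = quantile_map(mod, mod, obs)
```

Because every quantile is remapped, the correction adjusts all moments of the intensity distribution at once, which is the property the abstract attributes to histogram equalization techniques.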
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
Sensitivity analysis for computer model projections of hurricane losses.
Iman, Ronald L; Johnson, Mark E; Watson, Charles C
2005-10-01
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from $2 billion to $3 billion in losses late on the 12th to a peak of $50 billion for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm struck the resort areas of Charlotte Harbor and moved across the densely populated central part of the state, with early poststorm estimates in the $28 to $31 billion range, and final estimates converging at $15 billion as the actual intensity at landfall became apparent. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has a great appreciation for the role of computer models in projecting losses from hurricanes. The FCHLPM contracts with a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a sophisticated computer model based on the Holland wind field. Sensitivity analyses presented in this article utilize standardized regression coefficients to quantify the contribution of the computer input variables to the magnitude of the wind speed.
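Standardized regression coefficients (SRCs) can be sketched as follows: fit a linear regression of the output on the inputs and rescale each slope by the ratio of input to output standard deviations, so coefficients become comparable across inputs with different units. The synthetic inputs below are hypothetical stand-ins, not the Holland wind field variables.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic input sample and an output dominated by the first input
# (all names and coefficients are illustrative).
n = 5_000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.5, size=n)

def src(X, y):
    # Least-squares fit with intercept, then standardize the slopes:
    # SRC_i = beta_i * std(x_i) / std(y).
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    return beta * X.std(axis=0) / y.std()

coeffs = src(X, y)
```

For a nearly additive linear model the squared SRCs approximately partition the output variance, which is why they serve as inexpensive sensitivity measures.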
ERIC Educational Resources Information Center
Yukhymenko, Mariya
2011-01-01
The current meta-analysis study summarizes the effects of the GlobalEd Project, a web-based educational intervention of international negotiations embedded within social studies curricula, on middle and high school students' interest in social studies and negotiation self efficacy. Meta-analytic evidence supports statistically significant…
NASA Astrophysics Data System (ADS)
Danabasoglu, Gokhan; Peacock, Synte; Lindsay, Keith; Tsumune, Daisuke
Sensitivity of the oceanic chlorofluorocarbon CFC-11 uptake to physical initial conditions and surface dynamical forcing (heat and salt fluxes and wind stress) is investigated in a global ocean model used in climate studies. Two different initial conditions are used: a solution following a short integration starting with observed temperature and salinity and zero velocities, and the quasi-equilibrium solution of an independent integration. For surface dynamical forcing, recently developed normal-year and interannually varying (1958-2000) data sets are used. The model CFC-11 global and basin inventories, particularly in the normal-year forcing case, are below the observed mean estimates, but they remain within the observational error bars. Column inventory spatial distributions indicate nontrivial differences due to both initial condition and forcing changes, particularly in the northern North Atlantic and Southern Ocean. These differences are larger between forcing sensitivity experiments than between the initial condition cases. The comparisons along the A16N and SR3 WOCE sections also show differences between cases. However, comparisons with observations do not clearly favor a particular case, and model-observation differences remain much larger than model-model differences for all simulations. The choice of initial condition does not significantly change the CFC-11 distributions. Both because of locally large differences between normal-year and interannually varying simulations and because the dynamical and CFC-11 forcing calendars are synchronized, we favor using the more realistic interannually varying forcing in future simulations, given the availability of the forcing data sets.
Sensitivity analysis for high accuracy proximity effect correction
NASA Astrophysics Data System (ADS)
Thrun, Xaver; Browning, Clyde; Choi, Kang-Hoon; Figueiro, Thiago; Hohle, Christoph; Saib, Mohamed; Schiavone, Patrick; Bartha, Johann W.
2015-10-01
A sensitivity analysis (SA) algorithm was developed and tested to understand the influence of different test pattern sets on the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice. It allows attributing the variance of the output of a model to the sum of the variances of each input of the model and their correlated factors. The objective of this development is to increase the accuracy of the resolved PSF model in the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of the development. Fraunhofer IPMS-CNT exposed the prepared data and examined the results to visualize the link between the sensitivities of the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, i.e. which PSF parameter is affected by the use of a given pattern. Second, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which range of proximity effect correction is crucial for which portion of a real application pattern in the electron beam exposure.
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-10-02
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
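The NRMSE metric used above can be sketched in a few lines. Note that normalization conventions vary (observed mean, range, or plant capacity); the choice below (observed mean) is one common option and an assumption, not necessarily the convention used in the study.

```python
import numpy as np

def nrmse(forecast, observed):
    # Root-mean-squared error normalized by the mean of the observations.
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / observed.mean()

# Tiny illustrative series (hypothetical power values, kW).
err = nrmse([10.0, 20.0, 30.0], [10.0, 20.0, 36.0])
```

Normalizing by plant capacity instead (e.g., 51 kW here) bounds the metric by the system rating, which is often preferred when comparing plants of different sizes.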
NASA Astrophysics Data System (ADS)
Mani, Shanmugam; Merino, Agustín; García-Oliva, Felipe; Riotte, Jean; Sukumar, Raman
2016-04-01
Soil organic carbon (SOC) storage and quality are among the most important factors determining ecological processes in tropical forests, which are especially sensitive to global climate change (GCC). For India, GCC scenarios project longer drought periods and more wildfire, which may affect SOC and therefore the capacity of forests for C sequestration. The aim of the study was to evaluate the amount of soil C and its quality in the mineral soil across a precipitation gradient, together with other factors (vegetation, pH, soil texture and bedrock composition), in order to generate SOC predictions under GCC. Six soil samples were collected (top 10 cm depth) from each of 19 1-ha permanent plots in the Mudumalai Wildlife Sanctuary of southern India, which span four types of forest vegetation (i.e. dry thorn, dry deciduous, moist deciduous and semi-evergreen forest) distributed along the rainfall gradient. The driest sites are dominated by sandy soils, while the soil clay proportion increases at the wet sites. Total organic C was measured with a Leco CN analyser, and SOM quality was assessed by Differential Scanning Calorimetry (DSC) and solid-state 13C CP-MAS NMR analyses. Soil organic C was positively correlated with precipitation (R2 = 0.502, p<0.01) and with soil clay content (R2 = 0.15, p<0.05), and negatively with soil sand content (R2 = 0.308, p<0.001) and with pH (R2 = 0.529, p<0.01); the C/N ratio was positively correlated only with clay (R2 = 0.350, p<0.01). The driest sites (dry thorn forest) have a lower proportion of thermally recalcitrant organic matter (Q2, 375-475 °C) than the other sites (p<0.05), and this SOC fraction correlated positively with rainfall (R2 = 0.27, p=0.01). The best-fitting model for Q2 included rainfall, pH, sand, clay, C and C/N (R2 = 0.52, p=0.01). Principal component analysis explains 77% of the total variance, with the sites distributed along the rainfall gradient on the first component. These results suggest that 50% of the variance was explained
Drivers of Wetland Conversion: a Global Meta-Analysis
van Asselen, Sanneke; Verburg, Peter H.; Vermaat, Jan E.; Janse, Jan H.
2013-01-01
Meta-analysis of case studies has become an important tool for synthesizing case study findings in land change. Meta-analyses of deforestation, urbanization, desertification and change in shifting cultivation systems have been published. The present study adds to this literature with an analysis of the proximate causes and underlying forces of wetland conversion at a global scale, using two complementary approaches of systematic review. Firstly, a meta-analysis of 105 case-study papers describing wetland conversion was performed, showing that different combinations of multiple-factor proximate causes and underlying forces drive wetland conversion. Agricultural development has been the main proximate cause of wetland conversion, and economic growth and population density are the most frequently identified underlying forces. Secondly, to add a more quantitative component to the study, a logistic meta-regression analysis was performed to estimate the likelihood of wetland conversion worldwide, using globally-consistent biophysical and socioeconomic location factor maps. Significant factors explaining wetland conversion, in order of importance, are market influence, total wetland area (lower conversion probability), mean annual temperature and cropland or built-up area. The regression results support the outcomes of the meta-analysis of the processes of conversion mentioned in the individual case studies. In other meta-analyses of land change, similar factors (e.g., agricultural development, population growth, market/economic factors) are also identified as important causes of various types of land change (e.g., deforestation, desertification). Meta-analysis helps to identify commonalities across the various local case studies and to identify which variables may lead individual cases to behave differently. The meta-regression provides maps indicating the likelihood of wetland conversion worldwide based on the location factors that have determined historic
Neutron activation analysis; A sensitive test for trace elements
Hossain, T.Z. (Ward Lab.)
1992-01-01
This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10¹² neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10⁻⁸) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.
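The activation step described above follows the standard activation equation A(t) = N·φ·σ·(1 − e^(−λt)). The sketch below uses the article's flux figure of 10¹² n/cm²/s, but the sample mass, molar mass, cross-section, and half-life are illustrative assumptions, not values from the paper.

```python
import numpy as np

N_A = 6.02214076e23          # Avogadro's number, 1/mol

def induced_activity(mass_g, molar_mass, sigma_barn, flux, half_life_s, t_s):
    # A(t) = N * flux * sigma * (1 - exp(-lambda * t)), decays per second.
    N = mass_g / molar_mass * N_A            # number of target nuclei
    sigma_cm2 = sigma_barn * 1e-24           # 1 barn = 1e-24 cm^2
    lam = np.log(2.0) / half_life_s          # decay constant
    return N * flux * sigma_cm2 * (1.0 - np.exp(-lam * t_s))

# Illustrative case: 10 mg of a nuclide (molar mass 100 g/mol) with a
# 1-barn capture cross-section, irradiated in a 1e12 n/cm^2/s flux.
half_life = 3600.0                           # assumed 1-hour product half-life
a_one_hl = induced_activity(0.010, 100.0, 1.0, 1e12, half_life, half_life)
a_sat = induced_activity(0.010, 100.0, 1.0, 1e12, half_life, 1e9)
```

Irradiating for one half-life yields exactly half the saturation activity, which is why irradiation times beyond a few half-lives of the product nuclide give diminishing returns.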
Sensitivity analysis and optimization of thin-film thermoelectric coolers
NASA Astrophysics Data System (ADS)
Harsha Choday, Sri; Roy, Kaushik
2013-06-01
The cooling performance of a thermoelectric (TE) material is dependent on the figure of merit (ZT = S²σT/κ), where S is the Seebeck coefficient, and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter for the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi₂Te₃ is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than that of the highest ZT. We also establish the level of contact parasitics needed for their impact on TE cooling to be negligible.
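The asymmetry the authors examine can be seen directly from the figure of merit: since ZT = S²σT/κ, the logarithmic sensitivities are fixed at +2 for S, +1 for σ, and −1 for κ, so the Seebeck coefficient is twice as influential in relative terms as either conductivity. A quick numeric check, with representative room-temperature values (illustrative only, not taken from the article):

```python
import math

def zt(S, sigma, kappa, T):
    # Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.
    return S ** 2 * sigma * T / kappa

# Representative Bi2Te3-like values (assumed): V/K, S/m, W/(m K), K.
S, sigma, kappa, T = 200e-6, 1e5, 1.5, 300.0
base = zt(S, sigma, kappa, T)

# Logarithmic sensitivities d(ln ZT)/d(ln p) via a small relative bump.
eps = 1e-6
sens_S = (math.log(zt(S * (1 + eps), sigma, kappa, T))
          - math.log(base)) / math.log(1 + eps)
sens_kappa = (math.log(zt(S, sigma, kappa * (1 + eps), T))
              - math.log(base)) / math.log(1 + eps)
```

These constant elasticities hold only for the ideal expression; contact parasitics, as the paper shows, break the symmetry further and shift the optimum operating point.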
Sensitivity analysis for causal inference using inverse probability weighting.
Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C
2011-09-01
Evaluation of impact of potential uncontrolled confounding is an important component for causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score that can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
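The inverse-probability-weighting machinery can be sketched on simulated data. The sensitivity knob below multiplies the propensity odds by a factor, loosely mimicking a multiplicative error in the propensity score; the exact parametrization is an assumption for illustration, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated observational data: confounder X drives treatment and outcome.
n = 200_000
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-X))           # true propensity score
T = rng.random(n) < p                  # treatment assignment
Y = 2.0 * T + X + rng.normal(size=n)   # true average treatment effect = 2

def ipw_ate(Y, T, p):
    # Horvitz-Thompson style IPW estimate of E[Y(1)] - E[Y(0)].
    return np.mean(T * Y / p) - np.mean((~T) * Y / (1.0 - p))

def ipw_ate_sensitivity(Y, T, p, gamma):
    # Perturb the propensity odds by gamma (hypothetical error model),
    # then re-estimate; sweeping gamma traces out a sensitivity band.
    odds = p / (1.0 - p) * gamma
    p_err = odds / (1.0 + odds)
    return ipw_ate(Y, T, p_err)

ate_hat = ipw_ate(Y, T, p)
ate_g05 = ipw_ate_sensitivity(Y, T, p, 0.5)
ate_g2 = ipw_ate_sensitivity(Y, T, p, 2.0)
```

With the correct propensities the estimate recovers the true effect; sweeping gamma shows how strong an uncontrolled distortion of the propensity score would have to be to move the inference materially.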
Apparatus and Method for Ultra-Sensitive trace Analysis
Lu, Zhengtian; Bailey, Kevin G.; Chen, Chun Yen; Li, Yimin; O'Connor, Thomas P.; Young, Linda
2000-01-03
An apparatus and method for conducting ultra-sensitive trace element and isotope analysis. The apparatus injects a sample through a fine nozzle to form an atomic beam. A DC discharge is used to elevate select atoms to a metastable energy level. These atoms are then acted on by a laser oriented orthogonally to the beam path to reduce the transverse velocity and decrease the divergence angle of the beam. The beam then enters a Zeeman slower, where a counter-propagating laser beam acts to slow the atoms down. Selected atoms are then captured in a magneto-optical trap, where they undergo fluorescence. A portion of the scattered photons is imaged onto a photo-detector, and the results are analyzed to detect the presence of single atoms of the specific trace elements.
Displacement Monitoring and Sensitivity Analysis in the Observational Method
NASA Astrophysics Data System (ADS)
Górska, Karolina; Muszyński, Zbigniew; Rybak, Jarosław
2013-09-01
This work discusses the fundamentals of designing deep excavation support by means of the observational method. The effective tools for optimum design with the use of the observational method are inclinometric and geodetic monitoring, which provide data for the systematically updated calibration of the numerical computational model. The analysis included methods for selecting data for the design (by choosing the basic random variables), as well as methods for the on-going verification of the results of numerical calculations (e.g., FEM) by measuring structure displacement using geodetic and inclinometric techniques. The presented example shows the sensitivity analysis of the calculation model for a cantilever wall in non-cohesive soil; that analysis makes it possible to select the data to be later subject to calibration. The paper presents the results of measurements of sheet pile wall displacement, carried out by means of the inclinometric method and, simultaneously, two geodetic methods, successively with the deepening of the excavation. This work also includes critical comments regarding the usefulness of the obtained data, as well as practical aspects of taking measurements under the conditions of on-going construction works.
Isotopic Ratio Outlier Analysis Global Metabolomics of Caenorhabditis elegans
Szewc, Mark A.; Garrett, Timothy; Menger, Robert F.; Yost, Richard A.; Beecher, Chris; Edison, Arthur S.
2014-01-01
We demonstrate the global metabolic analysis of Caenorhabditis elegans stress responses using a mass spectrometry-based technique called Isotopic Ratio Outlier Analysis (IROA). In an IROA protocol, control and experimental samples are isotopically labeled with 95% and 5% 13C, respectively, and the two sample populations are mixed together for uniform extraction, sample preparation, and LC-MS analysis. This labeling strategy provides several advantages over conventional approaches: 1) compounds arising from biosynthesis are easily distinguished from artifacts, 2) errors from sample extraction and preparation are minimized because the control and experiment are combined into a single sample, 3) measurement of both the molecular weight and the exact number of carbon atoms in each molecule provides extremely accurate molecular formulae, and 4) relative concentrations of all metabolites are easily determined. A heat shock perturbation was conducted on C. elegans to demonstrate this approach. We identified many compounds that significantly changed upon heat shock, including several from the purine metabolism pathway, which we use to demonstrate the approach. The metabolomic response information obtained by IROA may be interpreted in the context of the wealth of genetic and proteomic information available for C. elegans. Furthermore, the IROA protocol can be applied to any organism that can be isotopically labeled, making it a powerful new tool in a global metabolomics pipeline. PMID:24274725
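The carbon-counting logic in point 3 above can be sketched numerically: with 5% and 95% 13C labeling, a compound's two isotopologue envelopes are approximately binomial, and the spacing between their most intense peaks recovers the number of carbon atoms. The sketch below is a minimal illustration with idealized intensities, not the IROA software:

```python
from math import comb

def isotopologue_envelope(n_carbons, p13c):
    """Binomial probability of k heavy carbons out of n at 13C fraction p13c."""
    return [comb(n_carbons, k) * p13c**k * (1 - p13c)**(n_carbons - k)
            for k in range(n_carbons + 1)]

def estimate_n_carbons(env_5, env_95):
    """The 5%-13C envelope peaks near k=0 and the 95% envelope near k=n,
    so the gap between the two most intense isotopologues gives the
    carbon count directly."""
    k_light = env_5.index(max(env_5))
    k_heavy = env_95.index(max(env_95))
    return k_heavy - k_light

n = 10  # hypothetical metabolite with 10 carbons
env_5 = isotopologue_envelope(n, 0.05)
env_95 = isotopologue_envelope(n, 0.95)
print(estimate_n_carbons(env_5, env_95))  # -> 10
```

In a real spectrum the paired envelopes also flag biological compounds: a peak lacking its mirrored partner is an artifact rather than a biosynthesized metabolite.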
Henderson-Sellers, A.; McGuffie, K.; Gross, C.
1995-07-01
Increasing levels of atmospheric CO2 will not only modify climate, they will also likely increase the water-use efficiency of plants by decreasing stomatal openings. The effect of the imposition of "doubled stomatal resistance" on climate is investigated in off-line simulations with the Biosphere-Atmosphere Transfer Scheme (BATS) and in two sets of global climate model simulations: for present-day and doubled atmospheric CO2 concentrations. The anticipated evapotranspiration decrease is seen most clearly in the boreal forests in the summer although, for the present-day climate (but not at 2 x CO2), there are also noticeable responses in the tropical forests in South America. In the latitude zone 44°N to 58°N, evapotranspiration decreases by -15 W m⁻², temperatures increase by ≈2 K, and the sensible heat flux by +15 W m⁻². Soil moisture is often, but less extensively, increased, which can cause increases in runoff. The responses at 2 x CO2 are larger in the 44°N to 58°N zone than elsewhere. Globally, the impact of imposing a doubled stomatal resistance in the present-day climate is an increase in the annually averaged surface air temperature of 0.13 K and a reduction in total precipitation of -0.82%. If both the atmospheric CO2 content and the stomatal resistance are doubled, the global responses in surface air temperature and precipitation are +2.72 K and +5.01%, compared with +2.67 K and +7.73% if CO2 is doubled but stomatal resistance remains unchanged as in the usual "greenhouse" experiment. Doubling stomatal resistance as well as atmospheric CO2 results in increased soil moisture in northern midlatitudes in summer. 40 refs., 17 figs., 5 tabs.
Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.
Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier
2012-12-01
The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the 154 sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and within the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
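The Morris OAT screening described above can be sketched generically: from several random base points, perturb one factor at a time and average the absolute scaled output changes (mu*). The toy model below mimics the Z/prec/eto setup in spirit only; it is not the InVEST water-provisioning module:

```python
import numpy as np

rng = np.random.default_rng(0)

def morris_elementary_effects(model, bounds, r=20, delta=0.1):
    """Crude Morris OAT screening: for r random base points, perturb each
    factor by `delta` of its range and record the scaled output change.
    Returns mu* (mean absolute elementary effect) per factor."""
    bounds = np.asarray(bounds, dtype=float)
    k = len(bounds)
    ee = np.zeros((r, k))
    for i in range(r):
        x01 = rng.uniform(0, 1 - delta, size=k)      # base point in [0, 1-delta]
        x = bounds[:, 0] + x01 * (bounds[:, 1] - bounds[:, 0])
        y0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta * (bounds[j, 1] - bounds[j, 0])
            ee[i, j] = (model(xp) - y0) / delta      # elementary effect
    return np.abs(ee).mean(axis=0)

# Hypothetical water-yield response: factor 0 ("Z") barely matters,
# factors 1 and 2 ("prec", "eto") dominate.
model = lambda x: 0.001 * x[0] + 0.8 * x[1] - 0.5 * x[2]
mu_star = morris_elementary_effects(model, [(0, 10), (300, 1500), (500, 1200)])
print(mu_star)  # prec and eto dominate; Z is negligible
```

Mapping mu* per sub-watershed, as the study does, then shows where each coefficient needs accurate local data.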
A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity
NASA Astrophysics Data System (ADS)
Tierney, G.; Posselt, D. J.; Booth, J. F.
2015-12-01
The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies the atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better-informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in control parameter space is performed via an ensemble of WRF runs coupled with
A stoichiometrically derived algal growth model and its global analysis.
Li, Xiong; Wang, Hao
2010-10-01
Organisms are composed of multiple chemical elements such as carbon, nitrogen, and phosphorus. The scarcity of any of these elements can severely restrict organismal and population growth. However, many trophic interaction models only consider carbon limitation via energy flow. In this paper, we construct an algal growth model with the explicit incorporation of light and nutrient availability to characterize both carbon and phosphorus limitations. We provide a global analysis of this model to illustrate how light and nutrient availability regulate algal dynamics. PMID:21077710
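The idea of growth co-limited by light (carbon) and phosphorus can be sketched with a generic Liebig/Droop-style quota model. The equations and parameter values below are illustrative stand-ins, not the paper's actual model:

```python
def algal_growth(days=60.0, dt=0.1, light=1.0, p0=0.03):
    """Liebig/Droop-style sketch: algal biomass A grows at a rate set by
    the scarcer of light-driven carbon fixation and the internal
    phosphorus cell quota Q (law of the minimum)."""
    A, Q, P = 0.1, 0.01, p0               # biomass, P:C cell quota, dissolved P
    q_min, mu_max, d = 0.004, 1.0, 0.05   # min quota, max growth, loss rate
    for _ in range(int(days / dt)):       # forward-Euler integration
        mu = mu_max * max(0.0, min(light, 1.0 - q_min / Q))
        uptake = 0.2 * P / (P + 0.01)     # Monod-type P uptake per unit biomass
        A += dt * (mu - d) * A            # growth minus loss
        Q += dt * (uptake - mu * Q)       # quota: uptake minus growth dilution
        P = max(P - dt * uptake * A, 0.0) # dissolved P drawdown
    return A, Q, P

A, Q, P = algal_growth()
print(round(A, 3), round(Q, 4), round(P, 4))
```

Varying `light` and `p0` reproduces the qualitative regimes the paper analyzes globally: light-limited growth when `light` is small, and phosphorus-limited growth once dissolved P is exhausted and the quota falls toward its minimum.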
A study of sensitivity to dissipation in a Global Climate Model of Saturn
NASA Astrophysics Data System (ADS)
Indurain, M.; Millour, E.; Spiga, A.; Guerlet, S.; Hourdin, F.
2014-04-01
Goals: Our overall goal is to build a Global Climate Model [GCM] to study the dynamics of Saturn's troposphere and stratosphere [Spiga et al. EPSC 2014, this issue]. A GCM basically consists of an interface between a hydrodynamical core, which numerically computes the solution to the Navier-Stokes equations, and physical parameterizations, in which the various forcings (radiative transfer, latent heat exchanges within clouds, subgrid-scale mixing) applied to the atmospheric fluid are calculated. Thus, a first step was to develop tailored physical parameterizations for Saturn's atmosphere [2]. We report here on part of the second step of building our Saturn GCM: testing the LMDz dynamical core [3] in the fast-rotating conditions of gas giants' atmospheres.
Indian plant germplasm on the global platter: an analysis.
Jacob, Sherry R; Tyagi, Vandana; Agrawal, Anuradha; Chakrabarty, Shyamal K; Tyagi, Rishi K
2015-01-01
, about 50% of the Indian-origin accessions deposited in SGSV are traditional varieties or landraces with defined traits which form the backbone of any crop gene pool. This paper is also attempting to correlate the global data on Indian-origin germplasm with the national germplasm export profile. The analysis from this paper is discussed with the perspective of possible implications in the access and benefit sharing regime of both the International Treaty on Plant Genetic Resources for Food and Agriculture and the newly enforced Nagoya Protocol under the Convention on Biological Diversity. PMID:25974270
A Meta-Analysis of Global Urban Land Expansion
Seto, Karen C.; Fragkias, Michail; Güneralp, Burak; Reilly, Michael K.
2011-01-01
The conversion of Earth's land surface to urban uses is one of the most irreversible human impacts on the global biosphere. It drives the loss of farmland, affects local climate, fragments habitats, and threatens biodiversity. Here we present a meta-analysis of 326 studies that have used remotely sensed images to map urban land conversion. We report a worldwide observed increase in urban land area of 58,000 km2 from 1970 to 2000. India, China, and Africa have experienced the highest rates of urban land expansion, and the largest change in total urban extent has occurred in North America. Across all regions and for all three decades, urban land expansion rates are higher than or equal to urban population growth rates, suggesting that urban growth is becoming more expansive than compact. Annual growth in GDP per capita drives approximately half of the observed urban land expansion in China but only moderately affects urban expansion in India and Africa, where urban land expansion is driven more by urban population growth. In high income countries, rates of urban land expansion are slower and increasingly related to GDP growth. However, in North America, population growth contributes more to urban expansion than it does in Europe. Much of the observed variation in urban expansion was not captured by either population, GDP, or other variables in the model. This suggests that contemporary urban expansion is related to a variety of factors difficult to observe comprehensively at the global level, including international capital flows, the informal economy, land use policy, and generalized transport costs. Using the results from the global model, we develop forecasts for new urban land cover using SRES Scenarios. Our results show that by 2030, global urban land cover will increase between 430,000 km2 and 12,568,000 km2, with an estimate of 1,527,000 km2 more likely. PMID:21876770
Vásquez, G A; Busschaert, P; Haberbeck, L U; Uyttendaele, M; Geeraerd, A H
2014-11-01
growth temperature of L. monocytogenes. Uncertainty in the dose-response relationship was not included in the analysis, hence the level of its influence cannot be assessed in the present research. Finally, a baseline global workflow for QMRA and sensitivity analysis is proposed. PMID:25173917
Spatial risk assessment for critical network infrastructure using sensitivity analysis
NASA Astrophysics Data System (ADS)
Möderl, Michael; Rauch, Wolfgang
2011-12-01
The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate the performance decrease under the investigated threat scenarios. Thereby, parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data for the same threat scenario, derived from structured interviews and cluster analysis of past events. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is likewise applicable to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.
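The merge of vulnerability and hazard maps can be illustrated with a toy raster. The values below are invented; a real application would use network transport model outputs and GIS hazard layers:

```python
import numpy as np

# Hypothetical 4x4 raster cells for a water supply network (values in [0, 1]).
# Vulnerability: performance drop of the network under the threat scenario;
# hazard: likelihood of the interfering process occurring in each cell.
vulnerability = np.array([[0.1, 0.2, 0.7, 0.9],
                          [0.0, 0.3, 0.8, 0.6],
                          [0.1, 0.1, 0.4, 0.2],
                          [0.0, 0.0, 0.1, 0.1]])
hazard = np.array([[0.2, 0.2, 0.9, 0.9],
                   [0.1, 0.3, 0.9, 0.8],
                   [0.1, 0.2, 0.3, 0.2],
                   [0.0, 0.1, 0.1, 0.1]])

risk = vulnerability * hazard        # cell-wise merge of the two maps
hotspots = np.argwhere(risk > 0.5)   # candidate zones for preventive measures
print(hotspots)                      # three cells exceed the 0.5 threshold
```

The cell-wise product is one simple merge rule; weighted or rank-based combinations are equally possible within the same workflow.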
Robust and sensitive video motion detection for sleep analysis.
Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard
2014-05-01
In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by a factor of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with a slight temporal misalignment (<1 s) of the starting time of one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.
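The reported metrics can be computed directly from per-frame confusion counts. The counts below are invented for illustration, not the paper's data; the Matthews correlation coefficient is preferred here because most sleep-video frames contain no movement, a strong class imbalance:

```python
from math import sqrt

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient: balanced quality measure for
    binary classification, robust to class imbalance."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def sensitivity(tp, fn):
    return tp / (tp + fn)   # true positive rate

def specificity(tn, fp):
    return tn / (tn + fp)   # true negative rate

# Hypothetical per-frame counts for one night of video:
print(round(mcc(tp=80, tn=900, fp=15, fn=20), 3))  # -> 0.802
```

With these counts, sensitivity is 0.80 and specificity about 0.98, yet a naive accuracy score would look near-perfect even for a detector that missed every movement, which is exactly why MCC is the headline metric.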
Fault sensitivity and wear-out analysis of VLSI systems
NASA Astrophysics Data System (ADS)
Choi, Gwan Seung
1994-07-01
This thesis describes simulation approaches to conduct fault sensitivity and wear-out failure analysis of VLSI systems. A fault-injection approach to study transient impact in VLSI systems is developed. Through simulated fault injection at the device level and subsequent fault propagation at the gate, functional, and software levels, it is possible to identify critical bottlenecks in dependability. Techniques to speed up the fault simulation and to perform statistical analysis of fault impact are developed. A wear-out simulation environment is also developed to closely mimic dynamic sequences of wear-out events in a device through time, to localize the weak locations/aspects of the target chip, and to allow generation of the TTF (time-to-failure) distribution of the VLSI chip as a whole. First, an accurate simulation of a target chip and its application code is performed to acquire trace data (real workload) on switch activity. Then, using this switch activity information, wear-out of each component in the entire chip is simulated using Monte Carlo techniques.
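The Monte Carlo wear-out step can be sketched as follows. As an illustration only (not the thesis's actual failure model), assume each component's lifetime is Weibull-distributed with a characteristic life that shrinks with its switching activity, and that the chip fails at its weakest component:

```python
import numpy as np

rng = np.random.default_rng(42)

def chip_ttf_distribution(activity, n_runs=10_000):
    """Monte Carlo sketch: draw a Weibull lifetime for every component in
    every run (scale inversely related to switching activity, i.e. higher
    activity means faster wear-out) and take the chip-level TTF as the
    minimum over components (weakest-link failure)."""
    activity = np.asarray(activity, dtype=float)
    scale = 10.0 / activity                 # hypothetical characteristic life
    lifetimes = scale * rng.weibull(2.0, size=(n_runs, len(activity)))
    return lifetimes.min(axis=1)            # chip TTF, one sample per run

# Four components with trace-derived (here: made-up) switching activities.
ttf = chip_ttf_distribution(activity=[0.9, 0.5, 0.2, 0.1])
print(ttf.mean())  # mean chip lifetime in the sketch's arbitrary time units
```

Sorting `ttf` gives the empirical TTF distribution; per-component failure counts would localize the weak spot, which here is the highest-activity component.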
Global point signature for shape analysis of carpal bones
Chaudhari, Abhijit J; Leahy, Richard M; Wise, Barton L; Lane, Nancy E; Badawi, Ramsey D; Joshi, Anand A
2014-01-01
We present a method based on spectral theory for the shape analysis of carpal bones of the human wrist. We represent the cortical surface of the carpal bone in a coordinate system based on the eigensystem of the two-dimensional Helmholtz equation. We employ a metric—global point signature (GPS)—that exploits the scale and isometric invariance of eigenfunctions to quantify overall bone shape. We use a fast finite-element-method to compute the GPS metric. We capitalize upon the properties of GPS representation—such as stability, a standard Euclidean (ℓ2) metric definition, and invariance to scaling, translation and rotation—to perform shape analysis of the carpal bones of ten women and ten men from a publicly-available database. We demonstrate the utility of the proposed GPS representation to provide a means for comparing shapes of the carpal bones across populations. PMID:24503490
Proteogenomic analysis and global discovery of posttranslational modifications in prokaryotes
Yang, Ming-kun; Yang, Yao-hua; Chen, Zhuo; Zhang, Jia; Lin, Yan; Wang, Yan; Xiong, Qian; Li, Tao; Ge, Feng; Bryant, Donald A.; Zhao, Jin-dong
2014-01-01
We describe an integrated workflow for proteogenomic analysis and global profiling of posttranslational modifications (PTMs) in prokaryotes and use the model cyanobacterium Synechococcus sp. PCC 7002 (hereafter Synechococcus 7002) as a test case. We found more than 20 different kinds of PTMs, and a holistic view of PTM events in this organism grown under different conditions was obtained without specific enrichment strategies. Among 3,186 predicted protein-coding genes, 2,938 gene products (>92%) were identified. We also identified 118 previously unidentified proteins and corrected 38 predicted gene-coding regions in the Synechococcus 7002 genome. This systematic analysis not only provides comprehensive information on protein profiles and the diversity of PTMs in Synechococcus 7002 but also provides some insights into photosynthetic pathways in cyanobacteria. The entire proteogenomics pipeline is applicable to any sequenced prokaryotic organism, and we suggest that it should become a standard part of genome annotation projects. PMID:25512518
Sensitivity of leaf size and shape to climate: Global patterns and paleoclimatic applications
Peppe, D.J.; Royer, D.L.; Cariglino, B.; Oliver, S.Y.; Newman, S.; Leight, E.; Enikolopov, G.; Fernandez-Burgos, M.; Herrera, F.; Adams, J.M.; Correa, E.; Currano, E.D.; Erickson, J.M.; Hinojosa, L.F.; Hoganson, J.W.; Iglesias, A.; Jaramillo, C.A.; Johnson, K.R.; Jordan, G.J.; Kraft, N.J.B.; Lovelock, E.C.; Lusk, C.H.; Niinemets, U.; Penuelas, J.; Rapson, G.; Wing, S.L.; Wright, I.J.
2011-01-01
Paleobotanists have long used models based on leaf size and shape to reconstruct paleoclimate. However, most models incorporate a single variable or use traits that are not physiologically or functionally linked to climate, limiting their predictive power. Further, they often underestimate paleotemperature relative to other proxies. Here we quantify leaf-climate correlations from 92 globally distributed, climatically diverse sites, and explore potential confounding factors. Multiple linear regression models for mean annual temperature (MAT) and mean annual precipitation (MAP) are developed and applied to nine well-studied fossil floras. We find that leaves in cold climates typically have larger, more numerous teeth, and are more highly dissected. Leaf habit (deciduous vs evergreen), local water availability, and phylogenetic history all affect these relationships. Leaves in wet climates are larger and have fewer, smaller teeth. Our multivariate MAT and MAP models offer moderate improvements in precision over univariate approaches (±4.0 vs 4.8 °C for MAT) and strong improvements in accuracy. For example, our provisional MAT estimates for most North American fossil floras are considerably warmer and in better agreement with independent paleoclimate evidence. Our study demonstrates that the inclusion of additional leaf traits that are functionally linked to climate improves paleoclimate reconstructions. This work also illustrates the need for better understanding of the impact of phylogeny and leaf habit on leaf-climate relationships.
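A multivariate leaf-climate regression of the kind described above can be sketched with ordinary least squares. The trait columns and values below are invented for illustration; the study's actual predictors, site data, and coefficients differ:

```python
import numpy as np

# Hypothetical site-level leaf traits: fraction of toothed species,
# mean leaf size (log mm^2), and tooth density.
X = np.array([[0.80, 2.9, 1.5],
              [0.55, 3.4, 0.9],
              [0.30, 3.8, 0.4],
              [0.10, 4.1, 0.2]])
mat = np.array([5.0, 12.0, 19.0, 25.0])  # mean annual temperature, deg C

# Multivariate model: MAT = b0 + b1*toothed + b2*size + b3*tooth_density.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, mat, rcond=None)
pred = A @ coef
print(np.round(pred - mat, 6))  # residuals: 4 points, 4 parameters -> exact fit
```

With the 92 real calibration sites there are far more observations than parameters, so the fit is not exact and the residual spread is what yields the reported ±4.0 °C precision; a fossil flora's traits are then pushed through the fitted `coef` to estimate paleo-MAT.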
A Test of Sensitivity to Convective Transport in a Global Atmospheric CO2 Simulation
NASA Technical Reports Server (NTRS)
Bian, H.; Kawa, S. R.; Chin, M.; Pawson, S.; Zhu, Z.; Rasch, P.; Wu, S.
2006-01-01
Two approximations to convective transport have been implemented in an offline chemistry transport model (CTM) to explore the impact on calculated atmospheric CO2 distributions. Global CO2 in the year 2000 is simulated using the CTM driven by assimilated meteorological fields from NASA's Goddard Earth Observation System Data Assimilation System, Version 4 (GEOS-4). The model simulates atmospheric CO2 by adopting the same CO2 emission inventory and dynamical modules as described in Kawa et al. (convective transport scheme denoted as Conv1). Conv1 approximates the convective transport by using the bulk convective mass fluxes to redistribute trace gases. The alternate approximation, Conv2, partitions fluxes into updraft and downdraft, as well as into entrainment and detrainment, and has the potential to yield a more realistic simulation of vertical redistribution through deep convection. Replacing Conv1 by Conv2 results in an overestimate of CO2 over biospheric sink regions. The largest discrepancies result in a CO2 difference of about 7.8 ppm in the July NH boreal forest, which is about 30% of the CO2 seasonality for that area. These differences are compared to those produced by emission scenario variations constrained by the framework of the Intergovernmental Panel on Climate Change (IPCC) to account for possible land use change and residual terrestrial CO2 sink. It is shown that the overestimated CO2 driven by Conv2 can be offset by introducing these supplemental emissions.
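A Conv1-style bulk mass flux redistribution can be sketched in one column. This is a deliberately simplified upwind scheme with made-up numbers, not the GEOS-4 scheme; its one essential property, shared with any valid transport operator, is that the flux-divergence form conserves column tracer mass:

```python
import numpy as np

def bulk_convective_mixing(q, mass_flux, dt, dp):
    """Bulk-flux sketch: an updraft mass flux (kg m-2 s-1) carries tracer
    from each layer into the one above. q is the mixing ratio per layer
    (surface first), dp the air mass per layer (kg m-2). The paired
    loss/gain terms make the scheme exactly mass-conserving."""
    q = np.asarray(q, dtype=float)
    flux = mass_flux * q[:-1]        # upward tracer flux at layer interfaces
    dq = np.zeros_like(q)
    dq[:-1] -= flux / dp             # each layer loses tracer to the one above
    dq[1:] += flux / dp              # and gains tracer from the one below
    return q + dt * dq

q0 = np.array([390.0, 385.0, 380.0, 378.0])  # ppm, hypothetical CO2 profile
q1 = bulk_convective_mixing(q0, mass_flux=0.01, dt=1800.0, dp=5000.0)
print(q1)  # surface CO2 is lofted upward; the column mean is unchanged
```

A Conv2-style scheme would replace the single `mass_flux` with separate updraft, downdraft, entrainment, and detrainment terms, changing where in the column the tracer is deposited while keeping the same conservation constraint.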
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the effort of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. This thesis also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and design of isolation kits for ground seismometers. The first part of this thesis is devoted to the description of methods for bringing the interferometer into the linear regime where collection of data becomes possible. States of longitudinal and angular controls of interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that should be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in
NASA Astrophysics Data System (ADS)
Harshan, S.; Roth, M.; Velasco, E.
2014-12-01
Forecasting of urban weather and climate is of great importance as our cities become more populated, and the combined effects of global warming and local land-use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global-scale models, urban parameterization schemes are used to represent urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address these issues, the Town Energy Balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the "improved Sobol' global variance decomposition method". The analysis showed that parameters related to the road, roof and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model
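The Sobol' variance decomposition used for screening in the abstract attributes a share of the output variance to each input factor. As a minimal sketch (the estimator form follows Saltelli's 2010 formulation; the function names and the toy additive model are illustrative, not from the paper), first-order indices can be estimated from two independent sample matrices:

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=20000, seed=0):
    """Estimate first-order Sobol' indices S_i = V_i / V(Y) by Monte Carlo."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_params))  # base sample
    B = rng.uniform(size=(n_samples, n_params))  # independent resample
    fA, fB = model(A), model(B)
    total_var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]          # A with column i taken from B
        # Saltelli (2010) estimator for the first-order effect of factor i
        S[i] = np.mean(fB * (model(AB_i) - fA)) / total_var
    return S

# Toy additive model on U(0,1) inputs; analytic indices are 0.8, 0.2, 0.0
def toy(X):
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2]

print(sobol_first_order(toy, n_params=3))
```

In a screening setting like the one described, factors whose estimated index falls below a chosen threshold would be fixed at nominal values before the calibration step.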
Global Topology Analysis of Pancreatic Zymogen Granule Membrane Proteins
Chen, Xuequn; Ulintz, Peter J.; Simon, Eric S.; Williams, John A.; Andrews, Philip C.
2008-01-01
The zymogen granule is the specialized organelle in pancreatic acinar cells for digestive enzyme storage and regulated secretion and is a classic model for studying secretory granule function. Our long term goal is to develop a comprehensive architectural model for zymogen granule membrane (ZGM) proteins that would direct new hypotheses for subsequent functional studies. Our initial proteomics analysis focused on identification of proteins from purified ZGM (Chen, X., Walker, A. K., Strahler, J. R., Simon, E. S., Tomanicek-Volk, S. L., Nelson, B. B., Hurley, M. C., Ernst, S. A., Williams, J. A., and Andrews, P. C. (2006) Organellar proteomics: analysis of pancreatic zymogen granule membranes. Mol. Cell. Proteomics 5, 306–312). In the current study, a new global topology analysis of ZGM proteins is described that applies isotope enrichment methods to a protease protection protocol. Our results showed that tryptic peptides of ZGM proteins were separated into two distinct clusters according to their isobaric tag for relative and absolute quantification (iTRAQ) ratios for proteinase K-treated versus control zymogen granules. The low iTRAQ ratio cluster included cytoplasm-orientated membrane and membrane-associated proteins including myosin V, vesicle-associated membrane proteins, syntaxins, and all the Rab proteins. The second cluster having unchanged ratios included predominantly luminal proteins. Because quantification is at the peptide level, this technique is also capable of mapping both cytoplasm- and lumen-orientated domains from the same transmembrane protein. To more accurately assign the topology, we developed a statistical mixture model to provide probabilities for identified peptides to be cytoplasmic or luminal based on their iTRAQ ratios. By implementing this approach to global topology analysis of ZGM proteins, we report here an experimentally constrained, comprehensive topology model of identified zymogen granule membrane proteins. This model
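The statistical mixture model described in the abstract assigns each peptide a probability of being cytoplasmic or luminal from its iTRAQ ratio. As an illustrative sketch only (the paper's actual model and parameters are not given here), a two-component Gaussian mixture fit by EM on log-ratios captures the idea: the low-ratio component corresponds to cytoplasm-oriented peptides degraded by proteinase K, and each peptide's posterior responsibility for that component is its probability of being cytoplasmic.

```python
import numpy as np

def em_two_gaussians(x, n_iter=200):
    """Fit a two-component 1-D Gaussian mixture by EM; x is an array of log-ratios."""
    mu = np.array([np.percentile(x, 25), np.percentile(x, 75)])  # crude init
    sigma = np.array([np.std(x), np.std(x)]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma, r

def p_cytoplasmic(x):
    """Posterior probability each peptide belongs to the low-ratio (cytoplasmic) component."""
    _, mu, _, r = em_two_gaussians(x)
    return r[:, np.argmin(mu)]
```

Peptides with a high posterior for the low-mean component would be called cytoplasm-oriented; because the assignment is per peptide, a single transmembrane protein can contribute peptides to both classes, as in the paper's topology mapping.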
Global Analysis of the Staphylococcus aureus Response to Mupirocin