Science.gov

Sample records for global sensitivity analysis

  1. Sensitivity analysis, optimization, and global critical points

    SciTech Connect

    Cacuci, D.G.

    1989-11-01

    The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has been undoubtedly felt by many of the nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments for both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.

  2. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
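
    The first-order and total-effect indices mentioned above can be estimated with a standard pick-freeze (Saltelli/Jansen-type) scheme. The sketch below is a minimal illustration of that estimator on a toy function; the function, sample size and uniform input ranges are assumptions for illustration only, not the wind-energy model or the exact estimators used in the study.

```python
import numpy as np

def model(x):
    # Toy stand-in for a wind-energy output (e.g. lifetime energy production);
    # the real study uses Weibull parameters, losses, availability, etc.
    return x[:, 0] + 2.0 * x[:, 1] + x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(0.0, 1.0, (n, d))   # two independent base samples
B = rng.uniform(0.0, 1.0, (n, d))

yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S1, ST = np.zeros(d), np.zeros(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]             # "pick-freeze": column i taken from B
    yABi = model(ABi)
    # Saltelli estimator for first-order, Jansen estimator for total-effect
    S1[i] = np.mean(yB * (yABi - yA)) / var_y
    ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var_y

print("first-order:", S1.round(3), "total-effect:", ST.round(3))
```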

  3. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  4. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling, though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test, or method of Morris; Regional Sensitivity Analysis; and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
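
    A bootstrap-based convergence check of the kind described above can be set up generically: resample the rows of the GSA sample with replacement, recompute the indices, and track how stable the index values, the factor ranking and the screening decisions are. The function below is a minimal sketch under assumed conventions (any index estimator can be plugged in; the 0.05 screening threshold is purely illustrative, not a recommended value).

```python
import numpy as np

def bootstrap_convergence(compute_indices, X, y, n_boot=200, screen_thr=0.05, seed=0):
    """Bootstrap the rows of a GSA sample and summarise the stability of
    index values, factor ranking, and screening decisions.

    compute_indices(X, y) -> one sensitivity index per input factor;
    screen_thr is an illustrative screening threshold, not a recommended value.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    s_full = compute_indices(X, y)
    rank_full = np.argsort(np.argsort(-s_full))      # rank 0 = most influential
    screen_full = s_full < screen_thr                # flagged as insensitive

    values, rank_ok, screen_ok = [], [], []
    for _ in range(n_boot):
        rows = rng.integers(0, n, n)                 # resample with replacement
        s = compute_indices(X[rows], y[rows])
        values.append(s)
        rank_ok.append(np.argsort(np.argsort(-s)) == rank_full)
        screen_ok.append((s < screen_thr) == screen_full)

    values = np.array(values)
    return {
        "index_spread": values.max(0) - values.min(0),       # value convergence
        "ranking_agreement": np.mean(rank_ok, axis=0),       # ranking convergence
        "screening_agreement": np.mean(screen_ok, axis=0),   # screening convergence
    }
```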

  5. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
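
    The emulate-then-analyse idea described above can be sketched as follows: fit a Gaussian process to a modest number of expensive runs, then estimate variance-based indices on the cheap emulator. This is only a hedged illustration using scikit-learn; the toy simulator, design size and kernel are assumptions, and the paper's Bayesian treatment and harmony search optimisation are not reproduced.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    # Placeholder for a costly run (e.g. computing SampEn for a generated schedule).
    return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

rng = np.random.default_rng(1)
d = 3
X_train = rng.uniform(0, 1, (200, d))           # a modest design of expensive runs
y_train = expensive_simulation(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3] * d),
                              normalize_y=True).fit(X_train, y_train)

# Variance-based indices estimated on the cheap emulator instead of the simulator
n = 20_000
A, B = rng.uniform(0, 1, (n, d)), rng.uniform(0, 1, (n, d))
yA, yB = gp.predict(A), gp.predict(B)
var_y = np.var(np.concatenate([yA, yB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    yABi = gp.predict(ABi)
    S1 = np.mean(yB * (yABi - yA)) / var_y
    print(f"emulated first-order index for x{i}: {S1:.3f}")
```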

  6. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule's regularity for a population. We show how to tune an activity's regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  7. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subject to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with that of a local derivative-based sensitivity analysis method classically used for this type of system.

  8. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step are given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.

  9. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
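
    The sensitivity index described above (relative change of the output divided by the relative change of one input) can be written as a one-line ratio. The sketch below is illustrative only: the `model` callable and the parameter names are hypothetical placeholders, not the actual VWC model.

```python
def relative_sensitivity(model, params, name, delta=0.05):
    """Local sensitivity index as described above: the relative change of the
    output divided by the relative change of one input (here +delta, i.e. +5%).

    `model(params) -> scalar` and the parameter names are illustrative placeholders,
    not the actual VWC model of the study.
    """
    ref_out = model(params)
    perturbed = dict(params, **{name: params[name] * (1.0 + delta)})
    new_out = model(perturbed)
    return ((new_out - ref_out) / ref_out) / delta   # >0 direct, <0 inverse response

# Example with a toy stand-in for virtual water content = evapotranspiration / yield
toy_vwc = lambda p: p["evapotranspiration"] / p["yield"]
print(relative_sensitivity(toy_vwc, {"evapotranspiration": 450.0, "yield": 3.2}, "yield"))
```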

  10. Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.

    PubMed

    Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M

    2016-01-01

    Dynamic global sensitivity analysis (GSA) was performed for three different dynamic bioreactor models of increasing complexity: a fermenter for bioethanol production; a bioreactor network, in which two types of bioreactors were considered (aerobic for biomass production and anaerobic for bioethanol production); and a co-fermentation bioreactor. The aim was to identify the parameters that most contribute to uncertainty in model outputs. Sobol's method was used to calculate time profiles for sensitivity indices. Numerical results have shown the time-variant influence of uncertain parameters on model variables, and the most influential model parameters have been determined. For the model of the bioethanol fermenter, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to uncertainty in the model variables; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); whereas λ (glucose-to-total sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.

  11. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns to areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on the decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We find that there is a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The Radiative Transfer Model (RTM) behaves more non-linearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water- and energy-rich environments.

  12. Simulation of the global contrail radiative forcing: A sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.

    2012-12-01

    The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The state-of-the-art Community Atmospheric Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.

  13. How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Razavi, Saman

    2016-04-01

    Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques are highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
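
    The variogram idea behind VARS can be illustrated with a crude directional variogram: for one input dimension, evaluate the model at pairs of points separated by a lag h and average the squared differences. The sketch below is a simplified illustration under assumed uniform inputs; the actual VARS framework uses a star-based sampling plan and integrated variograms (IVARS), which are not reproduced here.

```python
import numpy as np

def directional_variogram(model, d, dim, lags, n_base=2000, seed=0):
    """Simplified illustration of the variogram idea behind VARS: for one input
    dimension `dim`, gamma(h) = 0.5 * E[(y(x + h*e_dim) - y(x))^2], with base
    points sampled uniformly in the unit hypercube."""
    rng = np.random.default_rng(seed)
    gamma = []
    for h in lags:
        x = rng.uniform(0, 1, (n_base, d))
        x[:, dim] = rng.uniform(0, 1 - h, n_base)   # keep x + h*e_dim inside [0, 1]
        x_shift = x.copy()
        x_shift[:, dim] += h
        gamma.append(0.5 * np.mean((model(x_shift) - model(x)) ** 2))
    return np.array(gamma)

toy = lambda x: np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1]   # illustrative response surface
lags = np.array([0.05, 0.1, 0.2, 0.3])
for i in range(2):
    g = directional_variogram(toy, d=2, dim=i, lags=lags)
    print(f"x{i}: variogram {g.round(3)}, mean over lags ~ {g.mean():.3f}")
```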

  14. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known. Decision-making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations. The FAST method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability on their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
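
    The core of FAST can be sketched in a few lines: each factor is assigned a distinct frequency, all factors are varied along a single search curve, and a factor's first-order index is the share of output variance found at the harmonics of its frequency. The frequency set, sample size and toy response below are illustrative assumptions; production FAST implementations use carefully constructed, interference-free frequency sets and resampling.

```python
import numpy as np

# Simplified sketch of the FAST idea on a toy "transmission loss" function.
omega = np.array([11, 21, 29])          # one interference-free frequency per factor
M, N = 4, 4001                          # number of harmonics, number of samples
s = np.pi * (2 * np.arange(N) + 1 - N) / N                 # points along the search curve
X = 0.5 + np.arcsin(np.sin(np.outer(s, omega))) / np.pi    # x_i(s) mapped into [0, 1]

y = np.sin(2 * np.pi * X[:, 0]) + 2.0 * X[:, 1] + 0.2 * X[:, 2]   # toy response, illustrative

j = np.arange(1, (N - 1) // 2 + 1)
A = np.array([np.mean(y * np.cos(k * s)) for k in j])
B = np.array([np.mean(y * np.sin(k * s)) for k in j])
spectrum = A ** 2 + B ** 2
total = spectrum.sum()
for i, w in enumerate(omega):
    harmonics = np.arange(1, M + 1) * w
    S1 = spectrum[harmonics - 1].sum() / total       # first-order FAST index
    print(f"x{i}: S1 ~ {S1:.3f}")
```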

  15. Comparison of Applying Four Reduced Order Models to a Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Oladyshkin, S.; Liu, Y.; Pau, G. S. H.

    2014-12-01

    This study focuses on the comparison of four reduced order models (ROMs) applied to global sensitivity analysis (GSA). ROMs are one way to improve computational efficiency in many-query applications such as optimization, uncertainty quantification, sensitivity analysis, and inverse modeling, where the computational demand can become large. The four ROM methods are: arbitrary Polynomial Chaos (aPC), Gaussian process regression (GPR), cut high dimensional model representation (HDMR), and random sample HDMR. The discussion is mainly based on a global sensitivity analysis performed for a hypothetical large-scale CO2 storage project. Pros and cons of each method are discussed, and suggestions are made on how each method should be applied individually or in combination.

  16. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants.

  17. Global sensitivity analysis in wastewater treatment plant model applications: prioritizing sources of uncertainty.

    PubMed

    Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to satisfactorily decompose the variance of the plant performance criteria (with R² > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors include ash content and influent inert particulate COD, among others, which are largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of significant parameters to the variance, e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating appropriate sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, global sensitivity analysis proved to be a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways of reducing uncertainties in plant performance. This information can help engineers design robust WWTPs.
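
    The standardized regression coefficients (SRCs) used above can be computed by standardizing inputs and output, fitting an ordinary least-squares model, and checking the R² of the fit (values near 1 justify the linear decomposition). The sketch below uses a toy dataset; the inputs and model are illustrative, not the BSM1 plant.

```python
import numpy as np

def standardized_regression_coefficients(X, y):
    """Standardize inputs and output, fit ordinary least squares, and return the
    coefficients (SRCs) plus the R^2 of the fit; SRC_i^2 approximates the share of
    output variance explained by factor i when R^2 is close to 1."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(ys)), Xs])
    coef, *_ = np.linalg.lstsq(design, ys, rcond=None)
    y_hat = design @ coef
    r2 = 1.0 - np.sum((ys - y_hat) ** 2) / np.sum(ys ** 2)
    return coef[1:], r2

# Toy example standing in for a plant performance criterion (e.g. sludge production)
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, (5000, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.05, 5000)
src, r2 = standardized_regression_coefficients(X, y)
print("SRCs:", src.round(2), "R^2:", round(r2, 3))
```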

  18. A sensitivity analysis of key natural factors in the modeled global acetone budget

    NASA Astrophysics Data System (ADS)

    Brewer, J. F.; Bishop, M.; Kelp, M.; Keller, C. A.; Ravishankara, A. R.; Fischer, E. V.

    2017-02-01

    Acetone is one of the most abundant carbonyl compounds in the atmosphere, and it serves as an important source of HOx (OH + HO2) radicals in the upper troposphere and a precursor for peroxyacetyl nitrate. We present a global sensitivity analysis targeted at several major natural source and sink terms in the global acetone budget to find the input factor or factors to which the simulated acetone mixing ratio is most sensitive. The ranges of the input factors were taken from the literature. We calculated the influence of these factors in terms of their elementary effects on model output. Of the six factors tested here, the four with the highest contribution to total global annual model sensitivity are direct emissions of acetone from the terrestrial biosphere, acetone loss to photolysis, the concentration of acetone in the ocean mixed layer, and the dry deposition of acetone to ice-free land. The direct emissions of acetone from the terrestrial biosphere are globally important in determining acetone mixing ratios, but their importance varies seasonally outside the tropics. Photolysis is most influential in the upper troposphere. Additionally, the influence of the oceanic mixed layer concentration is relatively invariant between seasons, compared to the other factors tested. Monoterpene oxidation in the troposphere, despite the significant uncertainties in acetone yield in this process, is responsible for only a small amount of model uncertainty in the budget analysis.
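
    Elementary effects, as used above for ranking, can be illustrated with a crude one-at-a-time scheme: perturb each factor by a fixed step from a set of random base points and summarise the mean of |EE| (overall influence) and the standard deviation of EE (nonlinearity or interaction). The sketch below is an assumption-laden toy, not the trajectory-based Morris design or the acetone budget model.

```python
import numpy as np

def elementary_effects(model, d, n_repeats=50, delta=0.1, seed=3):
    """Crude radial one-at-a-time version of elementary-effects (Morris) screening:
    for each repeat, draw a base point and perturb one factor at a time by `delta`;
    report the mean of |EE| (influence) and the std of EE (nonlinearity/interaction).
    A production implementation would use Morris trajectories or optimised designs."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((n_repeats, d))
    for r in range(n_repeats):
        x = rng.uniform(0, 1 - delta, d)
        y0 = model(x)
        for i in range(d):
            x_i = x.copy()
            x_i[i] += delta
            ee[r, i] = (model(x_i) - y0) / delta
    return np.abs(ee).mean(0), ee.std(0)   # mu*, sigma

toy_budget = lambda x: 2.0 * x[0] + x[1] ** 2 + 0.1 * x[0] * x[2]   # illustrative only
mu_star, sigma = elementary_effects(toy_budget, d=3)
print("mu*:", mu_star.round(2), "sigma:", sigma.round(2))
```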

  19. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling

    PubMed Central

    Sumner, T.; Shephard, E.; Bogle, I. D. L.

    2012-01-01

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified. PMID:22491976

  20. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    SciTech Connect

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  1. A Methodology For Performing Global Uncertainty And Sensitivity Analysis In Systems Biology

    PubMed Central

    Marino, Simeone; Hogue, Ian B.; Ray, Christian J.; Kirschner, Denise E.

    2008-01-01

    The accuracy of results from mathematical and computer models of biological systems is often complicated by the presence of uncertainties in the experimental data that are used to estimate parameter values. Current mathematical modeling approaches typically use either single-parameter or local sensitivity analyses. However, these methods do not accurately assess uncertainty and sensitivity in the system because, by default, they hold all other parameters fixed at baseline values. Using the techniques described here, we demonstrate how a multi-dimensional parameter space can be studied globally so that all uncertainties can be identified. Further, uncertainty and sensitivity analysis techniques can help to identify and ultimately control uncertainties. In this work we develop methods for applying existing analytical tools to perform analyses on a variety of mathematical and computer models. We compare two specific types of global sensitivity analysis indexes that have proven to be among the most robust and efficient. Through familiar and new examples of mathematical and computer models, we provide a complete methodology for performing these analyses, in both deterministic and stochastic settings, and propose novel techniques to handle problems encountered during these types of analyses. PMID:18572196
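
    One widely used index family for monotonic models of this kind is the partial rank correlation coefficient (PRCC), typically computed on a Latin hypercube sample. The sketch below is a generic, hedged PRCC computation (rank-transform, regress out the other factors, correlate the residuals) on a toy model; it is not the authors' code and the example model is illustrative.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficients: rank-transform inputs and output,
    then for each factor correlate the residuals after regressing out all the
    other (ranked) factors. A standard choice for monotonic, nonlinear models."""
    n, d = X.shape
    R = np.column_stack([rankdata(X[:, j]) for j in range(d)])
    ry = rankdata(y)
    out = np.zeros(d)
    for i in range(d):
        others = np.column_stack([np.ones(n), np.delete(R, i, axis=1)])
        beta_x, *_ = np.linalg.lstsq(others, R[:, i], rcond=None)
        beta_y, *_ = np.linalg.lstsq(others, ry, rcond=None)
        res_x = R[:, i] - others @ beta_x
        res_y = ry - others @ beta_y
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy monotonic model standing in for a biological simulation output
rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (1000, 3))
y = np.exp(2 * X[:, 0]) + X[:, 1] + 0.05 * rng.normal(size=1000)
print("PRCC:", prcc(X, y).round(2))
```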

  2. Global sensitivity analysis of the dispersion maximum position of the PCFs with circular holes

    NASA Astrophysics Data System (ADS)

    Guryev, Igor; Sukhoivanov, Igor; Andrade Lucio, Jose A.; Vargas Rodrigues, Everardo; Shulika, Oleksiy; Mata Chavez, Ruth I.; Baca Montero, Eric R.

    2015-08-01

    Microstructured fibers have recently become popular due to their numerous applications in fiber lasers [1], supercontinuum generation [2] and pulse reshaping [3]. One of the most important properties of such fibers is their dispersion. Fine tuning of the dispersion (i.e. dispersion management) is one of the crucial peculiarities of photonic crystal fibers (PCFs) [4], which are a particular case of microstructured fibers. In recent years, various PCF designs possessing specially designed dispersion shapes have been presented [5-7]. However, no universal technique exists that would allow tuning the PCF dispersion without using optimization methods. In our work, we investigate the sensitivity of the PCF dispersion with respect to variations of its basic parameters. This knowledge allows fine-tuning the position of a local maximum of the PCF dispersion while keeping other properties unchanged. The work is organized as follows. In the first section we discuss the dispersion computation method that is suitable for the global sensitivity analysis. The second section presents the global sensitivity analysis for this specific case. We also discuss the possible selection of the variable parameters.

  3. Development and sensitivity analysis of a global drinking water quality index.

    PubMed

    Rickwood, C J; Carr, G M

    2009-09-01

    The UNEP GEMS/Water Programme is the leading international agency responsible for the development of water quality indicators and maintains the only global database of water quality for inland waters (GEMStat). The protection of source water quality for domestic use (drinking water, abstraction, etc.) was identified by GEMS/Water as a priority for assessment. A composite index was developed to assess source water quality across a range of inland water types, globally, and over time. The approach for development was three-fold: (1) select guidelines from the World Health Organisation that are appropriate for assessing global water quality for human health, (2) select variables from GEMStat that have an appropriate guideline and reasonable global coverage, and (3) determine, on an annual basis, an overall index rating for each station using the water quality index equation endorsed by the Canadian Council of Ministers of the Environment. The index measures the frequency and extent to which variables exceed their respective WHO guidelines at each individual monitoring station included within GEMStat, allowing both spatial and temporal assessment of global water quality. Development of the index was followed by preliminary sensitivity analysis and verification of the index against real water quality data.

  4. Global sensitivity analysis of ozone, HO2, and OH during ARCTAS campaign

    NASA Astrophysics Data System (ADS)

    Christian, K. E.; Mao, J.; Brune, W. H.

    2015-12-01

    Modeling the chemical state of the atmosphere is a complicated endeavor due to the complex, non-linear interactions between meteorology, emissions, and kinetics that govern trace gas concentrations. Given the rapid environmental changes taking place, the Arctic is one area of particular interest with regard to climate and atmospheric composition. To observe these changes to the Arctic atmosphere, NASA funded the Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign (2008). As part of the mission, measurements of oxidative factors (hydroxyl (OH) and hydroperoxyl (HO2) abundances) were taken using the Airborne Tropospheric Hydrogen Oxides Sensor (ATHOS) aboard the NASA DC-8. Using GEOS-Chem, a popular global chemical transport model, we perform a global sensitivity analysis for the period of the ARCTAS campaign, allowing non-linear interactions between input factors to be accounted for and quantified in the analysis. Sensitivities are determined for around 50 model input factors, and for combinations of pairs of input factors, using the Random Sampling - High Dimensional Model Representation (RS-HDMR) method. We calculate the uncertainty in these oxidative factors, in ozone, and in the ozone and hydroxyl production rates, and we determine the sensitivity of these quantities, and of the differences between the measured and modeled oxidative factors, to model inputs in meteorology, emissions, and chemistry. This presentation will include a solid estimate of GEOS-Chem model uncertainty for the period of the ARCTAS campaign, the emissions, meteorology, or chemistry to which oxidative properties are most sensitive for these periods, and the factors to which the differences between the modeled and measured oxidative factors are most sensitive.

  5. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, Variance-Based Sensitivity Analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/. Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.

  6. Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model

    DOE PAGES

    Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...

    2016-04-01

    Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.

  7. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37--38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and

  8. Decomposition method of complex optimization model based on global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu

    2014-07-01

    Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems or components. However, numerous coupling variables appear among the decomposed sub-models, which makes decomposed optimization inefficient and its results poor. Although some collaborative optimization methods have been proposed to handle the coupling variables, there is a lack of strategies for reducing the coupling degree among the decomposed sub-models at the moment a complex optimization model is first decomposed. Therefore, this paper proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed based on the principle of minimizing the sum of sensitivities between the design functions and design variables of different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as much as possible, to reduce the impact on other sub-models caused by changes of coupling variables in one sub-model. Two different collaborative optimization models of a gear reducer were built separately in the multidisciplinary design optimization software iSIGHT; the optimization results show that the decomposition method proposed in this paper requires fewer analyses and increases computational efficiency by 29.6%. The new decomposition method is also successfully applied to the complex optimization problem of hydraulic excavator working devices, which shows that the proposed approach can reduce the mutual coupling degree between sub-models. This research proposes a decomposition method based on global sensitivity information, which minimizes the linkages among sub-models after decomposition, provides a reference for decomposing complex optimization models, and has practical engineering significance.

  9. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and by the assumption that all of the dimensions are independent. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
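
    Monte Carlo Filtering as described above can be sketched generically: split the sampled inputs into behavioural and non-behavioural sets according to an output criterion, then score each input by the distance between the two conditional input distributions (here a Kolmogorov-Smirnov statistic). The criterion, model and sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

def monte_carlo_filtering(X, y, behavioural):
    """Monte Carlo filtering / regional sensitivity analysis: split the sampled
    inputs into 'behavioural' and 'non-behavioural' sets according to the output,
    then score each input by the Kolmogorov-Smirnov distance between the two
    conditional input distributions (large distance = influential input)."""
    mask = behavioural(y)
    return np.array([ks_2samp(X[mask, i], X[~mask, i]).statistic
                     for i in range(X.shape[1])])

# Illustrative stand-in for a flight-system metric with a pass/fail criterion
rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (5000, 4))
y = X[:, 0] + 0.2 * X[:, 1] + 0.01 * X[:, 2]
ks = monte_carlo_filtering(X, y, behavioural=lambda out: out < np.median(out))
print("KS distances per input:", ks.round(2))
```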

  10. Sensitivity analysis

    MedlinePlus

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  11. LCA of emerging technologies: addressing high uncertainty on inputs' variability when performing global sensitivity analysis.

    PubMed

    Lacirignola, Martino; Blanc, Philippe; Girard, Robin; Pérez-López, Paula; Blanc, Isabelle

    2017-02-01

    In the life cycle assessment (LCA) context, global sensitivity analysis (GSA) has been identified by several authors as a relevant practice to enhance the understanding of a model's structure and to ensure the reliability and credibility of LCA results. GSA allows establishing a ranking among the input parameters according to their influence on the variability of the output. Such a feature is of particular interest when aiming to define parameterized LCA models. When performing a GSA, the description of the variability of each input parameter may affect the results. This aspect is critical when studying new products or emerging technologies, where data regarding the model inputs are very uncertain and may cause misleading GSA outcomes, such as inappropriate input rankings. A systematic assessment of this sensitivity issue is now proposed. We develop a methodology to analyze the sensitivity of the GSA results (i.e. the stability of the ranking of the inputs) with respect to the description of those inputs (i.e. the definition of their inherent variability). With this research, we aim at enriching the debate on the application of GSA to LCAs affected by high uncertainties. We illustrate its application with a case study aiming at the elaboration of a simple model expressing the life cycle greenhouse gas emissions of enhanced geothermal systems (EGS) as a function of a few key parameters. Our methodology allows identifying the key inputs of the LCA model, taking into account the uncertainty related to their description.
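
    The stability of a GSA ranking with respect to the description of the inputs can be probed by recomputing the ranking under alternative input descriptions and measuring agreement between rankings (here a Spearman correlation against a reference description). The function below is a hedged sketch: `sample_inputs`, `compute_indices` and the model are placeholders for whatever sampler, estimator and LCA model the analyst uses.

```python
import numpy as np
from scipy.stats import spearmanr

def ranking_stability(compute_indices, sample_inputs, model, descriptions, n=10_000, seed=6):
    """Recompute a GSA ranking under several alternative descriptions of the input
    variability and report how well the rankings agree (Spearman rho against the
    first description). `sample_inputs(description, n, rng) -> X` and
    `compute_indices(X, y) -> indices` are user-supplied placeholders."""
    rng = np.random.default_rng(seed)
    rankings = []
    for desc in descriptions:
        X = sample_inputs(desc, n, rng)
        s = compute_indices(X, model(X))
        rankings.append(np.argsort(np.argsort(-s)))   # rank 0 = most influential input
    return [spearmanr(rankings[0], r)[0] for r in rankings[1:]]
```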

  12. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    DOE PAGES

    Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; ...

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  13. Global sensitivity analysis, probabilistic calibration, and predictive assessment for the data assimilation linked ecosystem carbon model

    SciTech Connect

    Safta, C.; Ricciuto, Daniel M.; Sargsyan, Khachik; Debusschere, B.; Najm, H. N.; Williams, M.; Thornton, Peter E.

    2015-07-01

    In this paper we propose a probabilistic framework for an uncertainty quantification (UQ) study of a carbon cycle model and focus on the comparison between steady-state and transient simulation setups. A global sensitivity analysis (GSA) study indicates the parameters and parameter couplings that are important at different times of the year for quantities of interest (QoIs) obtained with the data assimilation linked ecosystem carbon (DALEC) model. We then employ a Bayesian approach and a statistical model error term to calibrate the parameters of DALEC using net ecosystem exchange (NEE) observations at the Harvard Forest site. The calibration results are employed in the second part of the paper to assess the predictive skill of the model via posterior predictive checks.

  14. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  15. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  16. Maximising the value of computer experiments using multi-method global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, F.; Iwema, J.; Rosolem, R.; Wagener, T.

    2015-12-01

    Global Sensitivity Analysis (GSA) is increasingly recognised as an essential technique for a structured and quantitative approach to the calibration and diagnostic evaluation of environmental models. However, the implementation and interpretation of GSA is complicated by a number of choices that users need to make and for which multiple, equally sensible, options are often available. These choices include, first of all, the choice of the GSA method, as well as many implementation details such as the definition of the sampling space and strategy. The issue is exacerbated by computational complexity, in terms of both the computing time and the storage space needed to run the model, which might strongly constrain the number of experiments that can be afforded. While several algorithmic improvements can be adopted to reduce the computing burden of specific GSA methods, in this talk we discuss how a multi-method approach can be established to maximise the information gathered from an individual sample of model evaluations. Using the GSA of a land surface model as an example, we show how different analytical and approximation techniques can be applied sequentially to the same sample of model inputs and outputs, providing complementary information about the model behaviour from different angles, and allowing the impact of the choices made to generate the sample to be tested. We further expand our analysis to show how GSA is interconnected with model calibration and uncertainty analysis, so that a careful design of the simulation experiment can be used to address different questions simultaneously.

  17. Global sensitivity analysis of bandpass and antireflection coating manufacturing by numerical space filling designs.

    PubMed

    Vasseur, Olivier; Cathelinaud, Michel; Claeys-Bruno, Magalie; Sergent, Michelle

    2011-03-20

    We present the effectiveness of global sensitivity analyses of optical coatings manufacturing to assess the robustness of filters by computer experiments. The most critical interactions of layers are determined for a 29 quarter-wave layer bandpass filter and for an antireflection coating with eight non-quarter-wave layers. Two monitoring techniques with the associated production performances are considered, and their influence on the interactions classification is discussed. Global sensitivity analyses by numerical space filling designs give clues to improve filter manufacturing against error effects and to assess the potential robustness of the coatings.

  18. Global sensitivity analysis and Bayesian parameter inference for solute transport in porous media colonized by biofilms

    NASA Astrophysics Data System (ADS)

    Younes, A.; Delay, F.; Fajraoui, N.; Fahs, M.; Mara, T. A.

    2016-08-01

    The concept of dual flowing continuum is a promising approach for modeling solute transport in porous media that includes biofilm phases. The highly dispersed transit time distributions often generated by these media are taken into consideration by simply stipulating that advection-dispersion transport occurs through both the porous and the biofilm phases. Both phases are coupled but assigned with contrasting hydrodynamic properties. However, the dual flowing continuum suffers from intrinsic equifinality in the sense that the outlet solute concentration can be the result of several parameter sets of the two flowing phases. To assess the applicability of the dual flowing continuum, we investigate how the model behaves with respect to its parameters. For the purpose of this study, a Global Sensitivity Analysis (GSA) and a Statistical Calibration (SC) of model parameters are performed for two transport scenarios that differ by the strength of interaction between the flowing phases. The GSA is shown to be a valuable tool to understand how the complex system behaves. The results indicate that the rate of mass transfer between the two phases is a key parameter of the model behavior and influences the identifiability of the other parameters. For weak mass exchanges, the output concentration is mainly controlled by the velocity in the porous medium and by the porosity of both flowing phases. In the case of large mass exchanges, the kinetics of this exchange also controls the output concentration. The SC results show that transport with large mass exchange between the flowing phases is more likely affected by equifinality than transport with weak exchange. The SC also indicates that weakly sensitive parameters, such as the dispersion in each phase, can be accurately identified. Removing them from calibration procedures is not recommended because it might result in biased estimations of the highly sensitive parameters.

  19. A global sensitivity analysis of the PlumeRise model of volcanic plumes

    NASA Astrophysics Data System (ADS)

    Woodhouse, Mark J.; Hogg, Andrew J.; Phillips, Jeremy C.

    2016-10-01

    Integral models of volcanic plumes allow predictions of plume dynamics to be made and the rapid estimation of volcanic source conditions from observations of the plume height by model inversion. Here we introduce PlumeRise, an integral model of volcanic plumes that incorporates a description of the state of the atmosphere, includes the effects of wind and the phase change of water, and has been developed as a freely available web-based tool. The model can be used to estimate the height of a volcanic plume when the source conditions are specified, or to infer the strength of the source from an observed plume height through a model inversion. The predictions of the volcanic plume dynamics produced by the model are analysed in four case studies in which the atmospheric conditions and the strength of the source are varied. A global sensitivity analysis of the model to a selection of model inputs is performed and the results are analysed using parallel coordinate plots for visualisation and variance-based sensitivity indices to quantify the sensitivity of model outputs. We find that if the atmospheric conditions do not vary widely then there is a small set of model inputs that strongly influence the model predictions. When estimating the height of the plume, the source mass flux has a controlling influence on the model prediction, while variations in the plume height strongly affect the inferred value of the source mass flux when performing inversion studies. The values taken for the entrainment coefficients have a particularly important effect on the quantitative predictions. The dependencies of the model outputs on variations in the inputs are discussed and compared to simple algebraic expressions that relate source conditions to the height of the plume.

  20. Quantitative global sensitivity analysis of a biologically based dose-response pregnancy model for the thyroid endocrine system

    PubMed Central

    Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W.; Loizou, George D.

    2015-01-01

    A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis
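    To make the variance-based step affordable, the workflow above replaces the expensive model with a cheap emulator. A minimal Python sketch of that emulation idea, using scikit-learn's Gaussian process regressor on a hypothetical two-parameter stand-in model (the function, parameter count and ranges are illustrative assumptions, not values from the study):

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(0)

      def expensive_model(x):
          # Placeholder for a costly biologically based dose-response run (assumption).
          return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

      # A small number of "expensive" training runs
      X_train = rng.uniform(0.0, 1.0, size=(40, 2))
      y_train = expensive_model(X_train)

      # Fit the Gaussian process emulator
      gp = GaussianProcessRegressor(
          kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
          normalize_y=True,
      )
      gp.fit(X_train, y_train)

      # The cheap emulator can now be evaluated on the large samples
      # required by variance-based sensitivity estimators.
      X_big = rng.uniform(0.0, 1.0, size=(100_000, 2))
      y_emulated, y_std = gp.predict(X_big, return_std=True)
      print(y_emulated[:3], y_std[:3])

    The emulator's predictive standard deviation also indicates where additional expensive model runs would be most informative.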

  1. Sensitivity analysis via kinetic global modeling of rotating spokes in HiPIMS

    NASA Astrophysics Data System (ADS)

    Gallian, Sara; Trieschmann, Jan; Mussenbrock, Thomas; Hitchon, William N. G.; Brinkmann, Ralf Peter

    2013-09-01

    High Power Impulse Magnetron Sputtering discharges are characterized by high density plasma (peak electron density 1018 - 1020 m-3) in a strong magnetic field (100 mT), with highly energetic secondary electrons (500 - 1000 eV). The combination of these factors results in a discharge showing a vast range of instabilities, in particular, a single rotating high emissivity region is often observed. This highly ionized region -or spoke- shows a stationary behavior in the current plateau region and rotates with Ω ~ kHz. We apply a global model that evolves the electron energy distribution function self-consistently with the rate equations for Ar and Al species. The volume average is performed only in the structure region and a net neutral flux term is imposed to model the spoke rotation. Outside the spoke region, the neutral densities are evolved according to a phenomenological fluid model. The model is solved using a relaxation method. We present a sensitivity analysis of the resulting steady state on the different physical mechanisms and comment on the anomalous electron transport observed. The authors gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft within the frame of SFB-TR 87.

  2. A comparison of five forest interception models using global sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Linhoss, Anna C.; Siegert, Courtney M.

    2016-07-01

    Interception by the forest canopy plays a critical role in the hydrologic cycle by removing a significant portion of incoming precipitation from the terrestrial component. While there are a number of existing physical models of forest interception, few studies have summarized or compared these models. The objective of this work is to use global sensitivity and uncertainty analysis to compare five mechanistic interception models including the Rutter, Rutter Sparse, Gash, Sparse Gash, and Liu models. Using parameter probability distribution functions of values from the literature, our results show that on average storm duration [Dur], gross precipitation [PG], canopy storage [S] and solar radiation [Rn] are the most important model parameters. On the other hand, empirical parameters used in calculating evaporation and drip (i.e. trunk evaporation as a proportion of evaporation from the saturated canopy [ɛ], the empirical drainage parameter [b], the drainage partitioning coefficient [pd], and the rate of water dripping from the canopy when canopy storage has been reached [Ds]) have relatively low levels of importance in interception modeling. As such, future modeling efforts should aim to decompose parameters that are the most influential in determining model outputs into easily measurable physical components. Because this study compares models, the choices regarding the parameter probability distribution functions are applied across models, which enables a more definitive ranking of model uncertainty.
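    Since the comparison above hinges on drawing parameter sets from literature-derived distributions, a minimal sketch of that sampling step is shown below, using SciPy's Latin hypercube sampler; the three parameter names and ranges are placeholders rather than values from the study.

      import numpy as np
      from scipy.stats import qmc

      # Hypothetical ranges for three interception parameters (illustrative only):
      # canopy storage S [mm], trunk fraction pt [-], drainage coefficient b [1/mm].
      l_bounds = [0.5, 0.01, 2.0]
      u_bounds = [3.0, 0.10, 5.0]

      sampler = qmc.LatinHypercube(d=3, seed=42)
      unit_sample = sampler.random(n=1000)               # points in [0, 1)^3
      params = qmc.scale(unit_sample, l_bounds, u_bounds)

      # Each row of `params` is one parameter set to feed to an interception model.
      print(params[:3])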

  3. Global sensitivity analysis for an integrated model for simulation of nitrogen dynamics under the irrigation with treated wastewater.

    PubMed

    Sun, Huaiwei; Zhu, Yan; Yang, Jinzhong; Wang, Xiugui

    2015-11-01

    As the amount of water resources that can be utilized for agricultural production is limited, the reuse of treated wastewater (TWW) for irrigation is a practical solution to alleviate the water crisis in China. Process-based models that estimate nitrogen dynamics under irrigation are widely used to investigate the best irrigation and fertilization management practices in developed and developing countries. However, when modeling such a complex system for wastewater reuse, it is critical to conduct a sensitivity analysis to determine which of the numerous input parameters and their interactions contribute most to the variance of the model output during the development of a process-based model. In this study, the application of a comprehensive global sensitivity analysis for nitrogen dynamics was reported. The objective was to compare different global sensitivity analysis (GSA) methods with respect to the key parameters for different model predictions of the nitrogen and crop growth modules. The analysis was performed in two steps. First, the Morris screening method, one of the most commonly used screening methods, was applied to select the most influential parameters; then, a variance-based global sensitivity analysis method (the extended Fourier amplitude sensitivity test, EFAST) was used to investigate more thoroughly the effects of the selected parameters on the model predictions. The results of the GSA showed that strong parameter interactions exist in the crop nitrogen uptake, nitrogen denitrification, crop yield, and evapotranspiration modules. Among all parameters, one of the soil physical parameters, the van Genuchten air-entry parameter, showed the largest sensitivity effects on the major model predictions. These results verified that more effort should be focused on quantifying soil parameters for more accurate nitrogen- and crop-related predictions, and they stress the need to better calibrate the model in a global sense. This study demonstrates the advantages of the GSA on a
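    The first (screening) step of the two-step workflow described above can be sketched with the SALib package; the four parameter names, their ranges and the stand-in model below are illustrative assumptions, not the actual nitrogen-dynamics model.

      import numpy as np
      from SALib.sample.morris import sample as morris_sample
      from SALib.analyze import morris

      # Hypothetical 4-parameter problem standing in for the nitrogen-dynamics model.
      problem = {
          "num_vars": 4,
          "names": ["alpha_vg", "k_denit", "k_uptake", "k_mineral"],
          "bounds": [[0.005, 0.05], [0.01, 0.2], [0.1, 1.0], [0.001, 0.01]],
      }

      def toy_model(x):
          # Placeholder for a crop/nitrogen simulation returning one scalar output.
          return x[0] * 100.0 + np.sqrt(x[1]) + x[2] * x[3] * 50.0

      X = morris_sample(problem, N=100, num_levels=4)
      Y = np.apply_along_axis(toy_model, 1, X)

      Si = morris.analyze(problem, X, Y, num_levels=4, print_to_console=True)
      # Parameters with large mu_star are retained for the subsequent
      # variance-based (EFAST) step; the rest are screened out.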

  4. Global Sensitivity Analysis for Large-scale Socio-hydrological Models using the Cloud

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Garcia-Cabrejo, O.; Cai, X.; Valocchi, A. J.; Dupont, B.

    2014-12-01

    In the context of coupled human and natural system (CHNS), incorporating human factors into water resource management provides us with the opportunity to understand the interactions between human and environmental systems. A multi-agent system (MAS) model is designed to couple with the physically-based Republican River Compact Administration (RRCA) groundwater model, in an attempt to understand the declining water table and base flow in the heavily irrigated Republican River basin. For MAS modelling, we defined five behavioral parameters (κ_pr, ν_pr, κ_prep, ν_prep and λ) to characterize the agent's pumping behavior given the uncertainties of the future crop prices and precipitation. κ and ν describe the agent's beliefs in their prior knowledge of the mean and variance of crop prices (κ_pr, ν_pr) and precipitation (κ_prep, ν_prep), and λ is used to describe the agent's attitude towards the fluctuation of crop profits. Notice that these human behavioral parameters, as inputs to the MAS model, are highly uncertain and not even measurable. Thus, we estimate the influences of these behavioral parameters on the coupled models using Global Sensitivity Analysis (GSA). In this paper, we address two main challenges arising from GSA with such a large-scale socio-hydrological model by using Hadoop-based Cloud Computing techniques and a Polynomial Chaos Expansion (PCE) based variance decomposition approach. As a result, 1,000 scenarios of the coupled models are completed within two hours with the Hadoop framework, rather than about 28 days if we were to run those scenarios sequentially. Based on the model results, GSA using PCE is able to measure the impacts of the spatial and temporal variations of these behavioral parameters on crop profits and water table, and thus identifies two influential parameters, κ_pr and λ. The major contribution of this work is a methodological framework for the application of GSA in large-scale socio-hydrological models. This framework attempts to

  5. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In experiment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while, in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.

  6. Global sensitivity analysis approach for input selection and system identification purposes--a new framework for feedforward neural networks.

    PubMed

    Fock, Eric

    2014-08-01

    A new algorithm for the selection of input variables of a neural network is proposed. This new method, applied after the training stage, ranks the inputs according to their importance in the variance of the model output. The use of a global sensitivity analysis technique, the extended Fourier amplitude sensitivity test, gives the total sensitivity index for each variable, which allows for the ranking and the removal of the less relevant inputs. Applied to some benchmarking problems in the field of feature selection, the proposed approach shows good agreement in keeping the relevant variables. This new method is a useful tool for removing superfluous inputs and for system identification.
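    A minimal sketch of the eFAST ranking step described above, again using SALib, applied to a stand-in for a trained network's input-output map (the input names, ranges and toy function are illustrative assumptions):

      import numpy as np
      from SALib.sample import fast_sampler
      from SALib.analyze import fast

      problem = {
          "num_vars": 3,
          "names": ["x1", "x2", "x3"],
          "bounds": [[-1.0, 1.0]] * 3,
      }

      def surrogate(x):
          # Stand-in for a trained feedforward network's input-output map (assumption).
          return np.tanh(2.0 * x[:, 0]) + 0.3 * x[:, 1] ** 2 + 0.01 * x[:, 2]

      X = fast_sampler.sample(problem, N=1000)
      Y = surrogate(X)

      Si = fast.analyze(problem, Y, print_to_console=True)
      # Inputs whose total-order index ST is negligible are candidates for removal.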

  7. Global Sensitivity Analysis for the determination of parameter importance in bio-manufacturing processes.

    PubMed

    Chhatre, Sunil; Francis, Richard; Newcombe, Anthony R; Zhou, Yuhong; Titchener-Hooker, Nigel; King, Josh; Keshavarz-Moore, Eli

    2008-10-01

    The present paper describes the application of GSA (Global Sensitivity Analysis) techniques to mathematical models of bioprocesses in order to rank inputs such as feed titres, flow rates and matrix capacities for the relative influence that each exerts upon outputs such as yield or throughput. GSA enables quantification of both the impact of individual variables on process outputs and that of their interactions. These data highlight those attributes of a bioprocess which offer the greatest potential for achieving manufacturing improvements. Whereas previous GSA studies have been limited to individual unit operations, this paper extends the treatment to an entire downstream process and illustrates its utility by application to the production of a Fab-based rattlesnake antivenom called CroFab [Crotalidae Polyvalent Immune Fab (Ovine); Protherics U.K. Limited]. Initially, hyperimmunized ovine serum containing rattlesnake antivenom IgG (product), other antibodies and albumin is applied to a synthetic affinity ligand adsorbent column to separate the antibodies from the albumin. The antibodies are papain-digested into Fab and Fc fragments, before concentration by ultrafiltration. Fc, residual IgG and albumin are eliminated by an ion-exchanger and then CroFab-specific affinity chromatography is used to produce purified antivenom. Application of GSA to the model of this process showed that product yield was controlled by IgG feed concentration and the synthetic-material affinity column's capacity and flow rate, whereas product throughput was predominantly influenced by the synthetic material's capacity, the ultrafiltration concentration factor and the CroFab affinity flow rate. Such information provides a rational basis for identifying the most promising strategies for delivering improvements to commercial-scale biomanufacturing processes.

  8. A Peep into the Uncertainty-Complexity-Relevance Modeling Trilemma through Global Sensitivity and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Munoz-Carpena, R.; Muller, S. J.; Chu, M.; Kiker, G. A.; Perz, S. G.

    2014-12-01

    Model complexity resulting from the need to integrate environmental system components cannot be overstated. In particular, additional emphasis is urgently needed on rational approaches to guide decision making through uncertainties surrounding the integrated system across decision-relevant scales. However, in spite of the difficulties that the consideration of modeling uncertainty represents for the decision process, it should not be avoided, or the value and science behind the models will be undermined. These two issues, i.e., the need for coupled models that can answer the pertinent questions and the need for models that do so with sufficient certainty, are the key indicators of a model's relevance. Model relevance is inextricably linked with model complexity. Although model complexity has advanced greatly in recent years, there has been little work to rigorously characterize the threshold of relevance in integrated and complex models. Formally assessing the relevance of the model in the face of increasing complexity would be valuable because there is growing unease among developers and users of complex models about the cumulative effects of various sources of uncertainty on model outputs. In particular, this issue has prompted doubt over whether the considerable effort going into further elaborating complex models will in fact yield the expected payback. New approaches have been proposed recently to evaluate the uncertainty-complexity-relevance modeling trilemma (Muller, Muñoz-Carpena and Kiker, 2011) by incorporating state-of-the-art global sensitivity and uncertainty analysis (GSA/UA) in every step of the model development so as to quantify not only the uncertainty introduced by the addition of new environmental components, but the effect that these new components have over existing components (interactions, non-linear responses). Outputs from the analysis can also be used to quantify system resilience (stability, alternative states, thresholds or tipping

  9. High-Throughput Analysis of Global DNA Methylation Using Methyl-Sensitive Digestion

    PubMed Central

    Feinweber, Carmen; Knothe, Claudia; Lötsch, Jörn; Thomas, Dominique; Geisslinger, Gerd; Parnham, Michael J.; Resch, Eduard

    2016-01-01

    DNA methylation is a major regulatory process of gene transcription, and aberrant DNA methylation is associated with various diseases including cancer. Many compounds have been reported to modify DNA methylation states. Despite increasing interest in the clinical application of drugs with epigenetic effects, and the use of diagnostic markers for genome-wide hypomethylation in cancer, large-scale screening systems to measure the effects of drugs on DNA methylation are limited. In this study, we improved the previously established fluorescence polarization-based global DNA methylation assay so that it is more suitable for application to human genomic DNA. Our methyl-sensitive fluorescence polarization (MSFP) assay was highly repeatable (inter-assay coefficient of variation = 1.5%) and accurate (r2 = 0.99). According to signal linearity, only 50–80 ng human genomic DNA per reaction was necessary for the 384-well format. MSFP is a simple, rapid approach as all biochemical reactions and final detection can be performed in one well in a 384-well plate without purification steps in less than 3.5 hours. Furthermore, we demonstrated a significant correlation between MSFP and the LINE-1 pyrosequencing assay, a widely used global DNA methylation assay. MSFP can be applied for the pre-screening of compounds that influence global DNA methylation states and also for the diagnosis of certain types of cancer. PMID:27749902

  10. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
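    The Sobol' workflow adapted here to forcing errors can be sketched with SALib as follows; the three error factors, their ranges and the proxy model are illustrative assumptions, not the Utah Energy Balance configuration used in the study.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Hypothetical forcing-error factors (biases on precipitation, air temperature
      # and shortwave radiation); names and ranges are illustrative only.
      problem = {
          "num_vars": 3,
          "names": ["precip_bias", "temp_bias", "sw_bias"],
          "bounds": [[-0.3, 0.3], [-2.0, 2.0], [-50.0, 50.0]],
      }

      def snow_model_proxy(x):
          # Placeholder for a physically based snow model returning, e.g., peak SWE.
          return 500.0 * (1.0 + x[:, 0]) - 20.0 * x[:, 1] - 0.5 * x[:, 2]

      X = saltelli.sample(problem, 1024)       # 1024 * (2*3 + 2) model runs
      Y = snow_model_proxy(X)

      Si = sobol.analyze(problem, Y, print_to_console=True)
      # Si["S1"] holds first-order indices, Si["ST"] total-order indices.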

  11. Global sensitivity analysis of an in-sewer process model for the study of sulfide-induced corrosion of concrete.

    PubMed

    Donckels, B M R; Kroll, S; Van Dorpe, M; Weemaes, M

    2014-01-01

    The presence of high concentrations of hydrogen sulfide in the sewer system can result in corrosion of the concrete sewer pipes. The formation and fate of hydrogen sulfide in the sewer system is governed by a complex system of biological, chemical and physical processes. Therefore, mechanistic models have been developed to describe the underlying processes. In this work, global sensitivity analysis was applied to an in-sewer process model (aqua3S) to determine the most important model input factors with regard to sulfide formation in rising mains and the concrete corrosion rate downstream of a rising main. The results of the sensitivity analysis revealed the most influential model parameters, but also the importance of the characteristics of the organic matter, the alkalinity of the concrete and the movement of the sewer gas phase.

  12. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum

  13. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  14. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We propose evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
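    The variance of conditional expectation (VCE) at the heart of this approach, S_i = Var(E[Y|X_i]) / Var(Y), can be illustrated without the MARS meta-model by a simple binned Monte Carlo estimator; the two-input toy model below is an illustrative assumption, not an influenza ODE.

      import numpy as np

      rng = np.random.default_rng(1)

      def model(x):
          # Stand-in for an ODE output at a fixed time point (assumption).
          return np.sin(x[:, 0]) + 0.2 * x[:, 1] ** 2

      n, d, n_bins = 20_000, 2, 40
      X = rng.uniform(-np.pi, np.pi, size=(n, d))
      Y = model(X)
      var_Y = Y.var()

      for i in range(d):
          # Estimate E[Y | X_i] by averaging Y within quantile bins of X_i,
          # then take the (weighted) variance of those conditional means.
          edges = np.quantile(X[:, i], np.linspace(0.0, 1.0, n_bins + 1))
          idx = np.digitize(X[:, i], edges[1:-1])
          means = np.array([Y[idx == b].mean() for b in range(n_bins)])
          weights = np.array([(idx == b).mean() for b in range(n_bins)])
          S_i = np.sum(weights * (means - Y.mean()) ** 2) / var_Y
          print(f"S_{i + 1} ~ {S_i:.3f}")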

  15. Global sensitivity analysis of the GEOS-Chem chemical transport model: ozone and hydrogen oxides during ARCTAS (2008)

    NASA Astrophysics Data System (ADS)

    Christian, Kenneth E.; Brune, William H.; Mao, Jingqiu

    2017-03-01

    Developing predictive capability for future atmospheric oxidation capacity requires a detailed analysis of model uncertainties and of the sensitivity of the modeled oxidation capacity to model input variables. Using oxidant mixing ratios modeled by the GEOS-Chem chemical transport model and measured on the NASA DC-8 aircraft, uncertainty and global sensitivity analyses were performed on the GEOS-Chem chemical transport model for the modeled oxidants hydroxyl (OH), hydroperoxyl (HO2), and ozone (O3). The sensitivity of modeled OH, HO2, and ozone to model inputs perturbed simultaneously within their respective uncertainties was found for the flight tracks of NASA's Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) A and B campaigns (2008) in the North American Arctic. For the spring deployment (ARCTAS-A), ozone was most sensitive to the photolysis rate of NO2, the NO2 + OH reaction rate, and various emissions, including bromoform (CHBr3). OH and HO2 were overwhelmingly sensitive to aerosol particle uptake of HO2, with this one factor contributing upwards of 75 % of the uncertainty in HO2. For the summer deployment (ARCTAS-B), ozone was most sensitive to emission factors, such as soil NOx and isoprene. OH and HO2 were most sensitive to biomass emissions and aerosol particle uptake of HO2. With modeled HO2 showing a factor of 2 underestimation compared to measurements in the lowest 2 km of the troposphere, lower uptake rates (γHO2 < 0.055), regardless of whether the product of the uptake is H2O or H2O2, produced better agreement between modeled and measured HO2.

  16. Global sensitivity analysis for model-based prediction of oxidative micropollutant transformation during drinking water treatment.

    PubMed

    Neumann, Marc B; Gujer, Willi; von Gunten, Urs

    2009-03-01

    This study quantifies the uncertainty involved in predicting micropollutant oxidation during drinking water ozonation in a pilot plant reactor. The analysis is conducted for geosmin, methyl tert-butyl ether (MTBE), isopropylmethoxypyrazine (IPMP), bezafibrate, beta-cyclocitral and ciprofloxacin. These compounds are representative of a wide range of substances with second-order rate constants between 0.1 and 1.9 x 10^4 M^-1 s^-1 for the reaction with ozone and between 2 x 10^9 and 8 x 10^9 M^-1 s^-1 for the reaction with OH radicals. Uncertainty ranges are derived for second-order rate constants, hydraulic parameters, flow- and ozone concentration data, and water characteristic parameters. The uncertain model factors are propagated via Monte Carlo simulation and the resulting probability distributions of the relative residual micropollutant concentrations are assessed. The importance of factors in determining model output variance is quantified using Extended Fourier Amplitude Sensitivity Testing (Extended-FAST). For substances that react slowly with ozone (MTBE, IPMP, geosmin), the water characteristic R_ct value (ratio of ozone- to OH-radical concentration) is the most influential factor, explaining 80% of the output variance. In the case of bezafibrate, the R_ct value and the second-order rate constant for the reaction with ozone each contribute about 30% to the output variance. For beta-cyclocitral and ciprofloxacin (fast reacting with ozone), the second-order rate constant for the reaction with ozone and the hydraulic model structure become the dominating sources of uncertainty.

  17. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk", whereby the selected weather indexes and their thresholds do not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature conditions, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.

  18. Position-independent geometric error identification and global sensitivity analysis for the rotary axes of five-axis machine tools

    NASA Astrophysics Data System (ADS)

    Guo, Shijie; Jiang, Gedong; Zhang, Dongsheng; Mei, Xuesong

    2017-04-01

    Position-independent geometric errors (PIGEs) are the fundamental errors of a five-axis machine tool. In this paper, to identify ten PIGEs peculiar to the rotary axes of five-axis machine tools with a tilting head, the mathematical model of the ten PIGEs is deduced and four measuring patterns are proposed. The measuring patterns and identifying method are validated on a five-axis machine tool with a tilting head, and the ten PIGEs of the machine tool are obtained. The sensitivities of the four adjustable PIGEs of the machine tool in the different measuring patterns are analyzed by the Morris global sensitivity analysis method, and the modifying method and procedure for the four adjustable PIGEs of the machine tool are given accordingly. Experimental results show that, comparing the situations before and after modifying the four adjustable PIGEs, the average compensation rate reached 52.7%. It is proved that the proposed measuring, identifying, analyzing and modifying methods are effective for error measurement and precision improvement of the five-axis machine tool.

  19. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.

  20. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    SciTech Connect

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
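    The link exploited above between a regression-fitted orthonormal polynomial expansion and the Sobol' indices can be illustrated with a small hand-rolled example (two uniform inputs, Legendre basis up to total degree 2). This is a simplified sketch of the general idea, not the sparse adaptive PDD algorithm of the paper, and the toy model is an assumption: because the basis is orthonormal, the output variance equals the sum of squared non-constant coefficients, and grouping those squares by which inputs each basis term involves yields the Sobol' indices.

      import numpy as np

      rng = np.random.default_rng(0)

      def leg(n, x):
          # Orthonormal Legendre polynomials on [-1, 1] with uniform weight 1/2.
          if n == 0:
              return np.ones_like(x)
          if n == 1:
              return np.sqrt(3.0) * x
          return np.sqrt(5.0) * 0.5 * (3.0 * x ** 2 - 1.0)

      def model(x1, x2):
          # Toy model (assumption); it lies exactly in the degree-2 basis.
          return 1.0 + 2.0 * x1 + 0.5 * x2 ** 2 + 0.8 * x1 * x2

      # Multi-indices (degree in x1, degree in x2) up to total degree 2.
      multi = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]

      x = rng.uniform(-1.0, 1.0, size=(2000, 2))
      y = model(x[:, 0], x[:, 1])

      A = np.column_stack([leg(i, x[:, 0]) * leg(j, x[:, 1]) for i, j in multi])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      var_total = np.sum(coef[1:] ** 2)   # orthonormal basis: variance = sum of squares
      S1 = sum(c ** 2 for c, (i, j) in zip(coef, multi) if i > 0 and j == 0) / var_total
      S2 = sum(c ** 2 for c, (i, j) in zip(coef, multi) if i == 0 and j > 0) / var_total
      S12 = sum(c ** 2 for c, (i, j) in zip(coef, multi) if i > 0 and j > 0) / var_total
      print(S1, S2, S12)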

  1. Global sensitivity analysis of a SWAT model: comparison of the variance-based and moment-independent approaches

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Sarrazin, Fanny; Nossent, Jiri; Pianosi, Francesca; van Griensven, Ann; Wagener, Thorsten; Bauwens, Willy

    2015-04-01

    Uncertainty in parameters is a well-known source of model output uncertainty which undermines model reliability and restricts model application. A large number of parameters, in addition to the lack of data, limits calibration efficiency and also leads to higher parameter uncertainty. Global Sensitivity Analysis (GSA) is a set of mathematical techniques that provides quantitative information about the contribution of different sources of uncertainties (e.g. model parameters) to the model output uncertainty. Therefore, identifying influential and non-influential parameters using GSA can improve model calibration efficiency and consequently reduce model uncertainty. In this paper, moment-independent density-based GSA methods that consider the entire model output distribution - i.e. Probability Density Function (PDF) or Cumulative Distribution Function (CDF) - are compared with the widely-used variance-based method and their differences are discussed. Moreover, the effect of model output definition on parameter ranking results is investigated using Nash-Sutcliffe Efficiency (NSE) and model bias as example outputs. To this end, 26 flow parameters of a SWAT model of the River Zenne (Belgium) are analysed. In order to assess the robustness of the sensitivity indices, bootstrapping is applied and 95% confidence intervals are estimated. The results show that, although the variance-based method is easy to implement and interpret, it provides wider confidence intervals, especially for non-influential parameters, compared to the density-based methods. Therefore, density-based methods may be a useful complement to variance-based methods for identifying non-influential parameters.
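    A simplified sketch of the density-based idea, in the spirit of methods such as PAWN: compare the unconditional output distribution with distributions obtained while fixing one input at a time, using the Kolmogorov-Smirnov distance as the discrepancy measure. The two-input toy model is an illustrative assumption, not the 26-parameter SWAT setup.

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(3)

      def model(x):
          # Toy stand-in for a model output such as NSE (assumption).
          return x[:, 0] ** 2 + 0.1 * x[:, 1]

      n, d, n_cond = 5000, 2, 10
      X = rng.uniform(0.0, 1.0, size=(n, d))
      Y = model(X)                                  # unconditional output sample

      for i in range(d):
          ks_values = []
          for xi_fixed in np.linspace(0.05, 0.95, n_cond):
              Xc = rng.uniform(0.0, 1.0, size=(n // n_cond, d))
              Xc[:, i] = xi_fixed                   # condition on X_i
              Yc = model(Xc)
              ks_values.append(ks_2samp(Y, Yc).statistic)
          # A summary statistic of the KS distances (here the median) is the index.
          print(f"density-based index for x{i + 1}: {np.median(ks_values):.3f}")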

  2. Using global sensitivity analysis to evaluate the uncertainties of future shoreline changes under the Bruun rule assumption

    NASA Astrophysics Data System (ADS)

    Le Cozannet, Gonéri; Oliveros, Carlos; Castelle, Bruno; Garcin, Manuel; Idier, Déborah; Pedreros, Rodrigo; Rohmer, Jeremy

    2016-04-01

    Future sandy shoreline changes are often assessed by summing the contributions of longshore and cross-shore effects. In such approaches, a contribution of sea-level rise can be incorporated by adding a supplementary term based on the Bruun rule. Here, our objective is to identify where and when the use of the Bruun rule can be (in)validated, in the case of wave-exposed beaches with gentle slopes. We first provide shoreline change scenarios that account for all uncertain hydrosedimentary processes affecting the idealized low- and high-energy coasts described by Stive (2004)[Stive, M. J. F. 2004, How important is global warming for coastal erosion? an editorial comment, Climatic Change, vol. 64, n 12, doi:10.1023/B:CLIM.0000024785.91858. ISSN 0165-0009]. Then, we generate shoreline change scenarios based on probabilistic sea-level rise projections from the IPCC. For scenarios RCP 6.0 and 8.5 and in the absence of coastal defenses, the model predicts an observable shift toward generalized beach erosion by the middle of the 21st century. On the contrary, the model predictions are unlikely to differ from the current situation in the case of scenario RCP 2.6. To get insight into the relative importance of each source of uncertainty, we quantify each contribution to the variance of the model outcome using a global sensitivity analysis. This analysis shows that by the end of the 21st century, a large part of the shoreline change uncertainty is due to the climate change scenario if all anthropogenic greenhouse gas emission scenarios are considered equiprobable. To conclude, the analysis shows that under the assumptions above, (in)validating the Bruun rule should be straightforward during the second half of the 21st century and for the RCP 8.5 scenario. Conversely, for RCP 2.6, the noise in shoreline change evolution should continue to dominate the signal due to the Bruun effect. This last conclusion can be interpreted as an important potential benefit of climate change mitigation.
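    For reference, the Bruun rule referred to above is commonly written as R = S * L / (B + h), where R is the shoreline retreat caused by a sea-level rise S, L the cross-shore width of the active profile, h the closure depth and B the berm height. A one-line numerical illustration with hypothetical values for a gentle-slope beach (not parameters from the study):

      # Hypothetical gentle-slope profile: active width L = 500 m, closure depth
      # h = 8 m, berm height B = 2 m, sea-level rise S = 0.5 m (all illustrative).
      S, L, h, B = 0.5, 500.0, 8.0, 2.0
      R = S * L / (B + h)                      # Bruun-rule shoreline retreat
      print(f"shoreline retreat ~ {R:.0f} m")  # about 25 m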

  3. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  4. Mapping global sensitivity of cellular network dynamics: sensitivity heat maps and a global summation law.

    PubMed

    Rand, D A

    2008-08-06

    The dynamical systems arising from gene regulatory, signalling and metabolic networks are strongly nonlinear, have high-dimensional state spaces and depend on large numbers of parameters. Understanding the relation between the structure and the function for such systems is a considerable challenge. We need tools to identify key points of regulation, illuminate such issues as robustness and control and aid in the design of experiments. Here, I tackle this by developing new techniques for sensitivity analysis. In particular, I show how to globally analyse the sensitivity of a complex system by means of two new graphical objects: the sensitivity heat map and the parameter sensitivity spectrum. The approach to sensitivity analysis is global in the sense that it studies the variation in the whole of the model's solution rather than focusing on output variables one at a time, as in classical sensitivity analysis. This viewpoint relies on the discovery of local geometric rigidity for such systems, the mathematical insight that makes a practicable approach to such problems feasible for highly complex systems. In addition, we demonstrate a new summation theorem that substantially generalizes previous results for oscillatory and other dynamical phenomena. This theorem can be interpreted as a mathematical law stating the need for a balance between fragility and robustness in such systems.

  5. Representing nighttime and minimum conductance in CLM4.5: global hydrology and carbon sensitivity analysis using observational constraints

    NASA Astrophysics Data System (ADS)

    Lombardozzi, Danica L.; Zeppel, Melanie J. B.; Fisher, Rosie A.; Tawfik, Ahmed

    2017-01-01

    The terrestrial biosphere regulates climate through carbon, water, and energy exchanges with the atmosphere. Land-surface models estimate plant transpiration, which is actively regulated by stomatal pores, and provide projections essential for understanding Earth's carbon and water resources. Empirical evidence from 204 species suggests that significant amounts of water are lost through leaves at night, though land-surface models typically reduce stomatal conductance to nearly zero at night. Here, we test the sensitivity of carbon and water budgets in a global land-surface model, the Community Land Model (CLM) version 4.5, to three different methods of incorporating observed nighttime stomatal conductance values. We find that our modifications increase transpiration by up to 5 % globally, reduce modeled available soil moisture by up to 50 % in semi-arid regions, and increase the importance of the land surface in modulating energy fluxes. Carbon gain declines by up to ~ 4 % globally and > 25 % in semi-arid regions. We advocate for realistic constraints of minimum stomatal conductance in future climate simulations, and widespread field observations to improve parameterizations.

  6. A Sensitivity Analysis of the Impact of Rain on Regional and Global Sea-Air Fluxes of CO2

    PubMed Central

    Shutler, J. D.; Land, P. E.; Woolf, D. K.; Quartly, G. D.

    2016-01-01

    The global oceans are considered a major sink of atmospheric carbon dioxide (CO2). Rain is known to alter the physical and chemical conditions at the sea surface, and thus influence the transfer of CO2 between the ocean and atmosphere. It can influence gas exchange through enhanced gas transfer velocity, the direct export of carbon from the atmosphere to the ocean, by altering the sea skin temperature, and through surface layer dilution. However, to date, very few studies quantifying these effects on global net sea-air fluxes exist. Here, we include terms for the enhanced gas transfer velocity and the direct export of carbon in calculations of the global net sea-air fluxes, using a 7-year time series of monthly global climate quality satellite remote sensing observations, model and in-situ data. The use of a non-linear relationship between the effects of rain and wind significantly reduces the estimated impact of rain-induced surface turbulence on the rate of sea-air gas transfer, when compared to a linear relationship. Nevertheless, globally, the rain enhanced gas transfer and rain induced direct export increase the estimated annual oceanic integrated net sink of CO2 by up to 6%. Regionally, the variations can be larger, with rain increasing the estimated annual net sink in the Pacific Ocean by up to 15% and altering monthly net flux by > ± 50%. Based on these analyses, the impacts of rain should be included in the uncertainty analysis of studies that estimate net sea-air fluxes of CO2 as the rain can have a considerable impact, dependent upon the region and timescale. PMID:27673683

  7. A Sensitivity Analysis of the Impact of Rain on Regional and Global Sea-Air Fluxes of CO2.

    PubMed

    Ashton, I G; Shutler, J D; Land, P E; Woolf, D K; Quartly, G D

    The global oceans are considered a major sink of atmospheric carbon dioxide (CO2). Rain is known to alter the physical and chemical conditions at the sea surface, and thus influence the transfer of CO2 between the ocean and atmosphere. It can influence gas exchange through enhanced gas transfer velocity, the direct export of carbon from the atmosphere to the ocean, by altering the sea skin temperature, and through surface layer dilution. However, to date, very few studies quantifying these effects on global net sea-air fluxes exist. Here, we include terms for the enhanced gas transfer velocity and the direct export of carbon in calculations of the global net sea-air fluxes, using a 7-year time series of monthly global climate quality satellite remote sensing observations, model and in-situ data. The use of a non-linear relationship between the effects of rain and wind significantly reduces the estimated impact of rain-induced surface turbulence on the rate of sea-air gas transfer, when compared to a linear relationship. Nevertheless, globally, the rain enhanced gas transfer and rain induced direct export increase the estimated annual oceanic integrated net sink of CO2 by up to 6%. Regionally, the variations can be larger, with rain increasing the estimated annual net sink in the Pacific Ocean by up to 15% and altering monthly net flux by > ± 50%. Based on these analyses, the impacts of rain should be included in the uncertainty analysis of studies that estimate net sea-air fluxes of CO2 as the rain can have a considerable impact, dependent upon the region and timescale.

  8. Economics in “Global Health 2035”: a sensitivity analysis of the value of a life year estimates

    PubMed Central

    Chang, Angela Y; Robinson, Lisa A; Hammitt, James K; Resch, Stephen C

    2017-01-01

    Background: In “Global health 2035: a world converging within a generation,” The Lancet Commission on Investing in Health (CIH) adds the value of increased life expectancy to the value of growth in gross domestic product (GDP) when assessing national well-being. To value changes in life expectancy, the CIH relies on several strong assumptions to bridge gaps in the empirical research. It finds that the value of a life year (VLY) averages 2.3 times GDP per capita for low- and middle-income countries (LMICs), assuming the changes in life expectancy they experienced from 2000 to 2011 are permanent. Methods: The CIH VLY estimate is based on a specific shift in population life expectancy and includes a 50 percent reduction for children ages 0 through 4. We investigate the sensitivity of this estimate to the underlying assumptions, including the effects of income, age, and life expectancy, and the sequencing of the calculations. Findings: We find that reasonable alternative assumptions regarding the effects of income, age, and life expectancy may reduce the VLY estimates to 0.2 to 2.1 times GDP per capita for LMICs. Removing the reduction for young children increases the VLY, while reversing the sequencing of the calculations reduces the VLY. Conclusion: Because the VLY is sensitive to the underlying assumptions, analysts interested in applying this approach elsewhere must tailor the estimates to the impacts of the intervention and the characteristics of the affected population. Analysts should test the sensitivity of their conclusions to reasonable alternative assumptions. More work is needed to investigate options for improving the approach.

  9. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis

    PubMed Central

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, providing the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483

  10. Assessment of the Potential Impacts of Wheat Plant Traits across Environments by Combining Crop Modeling and Global Sensitivity Analysis.

    PubMed

    Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine

    2016-01-01

    A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of ±20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of the APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement.

  11. Global sensitivity analysis of a model related to memory formation in synapses: Model reduction based on epistemic parameter uncertainties and related issues.

    PubMed

    Kulasiri, Don; Liang, Jingyi; He, Yao; Samarasinghe, Sandhya

    2017-02-09

    We investigate the epistemic uncertainties of parameters of a mathematical model that describes the dynamics of the CaMKII-NMDAR complex related to memory formation in synapses using global sensitivity analysis (GSA). The model, which was published in this journal, is nonlinear and complex, with Ca(2+) patterns of different frequencies as inputs. We explore the effects of the parameters on the key outputs of the model using GSA and partial rank correlation coefficients (PRCC) to discover the most sensitive ones, and to understand, based on the biology of the problem, why they are sensitive and others are not. We also extend the model to include presynaptic neurotransmitter vesicle release, so that action potentials of different frequencies serve as inputs. We perform GSA on this extended model to show that the parameter sensitivities are different for the extended model, as shown by the PRCC landscapes. Based on the results of GSA and PRCC, we reduce the original model to a less complex model taking the most important biological processes into account. We validate the reduced model against the outputs of the original model. We show that the parameter sensitivities are dependent on the inputs and that GSA helps us understand the sensitivities and the importance of the parameters. A thorough phenomenological understanding of the relationships involved is essential to interpret the results of GSA and hence for the possible model reduction.
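
    The PRCC ranking mentioned above can be sketched in a few lines: rank-transform a Monte Carlo sample of parameters and outputs, then correlate the rank residuals of each parameter with the rank residuals of the output after regressing both on the remaining parameters. The sample and model below are illustrative placeholders, not the CaMKII-NMDAR model itself.

      import numpy as np
      from scipy.stats import rankdata

      def prcc(X, y):
          """Partial rank correlation coefficient of each column of X with y."""
          Xr = np.apply_along_axis(rankdata, 0, X)   # rank each parameter column
          yr = rankdata(y)
          n, k = Xr.shape
          out = np.zeros(k)
          for i in range(k):
              others = np.column_stack([np.ones(n), np.delete(Xr, i, axis=1)])
              bx, *_ = np.linalg.lstsq(others, Xr[:, i], rcond=None)
              by, *_ = np.linalg.lstsq(others, yr, rcond=None)
              out[i] = np.corrcoef(Xr[:, i] - others @ bx, yr - others @ by)[0, 1]
          return out

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(500, 4))                 # sampled parameters
      y = 5 * X[:, 0] ** 2 - 2 * X[:, 1] + 0.1 * rng.normal(size=500)
      print(prcc(X, y))   # large |PRCC| flags influential parameters; columns 2-3 are inert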

  12. Global Sensitivity Measures from Given Data

    SciTech Connect

    Elmar Plischke; Emanuele Borgonovo; Curtis L. Smith

    2013-05-01

    Simulation models support managers in the solution of complex problems. International agencies recommend uncertainty and global sensitivity methods as best practice in the audit, validation and application of scientific codes. However, numerical complexity, especially in the presence of a high number of factors, induces analysts to employ less informative but numerically cheaper methods. This work introduces a design for estimating global sensitivity indices from given data (including simulation input–output data), at the minimum computational cost. We address the problem starting with a statistic based on the L1-norm. A formal definition of the estimators is provided and corresponding consistency theorems are proved. The determination of confidence intervals through a bias-reducing bootstrap estimator is investigated. The strategy is applied in the identification of the key drivers of uncertainty for the complex computer code developed at the National Aeronautics and Space Administration (NASA) assessing the risk of lunar space missions. We also introduce a symmetry result that enables the estimation of global sensitivity measures to datasets produced outside a conventional input–output functional framework.
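
    The flavour of a "given data" estimator can be conveyed with a simple scatterplot-partitioning sketch: sort an existing input-output sample by one input, split it into quantile bins, and compare the variance of the within-bin output means with the total output variance. This generic binning estimator is offered only as an illustration; it is not the authors' L1-norm statistic or their bias-reducing bootstrap.

      import numpy as np

      def first_order_given_data(x, y, n_bins=20):
          """Binning estimate of S_i = Var[E(y|x_i)] / Var(y) from given data."""
          y_sorted = y[np.argsort(x)]
          bins = np.array_split(y_sorted, n_bins)          # equal-count bins in x
          means = np.array([b.mean() for b in bins])
          sizes = np.array([b.size for b in bins])
          between = np.average((means - y.mean()) ** 2, weights=sizes)
          return between / y.var()

      # pretend these arrays were dumped from an existing simulation campaign
      rng = np.random.default_rng(2)
      X = rng.uniform(-np.pi, np.pi, size=(10_000, 3))
      Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2       # input 3 has no effect
      print([round(first_order_given_data(X[:, i], Y), 3) for i in range(3)])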

  13. Integrated Sensitivity Analysis Workflow

    SciTech Connect

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.; Clay, Robert L.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  14. Seismic waveform sensitivity to global boundary topography

    NASA Astrophysics Data System (ADS)

    Colombi, Andrea; Nissen-Meyer, Tarje; Boschi, Lapo; Giardini, Domenico

    2012-09-01

    We investigate the implications of lateral variations in the topography of global seismic discontinuities, in the framework of high-resolution forward modelling and seismic imaging. We run 3-D wave-propagation simulations accurate at periods of 10 s and longer, with Earth models including core-mantle boundary topography anomalies of ˜1000 km spatial wavelength and up to 10 km height. We obtain very different waveform signatures for PcP (reflected) and Pdiff (diffracted) phases, supporting the theoretical expectation that the latter are sensitive primarily to large-scale structure, whereas the former are sensitive only to small-scale structure, where large and small are relative to the frequency. PcP at 10 s seems to be well suited to map such a small-scale perturbation, whereas Pdiff at the same frequency carries faint signatures that do not allow any tomographic reconstruction; only at higher frequencies does its signature become stronger. We present a new algorithm to compute sensitivity kernels relating seismic traveltimes (measured by cross-correlation of observed and theoretical seismograms) to the topography of seismic discontinuities at any depth in the Earth using full 3-D wave propagation. Calculation of accurate finite-frequency sensitivity kernels is notoriously expensive, but we reduce computational costs drastically by limiting ourselves to spherically symmetric reference models and exploiting the axial symmetry of the resulting propagating wavefield, which collapses to a 2-D numerical domain. We compute and analyse a suite of kernels for upper and lower mantle discontinuities that can be used for finite-frequency waveform inversion. The PcP and Pdiff sensitivity footprints are in good agreement with the results obtained by cross-correlating perturbed and unperturbed seismograms, validating our approach against full 3-D modelling for inverting for such structures.

  15. The application of global sensitivity analysis in the development of a physiologically based pharmacokinetic model for m-xylene and ethanol co-exposure in humans

    PubMed Central

    Loizou, George D.; McNally, Kevin; Jones, Kate; Cocker, John

    2015-01-01

    Global sensitivity analysis (SA) was used during the development phase of a binary chemical physiologically based pharmacokinetic (PBPK) model used for the analysis of m-xylene and ethanol co-exposure in humans. SA was used to identify those parameters which had the most significant impact on variability of venous blood and exhaled m-xylene and urinary excretion of the major metabolite of m-xylene metabolism, 3-methyl hippuric acid. This analysis informed the selection of parameters for estimation/calibration by fitting to measured biological monitoring (BM) data in a Bayesian framework using Markov chain Monte Carlo (MCMC) simulation. Data generated in controlled human studies were shown to be useful for investigating the structure and quantitative outputs of PBPK models as well as the biological plausibility and variability of parameters for which measured values were not available. This approach ensured that a priori knowledge in the form of prior distributions was ascribed only to those parameters that were identified as having the greatest impact on variability. This is an efficient approach which helps reduce computational cost. PMID:26175688

  16. Determination of DNA methylation associated with Acer rubrum (red maple) adaptation to metals: analysis of global DNA modifications and methylation-sensitive amplified polymorphism.

    PubMed

    Kim, Nam-Soo; Im, Min-Ji; Nkongolo, Kabwe

    2016-08-01

    Red maple (Acer rubrum), a common deciduous tree species in Northern Ontario, has shown resistance to soil metal contamination. Previous reports have indicated that this plant does not accumulate metals in its tissue. However, low levels of nickel and copper, corresponding to the bioavailable levels in contaminated soils in Northern Ontario, cause severe physiological damage. No differentiation between metal-contaminated and uncontaminated populations has been reported based on genetic analyses. The main objective of this study was to assess whether DNA methylation is involved in A. rubrum adaptation to soil metal contamination. Global cytosine and methylation-sensitive amplified polymorphism (MSAP) analyses were carried out in A. rubrum populations from metal-contaminated and uncontaminated sites. The global modified cytosine ratios in genomic DNA revealed a significant decrease in cytosine methylation in genotypes from a metal-contaminated site compared to uncontaminated populations. Other genotypes from a different metal-contaminated site within the same region appear to be recalcitrant to metal-induced DNA alterations even after ≥30 years of exposure to nickel and copper. MSAP analysis showed a high level of polymorphism in both uncontaminated (77%) and metal-contaminated (72%) populations. Overall, 205 CCGG loci were identified, of which 127 were methylated in either the outer or inner cytosine. No differentiation among populations was established based on the several genetic parameters tested. The variation in nonmethylated and methylated loci was compared by analysis of molecular variance (AMOVA). For methylated loci, molecular variance among and within populations was 1.5% and 13.2%, respectively. These values were low (0.6% among populations and 5.8% within populations) for unmethylated loci. Metal contamination is seen to affect methylation of cytosine residues in CCGG motifs in the A. rubrum populations that were analyzed.

  17. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
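
    The eigenvector formulas behind such sensitivity and elasticity comparisons fit in a short sketch; the 3-stage projection matrix below is illustrative and is not the killer whale data analysed in the paper.

      import numpy as np

      def lambda_sensitivity(A):
          """Sensitivity s_ij = v_i w_j / <v, w> and elasticity e_ij = (a_ij / lambda) s_ij
          of the dominant eigenvalue lambda of a projection matrix A, with w and v the
          associated right (stable stage) and left (reproductive value) eigenvectors."""
          vals, W = np.linalg.eig(A)
          k = np.argmax(vals.real)
          lam = vals[k].real
          w = np.abs(W[:, k].real)
          vals_l, V = np.linalg.eig(A.T)
          v = np.abs(V[:, np.argmin(np.abs(vals_l - lam))].real)
          S = np.outer(v, w) / (v @ w)       # absolute sensitivities d(lambda)/d(a_ij)
          E = (A / lam) * S                  # elasticities (proportional changes, sum to 1)
          return lam, S, E

      A = np.array([[0.0, 1.5, 2.0],         # fertilities
                    [0.5, 0.0, 0.0],         # survival/transition rates
                    [0.0, 0.8, 0.9]])
      lam, S, E = lambda_sensitivity(A)
      print(round(lam, 3))
      print(E.round(3))

    Sensitivities compare the effect of equal absolute changes in the matrix entries, while elasticities compare proportional changes, which is the scaling choice at issue in the abstract above.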

  18. Interference and Sensitivity Analysis

    PubMed Central

    VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.; Halloran, M. Elizabeth

    2014-01-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the “no-interference” assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of the vaccine given to one person in protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted. PMID:25620841

  19. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analyses from the National Centers for Environmental Prediction (NCEP), and observations made by the ERS-1 radar scatterometer.

  20. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
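
    The maximum-likelihood machinery underlying such tests can be sketched as follows: given go/no-go outcomes at known stimulus levels, fit the mean and standard deviation of a latent normal threshold by maximising a Bernoulli likelihood with a normal-CDF response curve. This covers only the estimation step on assumed data, not the level-selection rule or the likelihood-ratio confidence regions described above.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      def fit_mu_sigma(levels, responses):
          """MLE of (mu, sigma) with P(response at level x) = Phi((x - mu) / sigma)."""
          levels = np.asarray(levels, float)
          responses = np.asarray(responses, int)

          def neg_log_lik(theta):
              mu, log_sigma = theta
              p = norm.cdf((levels - mu) / np.exp(log_sigma))
              p = np.clip(p, 1e-12, 1 - 1e-12)          # guard against log(0)
              return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

          start = np.array([levels.mean(), np.log(levels.std() + 1e-6)])
          res = minimize(neg_log_lik, start, method="Nelder-Mead")
          return res.x[0], float(np.exp(res.x[1]))

      # illustrative go/no-go data (stimulus level, 1 = response / 0 = no response)
      levels    = [1.0, 1.2, 1.4, 1.1, 1.3, 1.5, 1.25, 1.35, 1.45, 1.15]
      responses = [0,   0,   1,   0,   0,   1,   1,    1,    1,    0]
      print(fit_mu_sigma(levels, responses))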

  1. Global Uncertainty Propagation and Sensitivity Analysis in the CH3OCH2 + O2 System: Combining Experiment and Theory To Constrain Key Rate Coefficients in DME Combustion.

    PubMed

    Shannon, R J; Tomlin, A S; Robertson, S H; Blitz, M A; Pilling, M J; Seakins, P W

    2015-07-16

    Statistical rate theory calculations, in particular formulations of the chemical master equation, are widely used to calculate rate coefficients of interest in combustion environments as a function of temperature and pressure. However, despite the increasing accuracy of electronic structure calculations, small uncertainties in the input parameters for these master equation models can lead to relatively large uncertainties in the calculated rate coefficients. Master equation input parameters may be constrained further by using experimental data, and the relationship between experiment and theory warrants further investigation. In this work, the CH3OCH2 + O2 system, of relevance to the combustion of dimethyl ether (DME), is used as an example, and the input parameters for master equation calculations on this system are refined through fitting to experimental data. Complementing these fitting calculations, global sensitivity analysis is used to explore which input parameters are constrained by which experimental conditions, and which parameters need to be further constrained to accurately predict key elementary rate coefficients. Finally, uncertainties in the calculated rate coefficients are obtained using both correlated and uncorrelated distributions of input parameters.

  2. Assessment of the contamination of drinking water supply wells by pesticides from surface water resources using a finite element reactive transport model and global sensitivity analysis techniques

    NASA Astrophysics Data System (ADS)

    Malaguerra, Flavio; Albrechtsen, Hans-Jørgen; Binning, Philip John

    2013-01-01

    A reactive transport model is employed to evaluate the potential for contamination of drinking water wells by surface water pollution. The model considers various geologic settings, includes sorption and degradation processes, and is tested by comparison with data from a tracer experiment where fluorescein dye injected in a river is monitored at nearby drinking water wells. Three compounds were considered: an older pesticide, MCPP (mecoprop), which is mobile and relatively persistent; glyphosate (Roundup), a newer biodegradable and strongly sorbing pesticide; and its degradation product AMPA. Global sensitivity analysis using the Morris method is employed to identify the dominant model parameters. Results show that the characteristics of clay aquitards (degree of fracturing and thickness), pollutant properties and well depths are crucial factors when evaluating the risk of drinking water well contamination from surface water. This study suggests that it is unlikely that glyphosate in streams can pose a threat to drinking water wells, while MCPP in surface water can represent a risk: MCPP concentration at the drinking water well can be up to 7% of the surface water concentration in confined aquifers and up to 10% in unconfined aquifers. Thus, the presence of confining clay aquitards may not prevent contamination of drinking water wells by persistent compounds in surface water. Results are consistent with data on pesticide occurrence in Denmark, where pesticides are found at higher concentrations at shallow depths and close to streams.

  3. New sensitivity analysis attack

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Moulin, Pierre

    2005-03-01

    The sensitivity analysis attacks by Kalker et al. constitute a known family of watermark removal attacks exploiting a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. In this paper, a new attack on spread spectrum schemes is designed. We first examine one of Kalker's algorithms and prove its convergence using the law of large numbers, which gives more insight into the problem. Next, a new algorithm is presented and compared to existing ones. Various detection algorithms are considered including correlation detectors and normalized correlation detectors, as well as other, more complicated algorithms. Our algorithm is noniterative and requires at most n+1 operations, where n is the dimension of the signal. Moreover, the new approach directly estimates the watermark by exploiting the simple geometry of the detection boundary and the information leaked by the detector.

  4. Saltelli Global Sensitivity Analysis and Simulation Modelling to Identify Intervention Strategies to Reduce the Prevalence of Escherichia coli O157 Contaminated Beef Carcasses

    PubMed Central

    Brookes, Victoria J.; Jordan, David; Davis, Stephen; Ward, Michael P.; Heller, Jane

    2015-01-01

    Introduction Strains of Shiga-toxin producing Escherichia coli O157 (STEC O157) are important foodborne pathogens in humans, and outbreaks of illness have been associated with consumption of undercooked beef. Here, we determine the most effective intervention strategies to reduce the prevalence of STEC O157 contaminated beef carcasses using a modelling approach. Method A computational model simulated events and processes in the beef harvest chain. Information from empirical studies was used to parameterise the model. Variance-based global sensitivity analysis (GSA) using the Saltelli method identified variables with the greatest influence on the prevalence of STEC O157 contaminated carcasses. Following a baseline scenario (no interventions), a series of simulations systematically introduced and tested interventions based on influential variables identified by repeated Saltelli GSA, to determine the most effective intervention strategy. Results Transfer of STEC O157 from hide or gastro-intestinal tract to carcass (improved abattoir hygiene) had the greatest influence on the prevalence of contaminated carcases. Due to interactions between inputs (identified by Saltelli GSA), combinations of interventions based on improved abattoir hygiene achieved a greater reduction in maximum prevalence than would be expected from an additive effect of single interventions. The most effective combination was improved abattoir hygiene with vaccination, which achieved a greater than ten-fold decrease in maximum prevalence compared to the baseline scenario. Conclusion Study results suggest that effective interventions to reduce the prevalence of STEC O157 contaminated carcasses should initially be based on improved abattoir hygiene. However, the effect of improved abattoir hygiene on the distribution of STEC O157 concentration on carcasses is an important information gap—further empirical research is required to determine whether reduced prevalence of contaminated carcasses is
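
    The variance-based indices produced by a Saltelli design can be illustrated with a compact Monte Carlo sketch using the usual A/B/AB_i sample matrices. The toy model is a placeholder for the beef-harvest-chain simulation, and the estimators shown (Saltelli for first-order effects, Jansen for total effects) are one common choice rather than the exact implementation used in the study.

      import numpy as np

      def sobol_indices(f, k, n=4096, seed=0):
          """Estimate first-order (S) and total (ST) Sobol' indices on [0, 1]^k."""
          rng = np.random.default_rng(seed)
          A = rng.uniform(size=(n, k))
          B = rng.uniform(size=(n, k))
          yA, yB = f(A), f(B)
          var = np.var(np.concatenate([yA, yB]), ddof=1)
          S, ST = np.zeros(k), np.zeros(k)
          for i in range(k):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                             # A with column i taken from B
              yABi = f(ABi)
              S[i] = np.mean(yB * (yABi - yA)) / var          # Saltelli first-order estimator
              ST[i] = 0.5 * np.mean((yA - yABi) ** 2) / var   # Jansen total-effect estimator
          return S, ST

      # toy model: two interacting "interventions" plus one weak additive factor
      f = lambda X: X[:, 0] * X[:, 1] + 0.1 * X[:, 2]
      S, ST = sobol_indices(f, k=3)
      print(S.round(3), ST.round(3))   # ST - S measures interaction effects

    A gap between total and first-order indices is the kind of interaction between inputs that, in the study above, made combined interventions more effective than an additive view would suggest.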

  5. Global sensitivity analysis of a mathematical model of acute inflammation identifies nonlinear dependence of cumulative tissue damage on host interleukin-6 responses.

    PubMed

    Mathew, Shibin; Bartels, John; Banerjee, Ipsita; Vodovotz, Yoram

    2014-10-07

    The precise inflammatory role of the cytokine interleukin (IL)-6 and its utility as a biomarker or therapeutic target have been the source of much debate, presumably due to the complex pro- and anti-inflammatory effects of this cytokine. We previously developed a nonlinear ordinary differential equation (ODE) model to explain the dynamics of endotoxin (lipopolysaccharide; LPS)-induced acute inflammation and associated whole-animal damage/dysfunction (a proxy for the health of the organism), along with the inflammatory mediators tumor necrosis factor (TNF)-α, IL-6, IL-10, and nitric oxide (NO). The model was partially calibrated using data from endotoxemic C57Bl/6 mice. Herein, we investigated the sensitivity of the area under the damage curve (AUC_D) to the 51 rate parameters of the ODE model for different levels of simulated LPS challenges using a global sensitivity approach called Random Sampling High Dimensional Model Representation (RS-HDMR). We generated sufficient parametric Monte Carlo samples to compute the variance-based Sobol' global sensitivity indices, and found that inflammatory damage was highly sensitive to the parameters affecting the activity of IL-6 during the different stages of acute inflammation. The AUC_IL6 showed a bimodal distribution, with the lower peak representing a healthy response and the higher peak representing sustained inflammation. Damage was minimal at low AUC_IL6, giving rise to a healthy response. In contrast, intermediate levels of AUC_IL6 resulted in high damage, due to insufficient damage recovery driven by anti-inflammatory responses from IL-10 and the activation of positive feedback sustained by IL-6. At high AUC_IL6, damage recovery was interestingly restored in some of the simulated animals due to the NO-mediated anti-inflammatory responses. These observations suggest that the host's health status during acute inflammation depends in a nonlinear fashion on the magnitude of the inflammatory stimulus

  6. Sensitivity of global terrestrial ecosystems to climate variability

    NASA Astrophysics Data System (ADS)

    Seddon, Alistair W. R.; Macias-Fauria, Marc; Long, Peter R.; Benz, David; Willis, Kathy J.

    2016-03-01

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems—be they natural or with a strong anthropogenic signature—to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.

  7. Sensitivity of global terrestrial ecosystems to climate variability.

    PubMed

    Seddon, Alistair W R; Macias-Fauria, Marc; Long, Peter R; Benz, David; Willis, Kathy J

    2016-03-10

    The identification of properties that contribute to the persistence and resilience of ecosystems despite climate change constitutes a research priority of global relevance. Here we present a novel, empirical approach to assess the relative sensitivity of ecosystems to climate variability, one property of resilience that builds on theoretical modelling work recognizing that systems closer to critical thresholds respond more sensitively to external perturbations. We develop a new metric, the vegetation sensitivity index, that identifies areas sensitive to climate variability over the past 14 years. The metric uses time series data derived from the moderate-resolution imaging spectroradiometer (MODIS) enhanced vegetation index, and three climatic variables that drive vegetation productivity (air temperature, water availability and cloud cover). Underlying the analysis is an autoregressive modelling approach used to identify climate drivers of vegetation productivity on monthly timescales, in addition to regions with memory effects and reduced response rates to external forcing. We find ecologically sensitive regions with amplified responses to climate variability in the Arctic tundra, parts of the boreal forest belt, the tropical rainforest, alpine regions worldwide, steppe and prairie regions of central Asia and North and South America, the Caatinga deciduous forest in eastern South America, and eastern areas of Australia. Our study provides a quantitative methodology for assessing the relative response rate of ecosystems--be they natural or with a strong anthropogenic signature--to environmental variability, which is the first step towards addressing why some regions appear to be more sensitive than others, and what impact this has on the resilience of ecosystem service provision and human well-being.

  8. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  9. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and achieving high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed
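
    Before its extension to time- and space-step parameters, forward sensitivity analysis amounts to integrating the sensitivity variables alongside the state. A minimal sketch for one parameter of a scalar ODE is given below; the equation and values are illustrative and are not a reactor system model.

      import numpy as np
      from scipy.integrate import solve_ivp

      # dy/dt = -p*y with y(0) = y0; augment with s = dy/dp, which obeys
      # ds/dt = (df/dy)*s + df/dp = -p*s - y, with s(0) = 0.
      def rhs(t, z, p):
          y, s = z
          return [-p * y, -p * s - y]

      p, y0, t_end = 0.7, 2.0, 5.0
      sol = solve_ivp(rhs, (0.0, t_end), [y0, 0.0], args=(p,), rtol=1e-8, atol=1e-10)
      y_end, s_end = sol.y[:, -1]

      # analytic check: y = y0*exp(-p*t), so dy/dp = -t*y0*exp(-p*t)
      print(s_end, -t_end * y0 * np.exp(-p * t_end))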

  10. Model-based global sensitivity analysis as applied to identification of anti-cancer drug targets and biomarkers of drug resistance in the ErbB2/3 network

    PubMed Central

    Lebedeva, Galina; Sorokin, Anatoly; Faratian, Dana; Mullen, Peter; Goltsov, Alexey; Langdon, Simon P.; Harrison, David J.; Goryanin, Igor

    2012-01-01

    High levels of variability in cancer-related cellular signalling networks and a lack of parameter identifiability in large-scale network models hamper translation of the results of modelling studies into the process of anti-cancer drug development. Recently, global sensitivity analysis (GSA) has been recognised as a useful technique, capable of addressing the uncertainty of the model parameters and generating valid predictions on parametric sensitivities. Here we propose a novel implementation of model-based GSA specially designed to explore how multi-parametric network perturbations affect signal propagation through cancer-related networks. We use the area under the curve of the time course of changes in protein phosphorylation as a characteristic for sensitivity analysis and rank network parameters with regard to their impact on the level of key cancer-related outputs, separating strong inhibitory from stimulatory effects. This allows interpretation of the results in terms which can incorporate the effects of potential anti-cancer drugs on targets and the associated biological markers of cancer. To illustrate the method we applied it to an ErbB signalling network model and explored the sensitivity profile of its key model readout, phosphorylated Akt, in the absence and presence of the ErbB2 inhibitor pertuzumab. The method successfully identified the parameters associated with elevation or suppression of Akt phosphorylation in the ErbB2/3 network. From analysis and comparison of the sensitivity profiles of pAkt in the absence and presence of targeted drugs we derived predictions of drug targets, cancer-related biomarkers and generated hypotheses for combinatorial therapy. Several key predictions have been confirmed in experiments using human ovarian carcinoma cell lines. We also compared GSA-derived predictions with the results of local sensitivity analysis and discuss the applicability of both methods. We propose that the developed GSA procedure can serve as a

  11. Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide

    SciTech Connect

    Wainwright, Haruko Murakami; Finsterle, Stefan

    2016-07-15

    This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.

  12. D2PC sensitivity analysis

    SciTech Connect

    Lombardi, D.P.

    1992-08-01

    The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving the chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most-likely and worst-case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.

  13. Sensitivity of direct global warming potentials to key uncertainties

    SciTech Connect

    Wuebbles, D.J.; Patten, K.O.; Grant, K.E. ); Jain, A.K. )

    1992-07-01

    A series of sensitivity studies examines the effect of several uncertainties in Global Warming Potentials (GWPs). For example, the original evaluation of GWPs for the Intergovernmental Panel on Climate Change (IPCC, 1990) did not attempt to account for the possible sinks of carbon dioxide (CO2) that could balance the carbon cycle and produce atmospheric concentrations of CO2 that match observations. In this study, a balanced carbon cycle model is applied in the calculation of the radiative forcing from CO2. Use of the balanced model produces up to 20 percent enhancement of the GWPs for most trace gases compared with the IPCC (1990) values for time horizons up to 100 years, but a decreasing enhancement for longer time horizons. Uncertainty limits of the fertilization feedback parameter contribute a 10 percent range in GWP values. Another systematic uncertainty in GWPs is the assumption of an equilibrium atmosphere (one in which the concentration of trace gases remains constant) versus a disequilibrium atmosphere. The latter gives GWPs that are 15 to 30 percent greater than the former, depending upon the carbon dioxide emission scenario chosen. Seven scenarios are employed: constant emissions past 1990 and the six IPCC (1992) emission scenarios. For the analysis of uncertainties in atmospheric lifetime (τ), the GWP changes in direct proportion to τ for short-lived gases, but to a lesser extent for gases with τ greater than the time horizon for the GWP calculation.
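
    The near-proportionality of the GWP to τ for short-lived gases follows directly from the defining integral. The sketch below evaluates a direct GWP as a ratio of time-integrated radiative forcings, using made-up forcing efficiencies and a toy multi-exponential CO2 response purely for illustration; it is not the balanced carbon cycle model used in the study.

      import numpy as np

      def gwp(a_x, tau_x, horizon, a_co2=1.0e-5,
              co2_fractions=(0.3, 0.3, 0.4), co2_taus=(5.0, 100.0, 1e9)):
          """Direct GWP: integrated forcing of a unit pulse of gas x (efficiency a_x,
          e-folding lifetime tau_x) divided by that of CO2 (toy decay response)."""
          t = np.linspace(0.0, horizon, 2001)
          agwp_x = a_x * np.trapz(np.exp(-t / tau_x), t)
          co2_decay = sum(f * np.exp(-t / tau) for f, tau in zip(co2_fractions, co2_taus))
          agwp_co2 = a_co2 * np.trapz(co2_decay, t)
          return agwp_x / agwp_co2

      # GWP scales almost linearly with lifetime while tau << horizon, then saturates
      for tau in (5.0, 10.0, 50.0, 200.0):
          print(tau, round(gwp(a_x=3e-4, tau_x=tau, horizon=100.0), 1))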

  14. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  15. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  16. Global surveillance of antibiotic sensitivity of Vibrio cholerae*

    PubMed Central

    O'Grady, F.; Lewis, M. J.; Pearson, N. J.

    1976-01-01

    Strains of Vibrio cholerae—1156 from various parts of the world—were examined by standardized antibiotic sensitivity tests in one centre, to determine the global incidence of antibiotic resistance in this organism and to assess the extent to which differences in methods of sensitivity testing might be responsible for discrepancies in the reported incidence of resistant strains. Of the strains examined, 1127 were fully sensitive to ampicillin, chloramphenicol, tetracycline, furazolidone, and three different sulphonamides, 27 showed stable and reproducible resistance to one or more of these agents, and 2 proved to contain a minority of cells with unstable, presumably plasmid-borne, resistance to chloramphenicol. Unstable resistance to antibiotics may be common in V. cholerae but rarely recognized, and may account for some of the discrepancies in the reported incidence of resistant strains. PMID:1088100

  17. Quality assessment and forecast sensitivity of global remote sensing observations

    NASA Astrophysics Data System (ADS)

    Mallick, Swapan; Dutta, Devajyoti; Min, Ki-Hong

    2017-03-01

    The satellite-derived wind from cloud and moisture features of geostationary satellites is an important data source for numerical weather prediction (NWP) models. These datasets and global positioning system radio occultation (GPSRO) satellite radiances are assimilated in the four-dimensional variational atmospheric data assimilation system of the UKMO Unified Model in India. This study focuses on the importance of these data in the NWP system and their impact on short-term 24-h forecasts. The quality of the wind observations is compared to the short-range forecast from the model background. The observation increments (observation minus background) are computed as the satellite-derived wind minus the model forecast with a 6-h lead time. The results show the model background has a large easterly wind component compared to satellite observations. The importance of each observation in the analysis is studied using an adjoint-based forecast sensitivity to observations method. The results show that at least around 50% of all types of satellite observations are beneficial. In terms of individual contribution, METEOSAT-7 shows a higher percentage of impact (nearly 50%), compared to GOES, MTSAT-2 and METEOSAT-10, all of which have a less than 25% impact. In addition, the impact of GPSRO, infrared atmospheric sounding interferometer (IASI) and atmospheric infrared sounder (AIRS) data is calculated. The GPSRO observations have beneficial impacts up to 50 km. Over the Southern Hemisphere, the high spectral radiances from IASI and AIRS show a greater impact than over the Northern Hemisphere. The results in this study can be used for further improvements in the use of new and existing satellite observations.

  18. Global average net radiation sensitivity to cloud amount variations

    SciTech Connect

    Karner, O.

    1993-12-01

    Time series analysis performed using an autoregressive model is carried out to study monthly oscillations in the earth radiation budget (ERB) at the top of the atmosphere (TOA) and cloud amount estimates on a global basis. Two independent cloud amount datasets, produced elsewhere by different authors, and the ERB record based on the Nimbus-7 wide field-of-view 8-year (1978-86) observations are used. Autoregressive models are used to eliminate the effects of the earth's orbit eccentricity on the radiation budget and cloud amount series. Nonzero cross correlation between the residual series provides a way of estimating the contribution of the cloudiness variations to the variance in the net radiation. As a result, a new parameter to estimate the net radiation sensitivity at the TOA to changes in cloud amount is introduced. This parameter has a more general character than other estimates because it contains time-lag terms of different lengths responsible for different cloud-radiation feedback mechanisms in the earth climate system. Time lags of 0, 1, 12, and 13 months are involved. Inclusion of the zero-lag term only shows that the albedo effect of clouds dominates, as is known from other research. Inclusion of all four terms leads to an average quasi-annual insensitivity. Approximately 96% of the ERB variance at the TOA can be explained by the eccentricity factor and 1% by cloudiness variations, provided that the data used are without error. Although the latter assumption is not fully correct, the results presented allow one to estimate the contribution of current cloudiness changes to the net radiation variability. Two independent cloud amount datasets have very similar temporal variability and also approximately equal impact on the net radiation at the TOA.
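
    The prewhitening-and-cross-correlation procedure can be imitated on synthetic data: fit an autoregressive model to each monthly series to strip the orbit-driven annual cycle, then correlate the residual series at the lags of interest. The AR order, lags and synthetic series below are illustrative only and are not the Nimbus-7 or cloud-amount records.

      import numpy as np

      def ar_residuals(x, order):
          """Ordinary-least-squares AR(order) fit; returns the one-step residuals."""
          n = len(x)
          lags = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])
          X = np.column_stack([np.ones(n - order), lags])
          y = x[order:]
          beta, *_ = np.linalg.lstsq(X, y, rcond=None)
          return y - X @ beta

      def lagged_corr(a, b, lag):
          """Correlation of a_t with b_(t-lag), for lag >= 0 in months."""
          return np.corrcoef(a[lag:], b[:len(b) - lag] if lag else b)[0, 1]

      # synthetic monthly series: shared annual cycle + a common anomaly + noise
      rng = np.random.default_rng(3)
      t = np.arange(12 * 8)
      shared = rng.normal(size=t.size)
      net_rad = 5 * np.cos(2 * np.pi * t / 12) + 0.5 * shared + rng.normal(size=t.size)
      cloud = 3 * np.cos(2 * np.pi * t / 12) + shared + rng.normal(size=t.size)

      r_net, r_cld = ar_residuals(net_rad, 13), ar_residuals(cloud, 13)
      for lag in (0, 1, 12, 13):
          print(lag, round(lagged_corr(r_net, r_cld, lag), 3))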

  19. Sensitivity of Global Warming Potentials to the assumed background atmosphere

    SciTech Connect

    Wuebbles, D.J.; Patten, K.O.

    1992-03-05

    This is the first in a series of papers in which we will examine various aspects of the Global Warming Potential (GWP) concept and the sensitivity and uncertainties associated with the GWP values derived for the 1992 updated scientific assessment report of the Intergovernmental Panel on Climate Change (IPCC). One of the authors of this report (DJW) helped formulate the GWP concept for the first IPCC report in 1990. The Global Warming Potential concept was developed for that report as an attempt to fulfill the request from policymakers for a way of relating the potential effects on climate from various greenhouse gases, in much the same way as the Ozone Depletion Potential (ODP) concept (Wuebbles, 1981) is used in policy analyses related to concerns about the relative effects of CFCs and other compounds on stratospheric ozone destruction. We are also coauthors of the section on radiative forcing and Global Warming Potentials for the 1992 IPCC update; however, there was too little time to prepare much in the way of new research material for that report. Nonetheless, we have recognized for some time that there are a number of uncertainties and limitations associated with the definition of GWPs used in both the original and new IPCC reports. In this paper, we examine one of those uncertainties, namely, the effect of the assumed background atmospheric concentrations on the derived GWPs. Later papers will examine the sensitivity of GWPs to other uncertainties and limitations in the current concept.

  20. Sensitivity of regional climate to global temperature and forcing

    NASA Astrophysics Data System (ADS)

    Tebaldi, Claudia; O'Neill, Brian; Lamarque, Jean-François

    2015-07-01

    The sensitivity of regional climate to global average radiative forcing and temperature change is important for setting global climate policy targets and designing scenarios. Setting effective policy targets requires an understanding of the consequences of exceeding them, even by small amounts, and the effective design of sets of scenarios requires knowledge of how different emissions, concentrations, or forcings need to be in order to produce substantial differences in climate outcomes. Using an extensive database of climate model simulations, we quantify how differences in global average quantities relate to differences in both the spatial extent and magnitude of climate outcomes at regional (250-1250 km) scales. We show that differences of about 0.3 °C in global average temperature are required to generate statistically significant changes in regional annual average temperature over more than half of the Earth's land surface. A global difference of 0.8 °C is necessary to produce regional warming over half the land surface that is not only significant but reaches at least 1 °C. As much as 2.5 to 3 °C is required for a statistically significant change in regional annual average precipitation that is equally pervasive. Global average temperature change provides a better metric than radiative forcing for indicating differences in regional climate outcomes due to the path dependency of the effects of radiative forcing. For example, a difference in radiative forcing of 0.5 W m-2 can produce statistically significant differences in regional temperature over an area that ranges between 30% and 85% of the land surface, depending on the forcing pathway.

  1. Global thermohaline circulation. Part 1: Sensitivity to atmospheric moisture transport

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A global ocean general circulation model of idealized geometry, combined with an atmospheric model based on observed transports of heat, momentum, and moisture, is used to explore the sensitivity of the global conveyor belt circulation to the surface freshwater fluxes, in particular the effects of meridional atmospheric moisture transports. The numerical results indicate that the equilibrium strength of the North Atlantic Deep Water (NADW) formation increases as the global freshwater transports increase. However, the global deep water formation--that is, the sum of the NADW and the Southern Ocean Deep Water formation rates--is relatively insensitive to changes of the freshwater flux. Perturbations to the meridional moisture transports of each hemisphere identify equatorially asymmetric effects of the freshwater fluxes. The results are consistent with box model results that the equilibrium NADW formation is primarily controlled by the magnitude of the Southern Hemisphere freshwater flux. However, the results show that the Northern Hemisphere freshwater flux has a strong impact on the transient behavior of the North Atlantic overturning. Increasing this flux leads to a collapse of the conveyor belt circulation, but the collapse is delayed if the Southern Hemisphere flux also increases. The perturbation experiments also illustrate that the rapidity of collapse is affected by random fluctuations in the wind stress field.

  2. Use of global sensitivity analysis in quantitative microbial risk assessment: application to the evaluation of a biological time temperature integrator as a quality and safety indicator for cold smoked salmon.

    PubMed

    Ellouze, M; Gauchi, J-P; Augustin, J-C

    2011-06-01

    The aim of this study was to apply a global sensitivity analysis (SA) method to model simplification and to evaluate (eO)®, a biological Time Temperature Integrator (TTI), as a quality and safety indicator for cold smoked salmon (CSS). Models were thus developed to predict the evolution of Listeria monocytogenes and the indigenous food flora in CSS and to predict the TTI endpoint. A global SA was then applied to the three models to identify the less important factors and simplify the models accordingly. Results showed that the subset of the most important factors of the three models was mainly composed of the durations and temperatures of two chill chain links out of the control of the manufacturers: the domestic refrigerator and the retail/cabinet links. Then, the simplified versions of the three models were run with 10^4 time-temperature profiles representing the variability associated with the microbial behavior, the TTI evolution and the French chill chain characteristics. The results were used to assess the distributions of the microbial contamination obtained at the TTI endpoint and at the end of the simulated profiles, and proved that, in the case of poor storage conditions, the use of the TTI could reduce the number of unacceptable foods by 50%.

  3. Individual differences in children's global motion sensitivity correlate with TBSS-based measures of the superior longitudinal fasciculus.

    PubMed

    Braddick, Oliver; Atkinson, Janette; Akshoomoff, Natacha; Newman, Erik; Curley, Lauren B; Gonzalez, Marybel Robledo; Brown, Timothy; Dale, Anders; Jernigan, Terry

    2016-12-16

    Reduced global motion sensitivity, relative to global static form sensitivity, has been found in children with many neurodevelopmental disorders, leading to the "dorsal stream vulnerability" hypothesis (Braddick et al., 2003). Individual differences in typically developing children's global motion thresholds have been shown to be associated with variations in specific parietal cortical areas (Braddick et al., 2016). Here, in 125 children aged 5-12 years, we relate individual differences in global motion and form coherence thresholds to fractional anisotropy (FA) in the superior longitudinal fasciculus (SLF), a major fibre tract communicating between parietal lobe and anterior cortical areas. We find a positive correlation between FA of the right SLF and individual children's sensitivity to global motion coherence, while FA of the left SLF shows a negative correlation. Further analysis of parietal cortical area data shows that this is also asymmetrical, showing a stronger association with global motion sensitivity in the left hemisphere. None of these associations hold for an analogous measure of global form sensitivity. We conclude that a complex pattern of structural asymmetry, including the parietal lobe and the superior longitudinal fasciculus, is specifically linked to the development of sensitivity to global visual motion. This pattern suggests that individual differences in motion sensitivity are primarily linked to parietal brain areas interacting with frontal systems in making decisions on integrated motion signals, rather than in the extra-striate visual areas that perform the initial integration. The basis of motion processing deficits in neurodevelopmental disorders may depend on these same structures.

  4. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  5. Stiff DAE integrator with sensitivity analysis capabilities

    SciTech Connect

    Serban, R.

    2007-11-26

    IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  6. Identification of the significant factors in food safety using global sensitivity analysis and the accept-and-reject algorithm: application to the cold chain of ham.

    PubMed

    Duret, Steven; Guillier, Laurent; Hoang, Hong-Minh; Flick, Denis; Laguerre, Onrawee

    2014-06-16

    Deterministic models describing heat transfer and microbial growth in the cold chain are widely studied. However, it is difficult to apply them in practice because of several variable parameters in the logistic supply chain (e.g., ambient temperature varying due to season and product residence time in refrigeration equipment), the product's characteristics (e.g., pH and water activity) and the microbial characteristics (e.g., initial microbial load and lag time). This variability can lead to different bacterial growth rates in food products and has to be considered to properly predict the consumer's exposure and identify the key parameters of the cold chain. This study proposes a new approach that combines deterministic (heat transfer) and stochastic (Monte Carlo) modeling to account for the variability in the logistic supply chain and the product's characteristics. Contrary to existing approaches that directly use a time-temperature profile, the proposed model generates a realistic product time-temperature history by predicting the product temperature evolution from the thermostat setting and the ambient temperature. The developed methodology was applied to the cold chain of cooked ham, including the display cabinet, transport by the consumer and the domestic refrigerator, to predict the evolution of state variables such as the temperature and the growth of Listeria monocytogenes. The impacts of the input factors were calculated and ranked. It was found that the product's time-temperature history and the initial contamination level are the main causes of consumers' exposure. Then, a refined analysis was applied, revealing the importance of consumer behaviors on Listeria monocytogenes exposure.
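
    The coupling of a deterministic product-temperature model with Monte Carlo sampling of the supply-chain variables can be sketched as below. Both the lumped-capacitance heat-transfer model and the squared-temperature growth term are deliberately simplistic stand-ins with made-up parameters; they illustrate the structure of the approach, not the calibrated ham and Listeria monocytogenes models of the study.

      import numpy as np

      rng = np.random.default_rng(4)
      DT = 0.25   # time step in hours

      def product_temperature(t_air, T0, tau=3.0):
          """Lumped-capacitance heat transfer, dT/dt = (T_air - T)/tau, explicit Euler."""
          T = np.empty_like(t_air)
          T[0] = T0
          for j in range(1, len(t_air)):
              T[j] = T[j - 1] + DT * (t_air[j] - T[j - 1]) / tau
          return T

      def growth_log10(T, Tmin=-2.0, mu_opt=0.02):
          """Toy secondary growth model: log10 increase scaling with (T - Tmin)^2."""
          rate = np.where(T > Tmin, mu_opt * ((T - Tmin) / 10.0) ** 2, 0.0)
          return float(np.sum(rate * DT))

      n_sim = 2_000
      final_log = np.empty(n_sim)
      for i in range(n_sim):
          # stochastic supply-chain stages: (duration in hours, air temperature in deg C)
          stages = [(rng.uniform(12, 72), rng.normal(4, 1)),     # retail display cabinet
                    (rng.uniform(0.3, 2), rng.normal(15, 5)),    # transport by the consumer
                    (rng.uniform(24, 168), rng.normal(6, 2))]    # domestic refrigerator
          t_air = np.concatenate([np.full(max(int(d / DT), 1), Ta) for d, Ta in stages])
          T_prod = product_temperature(t_air, T0=4.0)
          initial = rng.normal(1.0, 0.5)                          # log10 CFU/g at packing
          final_log[i] = initial + growth_log10(T_prod)

      print(np.percentile(final_log, [50, 95]))   # median and 95th-percentile exposure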

  7. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled confidence to be assigned to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. The uncertainty analysis, combined with the sensitivity analysis carried out in this study, has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.

  8. Life cycle impact assessment of terrestrial acidification: modeling spatially explicit soil sensitivity at the global scale.

    PubMed

    Roy, Pierre-Olivier; Deschênes, Louise; Margni, Manuele

    2012-08-07

    This paper presents a novel life cycle impact assessment (LCIA) approach to derive spatially explicit soil sensitivity indicators for terrestrial acidification. This global approach is compatible with a subsequent damage assessment, making it possible to consistently link the developed midpoint indicators with a later endpoint assessment along the cause-effect chain, a prerequisite in LCIA. Four different soil chemical indicators were preselected to evaluate sensitivity factors (SFs) for regional receiving environments at the global scale, namely the base cations to aluminum ratio, aluminum to calcium ratio, pH, and aluminum concentration. These chemical indicators were assessed using the PROFILE geochemical steady-state soil model and a global data set of regional soil parameters developed specifically for this study. Results showed that the most sensitive regions (i.e., where SF is maximized) are in Canada, northern Europe, the Amazon, central Africa, and East and Southeast Asia. However, the approach is not free of uncertainty. Indeed, a Monte Carlo analysis showed that input parameter variability may induce SF variations of more than six orders of magnitude for certain chemical indicators. These findings improve current practices and enable the development of regional characterization models to assess regional life cycle inventories in a global economy.

  9. Sensitivity of flood events to global climate change

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Dimou, George

    1997-04-01

    The sensitivity of Acheloos river flood events at the outfall of the mountainous Mesochora catchment in Central Greece was analysed under various scenarios of global climate change. The climate change pattern was simulated through a set of hypothetical and monthly GISS (Goddard Institute for Space Studies) scenarios of temperature increase coupled with precipitation changes. The daily outflow of the catchment, which is dominated by spring snowmelt runoff, was simulated by the coupling of snowmelt and soil moisture accounting models of the US National Weather Service River Forecast System. Two threshold levels were used to define a flood day—the double and triple long-term mean daily streamflow—and the flood parameters (occurrences, duration, magnitude, etc.) for these cases were determined. Despite the complicated response of flood events to temperature increase and threshold, both hypothetical and monthly GISS representations of climate change resulted in more and longer flood events for climates with increased precipitation. All climates yielded larger flood volumes and greater mean values of flood peaks with respect to precipitation increase. The lower threshold resulted in more and longer flood occurrences, as well as smaller flood volumes and peaks than those of the upper one. The combination of higher and more frequent flood events could lead to greater risks of inundation and possible damage to structures. Furthermore, the winter swelling of the streamflow could increase erosion of the river bed and banks and hence modify the river profile.

  10. Electric dipole moments: A global analysis

    NASA Astrophysics Data System (ADS)

    Chupp, Timothy; Ramsey-Musolf, Michael

    2015-03-01

    We perform a global analysis of searches for the permanent electric dipole moments (EDMs) of the neutron, neutral atoms, and molecules in terms of six leptonic, semileptonic, and nonleptonic interactions involving photons, electrons, pions, and nucleons. By translating the results into fundamental charge-conjugation-parity symmetry (CP) violating effective interactions through dimension six involving standard model particles, we obtain rough lower bounds on the scale of beyond the standard model CP-violating interactions ranging from 1.5 TeV for the electron EDM to 1300 TeV for the nuclear spin-independent electron-quark interaction. We show that planned future measurements involving systems or combinations of systems with complementary sensitivities to the low-energy parameters may extend the mass reach by an order of magnitude or more.

  11. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.

  12. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
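
    Two of the simplest techniques mentioned above, sketched on a toy model (the model and all values are hypothetical): a one-at-a-time finite-difference derivative and standardized regression coefficients fitted to a random sample.

```python
import numpy as np

def model(x):
    """Toy model standing in for a complex simulation: y = f(x1, x2, x3)."""
    return 3.0 * x[0] + x[1] ** 2 + 0.1 * x[2]

x0 = np.array([1.0, 1.0, 1.0])          # nominal parameter values
dx = 0.01

# One-at-a-time sensitivities (finite-difference approximation of the partials)
oat = []
for i in range(len(x0)):
    xp = x0.copy(); xp[i] += dx
    oat.append((model(xp) - model(x0)) / dx)
print("one-at-a-time derivatives:", np.round(oat, 3))

# Regression-based sensitivities from a random sample (standardized coefficients)
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 1.5, size=(500, 3))
y = np.array([model(row) for row in X])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print("standardized regression coefficients:", np.round(beta, 3))
```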

  13. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

    A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eigenvalue and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
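
    A small illustration of the reanalysis idea, under simplifying assumptions: the nominal "decomposition" (here just an explicit inverse of a toy 2-DOF stiffness matrix) is reused in a stationary iteration to reanalyze the perturbed design, and a forward difference then gives the displacement sensitivity. This is only a sketch of the concept, not the paper's algorithms.

```python
import numpy as np

def stiffness(d):
    """Toy 2-DOF stiffness matrix depending on a single design variable d."""
    return np.array([[2.0 + d, -1.0],
                     [-1.0,     1.0 + 0.5 * d]])

f = np.array([1.0, 0.5])
d0, dd = 1.0, 1e-3

K0 = stiffness(d0)
K0_inv = np.linalg.inv(K0)        # stands in for a factorization computed only once
u0 = K0_inv @ f

# Iterative reanalysis of the perturbed design, reusing the nominal "decomposition"
Kp = stiffness(d0 + dd)
u = u0.copy()
for _ in range(50):
    u = u + K0_inv @ (f - Kp @ u)  # stationary iteration with a constant coefficient matrix

du_dd = (u - u0) / dd              # forward-difference displacement sensitivity
print("approximate du/dd:", np.round(du_dd, 4))
```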

  14. Sensitivity analysis applied to stalled airfoil wake and steady control

    NASA Astrophysics Data System (ADS)

    Patino, Gustavo; Gioria, Rafael; Meneghini, Julio

    2014-11-01

    The sensitivity of an eigenvalue to base flow modifications induced by an external force is applied to the global unstable modes associated with the onset of vortex shedding in the wake of a stalled airfoil. In this work, the flow regime is close to the first instability of the system and its associated eigenvalue/eigenmode is determined. The sensitivity analysis with respect to a general localized (point) external force establishes the regions where control devices must be placed in order to stabilize the global modes. Different types of steady control devices, passive and active, are used in the regions predicted by the sensitivity analysis to verify the suppression of vortex shedding, i.e. that the primary instability bifurcation is delayed. The new eigenvalue, modified by the action of the device, is also calculated. Finally, the spectral finite element method is employed to determine flow characteristics before and after the bifurcation in order to cross-check the results.

  15. A global sensitivity tool for cardiac cell modeling: Application to ionic current balance and hypertrophic signaling.

    PubMed

    Sher, Anna A; Cooling, Michael T; Bethwaite, Blair; Tan, Jefferson; Peachey, Tom; Enticott, Colin; Garic, Slavisa; Gavaghan, David J; Noble, Denis; Abramson, David; Crampin, Edmund J

    2010-01-01

    Cardiovascular diseases are the major cause of death in the developed countries. Identifying key cellular processes involved in generation of the electrical signal and in regulation of signal transduction pathways is essential for unraveling the underlying mechanisms of heart rhythm behavior. Computational cardiac models provide important insights into cardiovascular function and disease. Sensitivity analysis presents a key tool for exploring the large parameter space of such models, in order to determine the key factors controlling the underlying physiological processes. We developed a new global sensitivity analysis tool which implements the Morris method, a global sensitivity screening algorithm, on the Nimrod platform, a distributed resources software toolkit. The newly developed tool has been validated using a model of the IP3-calcineurin signal transduction pathway, which has 30 parameters. The key driving factors of the IP3 transient behaviour have been calculated and confirmed to agree with previously published data. We next demonstrated the use of this method as an assessment tool for characterizing the structure of cardiac ionic models. In three of the latest human ventricular myocyte models, we examined the contribution of transmembrane currents to the shape of the electrical signal (i.e. the action potential duration). The resulting profiles of the ionic current balance demonstrated the highly nonlinear nature of cardiac ionic models and identified key players in different models. Such profiling suggests new avenues for development of methodologies to predict drug action effects in cardiac cells.
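
    A simplified, self-contained variant of Morris-style elementary-effects screening (not the Nimrod-based tool described above); the toy response function and parameter bounds are placeholders for a cell-model output and its parameter ranges.

```python
import numpy as np

def elementary_effects(f, bounds, r=50, seed=0):
    """Simplified elementary-effects screening: r one-at-a-time perturbation sweeps.
    Returns mu* (mean absolute effect) and sigma per input factor."""
    rng = np.random.default_rng(seed)
    k = len(bounds)
    lo, hi = np.array(bounds).T
    delta = 0.1 * (hi - lo)
    ee = np.zeros((r, k))
    for t in range(r):
        x = lo + rng.random(k) * (hi - lo - delta)   # leave room for the +delta step
        y0 = f(x)
        for i in range(k):
            xp = x.copy(); xp[i] += delta[i]
            ee[t, i] = (f(xp) - y0) / delta[i]
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy stand-in for a cell-model output (e.g., a transient's peak) with 3 parameters
f = lambda p: p[0] * np.exp(-p[1]) + 0.01 * p[2] ** 2
mu_star, sigma = elementary_effects(f, bounds=[(0.5, 2.0), (0.1, 1.0), (0.0, 5.0)])
print("mu*  :", np.round(mu_star, 3))
print("sigma:", np.round(sigma, 3))
```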

  16. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
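
    The variogram notion that VARS builds on can be illustrated with a crude directional variogram of a response surface; this is only a sketch of the underlying quantity, not the STAR-VARS sampling or analysis algorithm, and the response function is hypothetical.

```python
import numpy as np

def directional_variogram(f, n_base=2000, h=0.1, dim=3, seed=0):
    """Crude directional variogram gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2]
    on the unit hypercube, evaluated at a single lag h per factor."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_base, dim)) * (1.0 - h)   # keep x + h inside [0, 1]
    y = np.apply_along_axis(f, 1, X)
    gamma = np.zeros(dim)
    for i in range(dim):
        Xh = X.copy()
        Xh[:, i] += h
        yh = np.apply_along_axis(f, 1, Xh)
        gamma[i] = 0.5 * np.mean((yh - y) ** 2)
    return gamma

f = lambda x: np.sin(2 * np.pi * x[0]) + 0.5 * x[1] ** 2 + 0.05 * x[2]
print("gamma(h=0.1) per factor:", np.round(directional_variogram(f), 4))
```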

  17. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  18. Structural sensitivity analysis: Methods, applications and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.

  19. Structural sensitivity analysis: Methods, applications, and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.

  20. Sensitivity of global river discharges under Holocene and future climate conditions

    NASA Astrophysics Data System (ADS)

    Aerts, J. C. J. H.; Renssen, H.; Ward, P. J.; de Moel, H.; Odada, E.; Bouwer, L. M.; Goosse, H.

    2006-10-01

    A comparative analysis of global river basins shows that some river discharges are more sensitive to future climate change for the coming century than to natural climate variability over the last 9000 years. In these basins (Ganges, Mekong, Volta, Congo, Amazon, Murray-Darling, Rhine, Oder, Yukon) future discharges increase by 6-61%. These changes are of similar magnitude to changes over the last 9000 years. Some rivers (Nile, Syr Darya) experienced strong reductions in discharge over the last 9000 years (17-56%), but show much smaller responses to future warming. The simulation results for the last 9000 years are validated with independent proxy data.

  1. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization.

    PubMed

    Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M

    2013-11-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.

  2. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
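
    A brief sketch of the global-fit idea, assuming NumPy's Chebyshev utilities and a made-up chordwise pressure distribution: fit a Chebyshev series to sampled pressure-coefficient values and compare with a simple local (piecewise-linear, panel-style) representation. The distribution and degree are illustrative, not the paper's data.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical chordwise pressure-coefficient samples from an aerodynamic analysis
x_c = np.linspace(0.0, 1.0, 40)                     # chordwise station x/c
cp = -1.2 * np.exp(-8.0 * x_c) + 0.4 * x_c - 0.1    # stand-in for computed Cp values

# Global fit: Chebyshev series on x/c mapped to [-1, 1] for good conditioning
t = 2.0 * x_c - 1.0
coef = C.chebfit(t, cp, deg=8)
print("max global-fit error:", np.max(np.abs(C.chebval(t, coef) - cp)))

# Local alternative: piecewise-linear (panel-style) interpolation at new stations
x_mid = 0.5 * (x_c[:-1] + x_c[1:])
cp_local = np.interp(x_mid, x_c, cp)
print("local interpolation evaluated at", len(x_mid), "mid-panel stations")
```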

  3. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  4. Dynamic analysis of global copper flows. Global stocks, postconsumer material flows, recycling indicators, and uncertainty evaluation.

    PubMed

    Glöser, Simon; Soulier, Marcel; Tercero Espinoza, Luis A

    2013-06-18

    We present a dynamic model of global copper stocks and flows which allows a detailed analysis of recycling efficiencies, copper stocks in use, and dissipated and landfilled copper. The model is based on historical mining and refined copper production data (1910-2010) enhanced by a unique data set of recent global semifinished goods production and copper end-use sectors provided by the copper industry. To enable the consistency of the simulated copper life cycle in terms of a closed mass balance, particularly the matching of recycled metal flows to reported historical annual production data, a method was developed to estimate the yearly global collection rates of end-of-life (postconsumer) scrap. Based on this method, we provide estimates of 8 different recycling indicators over time. The main indicator for the efficiency of global copper recycling from end-of-life (EoL) scrap--the EoL recycling rate--was estimated to be 45% on average, ± 5% (one standard deviation) due to uncertainty and variability over time in the period 2000-2010. As uncertainties of specific input data--mainly concerning assumptions on end-use lifetimes and their distribution--are high, a sensitivity analysis with regard to the effect of uncertainties in the input data on the calculated recycling indicators was performed. The sensitivity analysis included a stochastic (Monte Carlo) uncertainty evaluation with 10^5 simulation runs.
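
    A toy sketch of the kind of Monte Carlo evaluation described, with entirely hypothetical inflow data, lifetime assumptions and recycled tonnage: uncertainty in the assumed mean product lifetime propagates into the estimated end-of-life recycling rate. This is not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1950, 2011)
inflow = 5.0 * 1.03 ** (years - years[0])   # hypothetical copper entering use (Mt/yr)
reported_recycled = 4.0                     # hypothetical EoL scrap recycled in 2010 (Mt)

def eol_flow_2010(mean_life):
    """End-of-life outflow in 2010 assuming a roughly normal product-lifetime distribution."""
    age = 2010 - years
    weights = np.exp(-0.5 * ((age - mean_life) / (0.3 * mean_life)) ** 2)
    weights /= weights.sum()
    return np.sum(inflow * weights)

# Monte Carlo over the uncertain mean lifetime -> distribution of the EoL recycling rate
lives = rng.uniform(20.0, 40.0, 5000)
rates = reported_recycled / np.array([eol_flow_2010(m) for m in lives])
print(f"EoL recycling rate: {rates.mean():.2f} +/- {rates.std():.2f} (toy numbers)")
```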

  5. SILAC for global phosphoproteomic analysis.

    PubMed

    Pimienta, Genaro; Chaerkady, Raghothama; Pandey, Akhilesh

    2009-01-01

    Establishing the phosphorylation pattern of proteins in a comprehensive fashion is an important goal of a majority of cell signaling projects. Phosphoproteomic strategies should be designed in such a manner as to identify sites of phosphorylation as well as to provide quantitative information about the extent of phosphorylation at the sites. In this chapter, we describe an experimental strategy that outlines such an approach using stable isotope labeling with amino acids in cell culture (SILAC) coupled to LC-MS/MS. We highlight the importance of quantitative strategies in signal transduction as a platform for a systematic and global elucidation of biological processes.

  6. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
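
    The core of the Global Sensitivity Equations idea can be shown on a scalar two-discipline toy problem: given the partial derivatives of each discipline at the converged coupled solution, the total derivatives of the coupled outputs follow from one small linear solve. The values below are illustrative and this is not the paper's GSE/modal implementation.

```python
import numpy as np

# Toy coupled system: aerodynamic output a = A(x, s), structural output s = S(x, a)
dA_dx, dA_ds = 2.0, 0.3     # partials of A at the converged coupled solution (illustrative)
dS_dx, dS_da = 2.0, 0.5     # partials of S at the converged coupled solution (illustrative)

# Global Sensitivity Equations: solve for the total derivatives of the coupled outputs
M = np.array([[1.0, -dA_ds],
              [-dS_da, 1.0]])
rhs = np.array([dA_dx, dS_dx])
da_dx, ds_dx = np.linalg.solve(M, rhs)
print(f"total da/dx = {da_dx:.4f}, total ds/dx = {ds_dx:.4f}")   # ~3.06 and ~3.53 here
```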

  7. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
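
    A minimal single-predictor sketch of smoothing-based sensitivity, assuming the statsmodels LOESS/LOWESS smoother is available; the stepwise multi-predictor procedures described above are considerably more elaborate, and the test function is hypothetical.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 3))
y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.05 * rng.normal(size=1000)  # x3 is inert

# For each input, smooth y against that input alone and use the variance of the
# smoothed curve (relative to the total variance) as a first-order importance indicator.
for i in range(X.shape[1]):
    sm = lowess(y, X[:, i], frac=0.3, return_sorted=True)
    importance = np.var(sm[:, 1]) / np.var(y)
    print(f"x{i + 1}: smoothed-variance ratio = {importance:.2f}")
```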

  8. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  9. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and use of high concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  10. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    NASA Astrophysics Data System (ADS)

    Bernstein, Diana N.; Neelin, J. David

    2016-06-01

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive "dangerous ranges." The low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  11. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible to not only predict antenna performance, but also to compute sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the Adjoint Method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  12. Sparing of Sensitivity to Biological Motion but Not of Global Motion after Early Visual Deprivation

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2012-01-01

    Patients deprived of visual experience during infancy by dense bilateral congenital cataracts later show marked deficits in the perception of global motion (dorsal visual stream) and global form (ventral visual stream). We expected that they would also show marked deficits in sensitivity to biological motion, which is normally processed in the…

  13. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
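
    A compact illustration of variance-based Sobol' analysis, assuming the SALib package is available and using a hypothetical surrogate response in place of VarroaPop: first-order and total-order indices attribute the output variance to the three stand-in inputs. Parameter names and bounds are placeholders, not the model's actual inputs.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical stand-in for a colony-dynamics response (not the VarroaPop model itself)
problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_slope"],
    "bounds": [[1.0, 5.0], [4.0, 16.0], [0.0, 2.0]],
}

def colony_response(p):
    q, life, slope = p
    return q * life - slope * life ** 1.5          # illustrative nonlinear surrogate

X = saltelli.sample(problem, 1024)                 # N * (2D + 2) model evaluations
Y = np.apply_along_axis(colony_response, 1, X)
Si = sobol.analyze(problem, Y)
print("first-order S1:", np.round(Si["S1"], 2))
print("total-order ST:", np.round(Si["ST"], 2))
```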

  14. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  15. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  16. Global thermohaline circulation. Part 2: Sensitivity with interactive atmospheric transports

    SciTech Connect

    Wang, X.; Stone, P.H.; Marotzke, J.

    1999-01-01

    A hybrid coupled ocean-atmospheric model is used to investigate the stability of the thermohaline circulation (THC) to an increase in the surface freshwater forcing in the presence of interactive meridional transports in the atmosphere. The ocean component is the idealized global general circulation model used in Part 1. The atmospheric model assumes fixed latitudinal structure of the heat and moisture transports, and the amplitudes are calculated separately for each hemisphere from the large-scale sea surface temperature (SST) and SST gradient, using parameterizations based on baroclinic stability theory. The ocean-atmosphere heat and freshwater exchanges are calculated as residuals of the steady-state atmospheric budgets. Owing to the ocean component's weak heat transport, the model has too strong a meridional SST gradient when driven with observed atmospheric meridional transports. When the latter are made interactive, the conveyor belt circulation collapses. A flux adjustment is introduced in which the efficiency of the atmospheric transports is lowered to match the too low efficiency of the ocean component. The feedbacks between the THC and both the atmospheric heat and moisture transports are positive, whether atmospheric transports are interactive in the Northern Hemisphere, the Southern Hemisphere, or both. However, the feedbacks operate differently in the Northern and Southern Hemispheres, because the Pacific THC dominates in the Southern Hemisphere, and deep water formation in the two hemispheres is negatively correlated. The feedbacks in the two hemispheres do not necessarily reinforce each other because they have opposite effects on low-latitude temperatures. The model is qualitatively similar in stability to one with conventional additive flux adjustment, but quantitatively more stable.

  17. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced size model representing the parts of the design that are allowed to change in an optimization loop is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  18. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with environment and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  19. Sensitive chiral analysis by CE: an update.

    PubMed

    Sánchez-Hernández, Laura; Crego, Antonio Luis; Marina, María Luisa; García-Ruiz, Carmen

    2008-01-01

    A general view of the different strategies used in recent years to enhance the detection sensitivity in chiral analysis by CE is provided in this article. With this purpose and in order to update the previous review by García-Ruiz et al., the articles that appeared on this subject from January 2005 to March 2007 are considered. Three main strategies were employed to increase the detection sensitivity in chiral analysis by CE: (i) the use of off-line sample treatment techniques, (ii) the employment of in-capillary preconcentration techniques based on electrophoretic principles, and (iii) the use of alternative detection systems to the widely employed on-column UV-Vis absorption detection. Combinations of two or three of the above-mentioned strategies gave rise to adequate concentration detection limits down to 10^-10 M, enabling enantiomer analysis in a variety of real samples including complex biological matrices.

  20. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  1. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  2. SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS

    PubMed Central

    WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.

    2012-01-01

    Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
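
    As a simple counterpart to the boundary-value formulations described above, the period sensitivity of a limit-cycle oscillator can be approximated by finite differences; the sketch below uses the van der Pol oscillator with SciPy, and the initial condition, step sizes and tolerances are illustrative choices rather than the paper's method.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, y, mu):
    """Van der Pol oscillator, a standard limit-cycle test case."""
    return [y[1], mu * (1.0 - y[0] ** 2) * y[1] - y[0]]

def period(mu, t_end=200.0):
    """Estimate the limit-cycle period from interpolated upward zero crossings of y[0],
    using only the second half of the trajectory to skip the initial transient."""
    sol = solve_ivp(vdp, [0.0, t_end], [2.0, 0.0], args=(mu,),
                    max_step=0.01, rtol=1e-9, atol=1e-9)
    t, y = sol.t, sol.y[0]
    i = np.where((y[:-1] < 0.0) & (y[1:] >= 0.0))[0]
    tc = t[i] - y[i] * (t[i + 1] - t[i]) / (y[i + 1] - y[i])   # linear interpolation
    tc = tc[tc > t_end / 2.0]
    return (tc[-1] - tc[0]) / (len(tc) - 1)

mu0, dmu = 1.0, 0.05
dT_dmu = (period(mu0 + dmu) - period(mu0)) / dmu   # forward-difference period sensitivity
print(f"estimated dT/dmu near mu = {mu0}: {dT_dmu:.3f}")
```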

  3. Demonstration sensitivity analysis for RADTRAN III

    SciTech Connect

    Neuhauser, K S; Reardon, P C

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose to combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability consequence curves.

  4. [Sensitivity analysis in health investment projects].

    PubMed

    Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C

    1994-01-01

    This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.

  5. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  6. Development of sensitivity to global form and motion in macaque monkeys (Macaca nemestrina).

    PubMed

    Kiorpes, Lynne; Price, Tracy; Hall-Haro, Cynthia; Movshon, J Anthony

    2012-06-15

    To explore the relative development of the dorsal and ventral extrastriate processing streams, we studied the development of sensitivity to form and motion in macaque monkeys (Macaca nemestrina). We used Glass patterns and random dot kinematograms (RDK) to assay ventral and dorsal stream function, respectively. We tested 24 animals, longitudinally or cross-sectionally, between the ages of 5 weeks and 3 years. Each animal was tested with Glass patterns and RDK stimuli with each of two pattern types--circular and linear--at each age using a two alternative forced-choice task. We measured coherence threshold for discrimination of the global form or motion pattern from an incoherent control stimulus. Sensitivity to global motion appeared earlier than to global form and was higher at all ages, but performance approached adult levels at similar ages. Infants were most sensitive to large spatial scale (Δx) and fast speeds; sensitivity to fine scale and slow speeds developed more slowly independently of pattern type. Within the motion domain, pattern type had little effect on overall performance. However, within the form domain, sensitivity for linear Glass patterns was substantially poorer than that for concentric patterns. Our data show comparatively early onset for global motion integration ability, perhaps reflecting early development of the dorsal stream. However, both pathways mature over long time courses reaching adult levels between 2 and 3 years after birth.

  7. Variational global analysis of satellite temperature soundings

    NASA Technical Reports Server (NTRS)

    Halem, M.; Kalnay, E.

    1983-01-01

    A variational spherical harmonic analysis is developed for the production of global geopotential height and geostrophic wind fields from the TIROS-N spacecraft's temperature sounding profiles. This scheme is based on Tykhonov's (1964) regularization method, and the smoothing parameter is determined by cross validation. The scheme is noted to be stable and computationally efficient, and it does not depend on a priori information. Its applications to three-dimensional temperature retrievals and to four-dimensional spectral analyses are illustrated.

  8. Climate sensitivity of global terrestrial ecosystems' subdaily carbon, water, and energy dynamics.

    NASA Astrophysics Data System (ADS)

    Yu, R.; Ruddell, B. L.; Childers, D. L.; Kang, M.

    2015-12-01

    In the context of global climate change, it is important to understand the direction and magnitude of different ecosystems' responses to climate at the global level. In this study, we applied a dynamical process network (DPN) approach combined with an eco-climate system sensitivity model and used the global FLUXNET eddy covariance measurements (subdaily net ecosystem exchange of CO2, air temperature, and precipitation) to assess eco-climate system sensitivity to climate and biophysical factors at the flux site level. For the first time, eco-climate system sensitivity was estimated at the global flux sites and extrapolated to all possible land covers by employing an artificial neural network approach and using the MODIS phenology and land cover products, the long-term climate GLDAS-2 product, and the GMTED2010 Global Grid elevation dataset. We produced the seasonal eco-climate system DPN maps, which revealed how global carbon dynamics are driven by temperature and precipitation. We also found that the eco-climate system dynamical process structures are more sensitive to temperature, whether directly or indirectly via phenology. Interestingly, if temperature continues rising, the temperature-NEE coupling may increase in tropical rain forest areas while decreasing in tropical desert or savanna areas, which means that rising temperature in the future could lead to more carbon sequestration in tropical forests but less carbon sequestration in tropical drylands. At the same time, phenology showed a positive effect on the temperature-NEE coupling at all pixels, which suggests increased greenness may increase temperature-driven carbon dynamics and consequently carbon sequestration globally. Precipitation showed a relatively strong influence on the precipitation-NEE coupling, especially indirectly via phenology. This study has the potential to support short-term and long-term eco-climate system forecasting.

  9. A global analysis of island pyrogeography

    NASA Astrophysics Data System (ADS)

    Trauernicht, C.; Murphy, B. P.

    2014-12-01

    Islands have provided insight into the ecological role of fire worldwide through research on the positive feedbacks between fire and nonnative grasses, particularly in the Hawaiian Islands. However, the global extent and frequency of fire on islands as an ecological disturbance has received little attention, possibly because 'natural fires' on islands are typically limited to infrequent dry lightning strikes and isolated volcanic events. But because most contemporary fires on islands are anthropogenic, islands provide ideal systems with which to understand the linkages between socio-economic development, shifting fire regimes, and ecological change. Here we use the density of satellite-derived (MODIS) active fire detections for the years 2000-2014 and global data sets of vegetation, climate, population density, and road development to examine the drivers of fire activity on islands at the global scale, and compare these results to existing pyrogeographic models derived from continental data sets. We also use the Hawaiian Islands as a case study to understand the extent to which novel fire regimes can pervade island ecosystems. The global analysis indicates that fire is a frequent disturbance across islands worldwide, strongly affected by human activities, indicating people can more readily override climatic drivers than on continental land masses. The extent of fire activity derived from local records in the Hawaiian Islands reveals that our global analysis likely underestimates the prevalence of fire among island systems and that the combined effects of human activity and invasion by nonnative grasses can create conditions for frequent and relatively large-scale fires. Understanding the extent of these novel fire regimes, and mitigating their impacts, is critical to reducing the current and rapid degradation of native island ecosystems worldwide.

  10. Sensitivity Analysis of Automated Ice Edge Detection

    NASA Astrophysics Data System (ADS)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.

  11. Long Trajectory for the Development of Sensitivity to Global and Biological Motion

    ERIC Educational Resources Information Center

    Hadad, Bat-Sheva; Maurer, Daphne; Lewis, Terri L.

    2011-01-01

    We used a staircase procedure to test sensitivity to (1) global motion in random-dot kinematograms moving at 4 and 18 degrees/s and (2) biological motion. Thresholds were defined as (1) the minimum percentage of signal dots (i.e. the maximum percentage of noise dots) necessary for accurate discrimination of upward versus…

  12. Toward a Globally Sensitive Definition of Inclusive Education Based in Social Justice

    ERIC Educational Resources Information Center

    Shyman, Eric

    2015-01-01

    While many policies, pieces of legislation and educational discourse focus on the concept of inclusion, or inclusive education, the field of education as a whole lacks a clear, precise and comprehensive definition that is both globally sensitive and based in social justice. Even international efforts including the UN Convention on the Rights of…

  13. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs, and the road network as a whole. Meanwhile, a sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and make the application of vulnerability analysis to large real-world road networks possible. Finally, all of the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in the resource allocation for mitigation and recovery. PMID:28125706

  14. Global climate sensitivity to land surface change: The Mid Holocene revisited

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, Noah S.; Sloan, Lisa C.

    2002-05-01

    Land surface forcing of global climate has been shown both for anthropogenic and non-anthropogenic changes in land surface distribution. Because validation of global climate models (GCMs) is dependent upon the use of accurate boundary conditions, and because changes in land surface distribution have been shown to have effects on climate in areas remote from those changes, we have tested the sensitivity of a GCM to a global Mid Holocene vegetation distribution reconstructed from the fossil record, a first for a 6 ka GCM run. Here we demonstrate that large areas of the globe show statistically significant temperature sensitivity to these land surface changes and that the magnitude of the vegetation forcing is equal to the magnitude of 6 ka orbital forcing, emphasizing the importance of accurate land surface distribution for both model validation and future climate prediction.

  15. Network analysis of global influenza spread.

    PubMed

    Chan, Joseph; Holmes, Antony; Rabadan, Raul

    2010-11-18

    Although vaccines pose the best means of preventing influenza infection, strain selection and optimal implementation remain difficult due to antigenic drift and a lack of understanding of global spread. Detecting viral movement by sequence analysis is complicated by skewed geographic and seasonal distributions in viral isolates. We propose a probabilistic method that accounts for sampling bias through spatiotemporal clustering and modeling regional and seasonal transmission as a binomial process. Analysis of H3N2 not only confirmed East-Southeast Asia as a source of new seasonal variants, but also increased the resolution of observed transmission to a country level. H1N1 data revealed similar viral spread from the tropics. Network analysis suggested China and Hong Kong as the origins of new seasonal H3N2 strains and the United States as a region where increased vaccination would maximally disrupt global spread of the virus. These techniques provide a promising methodology for the analysis of any seasonal virus, as well as for the continued surveillance of influenza.

  16. Chemistry in Protoplanetary Disks: A Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Vasyunin, A. I.; Semenov, D.; Henning, Th.; Wakelam, V.; Herbst, Eric; Sobolev, A. M.

    2008-01-01

    We study how uncertainties in the rate coefficients of chemical reactions in the RATE 06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C+, H3+, H2O, NH3, N2H+, and HCNH+. For these species the column densities are not very sensitive to the rate uncertainties, but the abundances in specific regions are. The second group includes CS, CO2, HCO+, H2CO, C2H, CN, HCN, HNC, and other, more complex species, for which high abundances and abundance uncertainties coexist in the same disk region, leading to larger scatters in column densities. However, even for complex and heavy molecules, the dispersion in their column densities is not more than a factor of ~4. We perform a sensitivity analysis of the computed abundances to rate uncertainties and identify those reactions with the most problematic rate coefficients. We conclude that the rate coefficients of about a hundred chemical reactions need to be determined more accurately in order to greatly improve the reliability of modern astrochemical models. This improvement should be an ultimate goal of future laboratory studies and theoretical investigations.
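    The rate-uncertainty propagation described above can be sketched on a toy network: rate coefficients are varied log-uniformly within an assumed uncertainty factor and the resulting spread in abundances is examined. The two-reaction network and the factor-of-2 uncertainty below are illustrative only, not taken from the RATE 06 database.

```python
# Toy Monte Carlo propagation of rate-coefficient uncertainties.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)

def rhs(t, y, k1, k2):
    a, b, c = y
    # A -> B (rate k1), B -> C (rate k2)
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

k1_0, k2_0 = 1e-3, 5e-4          # nominal rate coefficients [1/s], illustrative
y0 = [1.0, 0.0, 0.0]
t_end = 5e3

final_b = []
for _ in range(500):
    # log-uniform variation within a factor-of-2 uncertainty
    k1 = k1_0 * 2.0 ** rng.uniform(-1, 1)
    k2 = k2_0 * 2.0 ** rng.uniform(-1, 1)
    sol = solve_ivp(rhs, [0, t_end], y0, args=(k1, k2), rtol=1e-8)
    final_b.append(sol.y[1, -1])

final_b = np.array(final_b)
print("dispersion (max/min) in B abundance: %.2f" % (final_b.max() / final_b.min()))
```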

  17. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g. touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.

  18. Disentangling residence time and temperature sensitivity of microbial decomposition in a global soil carbon model

    NASA Astrophysics Data System (ADS)

    Exbrayat, J.-F.; Pitman, A. J.; Abramowitz, G.

    2014-12-01

    Recent studies have identified the first-order representation of microbial decomposition as a major source of uncertainty in simulations and projections of the terrestrial carbon balance. Here, we use a reduced complexity model representative of current state-of-the-art models of soil organic carbon decomposition. We undertake a systematic sensitivity analysis to disentangle the effect of the time-invariant baseline residence time (k) and the sensitivity of microbial decomposition to temperature (Q10) on soil carbon dynamics at regional and global scales. Our simulations produce a range in total soil carbon at equilibrium of ~ 592 to 2745 Pg C, which is similar to the ~ 561 to 2938 Pg C range in pre-industrial soil carbon in models used in the fifth phase of the Coupled Model Intercomparison Project (CMIP5). This range depends primarily on the value of k, although the impact of Q10 is not trivial at regional scales. As climate changes through the historical period, and into the future, k is primarily responsible for the magnitude of the response in soil carbon, whereas Q10 determines whether the soil remains a sink or becomes a source in the future, mostly through its effect on the mid-latitude carbon balance. If we restrict our simulations to those simulating total soil carbon stocks consistent with observations of current stocks, the projected range in total soil carbon change is reduced by 42% for the historical simulations and 45% for the future projections. However, while this observation-based selection dismisses outliers, it does not increase confidence in the future sign of the soil carbon feedback. We conclude that, despite this result, future estimates of soil carbon and of how soil carbon responds to climate change should be more tightly constrained by available data sets of carbon stocks.
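    The role of the two parameters varied above can be illustrated with a minimal sketch of a first-order soil carbon balance, in which the equilibrium stock follows from the baseline decomposition rate and a Q10 temperature modifier. The input flux and parameter values below are illustrative, not those of the reduced-complexity model used in the study.

```python
# Equilibrium soil carbon stock of a first-order decomposition model.
import numpy as np

def equilibrium_soil_carbon(npp, k, q10, temp, t_ref=15.0):
    """dC/dt = npp - k * q10**((T - Tref)/10) * C  =>  C_eq where dC/dt = 0."""
    decay = k * q10 ** ((temp - t_ref) / 10.0)
    return npp / decay

npp = 60.0                                 # litter input to soil [Pg C / yr], illustrative
temps = np.array([13.0, 15.0, 17.0])       # mean soil temperatures [deg C]

for k in (0.02, 0.04):                     # baseline decomposition rate [1/yr]
    for q10 in (1.5, 2.5):
        c_eq = equilibrium_soil_carbon(npp, k, q10, temps)
        print(f"k={k}, Q10={q10}: C_eq = {np.round(c_eq, 0)} Pg C")
```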

  19. The global analysis of DEER data.

    PubMed

    Brandon, Suzanne; Beth, Albert H; Hustedt, Eric J

    2012-05-01

    Double Electron-Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies.

  20. The global analysis of DEER data

    NASA Astrophysics Data System (ADS)

    Brandon, Suzanne; Beth, Albert H.; Hustedt, Eric J.

    2012-05-01

    Double Electron-Electron Resonance (DEER) has emerged as a powerful technique for measuring long range distances and distance distributions between paramagnetic centers in biomolecules. This information can then be used to characterize functionally relevant structural and dynamic properties of biological molecules and their macromolecular assemblies. Approaches have been developed for analyzing experimental data from standard four-pulse DEER experiments to extract distance distributions. However, these methods typically use an a priori baseline correction to account for background signals. In the current work an approach is described for direct fitting of the DEER signal using a model for the distance distribution which permits a rigorous error analysis of the fitting parameters. Moreover, this approach does not require a priori background correction of the experimental data and can take into account excluded volume effects on the background signal when necessary. The global analysis of multiple DEER data sets is also demonstrated. Global analysis has the potential to provide new capabilities for extracting distance distributions and additional structural parameters in a wide range of studies.
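    A hedged sketch of the direct, global fitting strategy described above: a Gaussian distance distribution is fitted simultaneously to two synthetic four-pulse DEER traces that share the distribution but have their own modulation depth and background decay. The kernel is the standard orientation-averaged dipolar kernel; the synthetic data, grids, and parameter values are illustrative and not taken from the paper.

```python
# Global fit of two synthetic DEER traces sharing one distance distribution.
import numpy as np
from scipy.optimize import least_squares

def deer_kernel(t_us, r_nm, n_theta=201):
    """Orientation-averaged intramolecular dipolar kernel K(t, r)."""
    x = np.linspace(0.0, 1.0, n_theta)                 # x = cos(theta)
    omega = 2 * np.pi * 52.04 / r_nm ** 3               # dipolar frequency [rad/us]
    phase = omega * (1 - 3 * x ** 2)[None, :] * t_us[:, None]
    return np.cos(phase).mean(axis=1)

t = np.linspace(0.0, 3.0, 150)                          # evolution time [us]
r_grid = np.linspace(1.5, 8.0, 120)                     # distance grid [nm]
K = np.array([deer_kernel(t, ri) for ri in r_grid]).T   # shape (nt, nr)

def model_trace(r0, sigma, depth, k_bg):
    """Gaussian distance distribution plus exponential background."""
    p = np.exp(-0.5 * ((r_grid - r0) / sigma) ** 2)
    p /= p.sum()
    form = K @ p
    return (1 - depth + depth * form) * np.exp(-k_bg * t)

rng = np.random.default_rng(2)
data = [model_trace(4.0, 0.35, 0.30, 0.05) + rng.normal(0, 0.003, t.size),
        model_trace(4.0, 0.35, 0.45, 0.10) + rng.normal(0, 0.003, t.size)]

def residuals(params):
    # r0 and sigma are shared ("global"); depth and background are per trace.
    r0, sigma, d1, k1, d2, k2 = params
    return np.concatenate([model_trace(r0, sigma, d1, k1) - data[0],
                           model_trace(r0, sigma, d2, k2) - data[1]])

fit = least_squares(residuals, x0=[3.5, 0.5, 0.2, 0.02, 0.2, 0.02],
                    bounds=([1.5, 0.05, 0, 0, 0, 0], [8, 2, 1, 1, 1, 1]))
print("fitted shared r0 [nm], sigma [nm]:", np.round(fit.x[:2], 3))
```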

  1. A global analysis of neutrino oscillations

    NASA Astrophysics Data System (ADS)

    Fogli, G. L.; Lisi, E.; Marrone, A.; Montanino, D.; Palazzo, A.; Rotunno, A. M.

    2013-02-01

    We present a global analysis of neutrino oscillation data, including high-precision measurements of the neutrino mixing angle θ13 at reactor experiments, which have confirmed previous indications in favor of θ13>0. Recent data presented at this Conference are also included. We focus on the correlations between θ13 and the mixing angle θ23, as well as between θ13 and the neutrino CP-violation phase δ. We find interesting indications for θ23<π/4 and possible hints for δ˜π, with no significant difference between normal and inverted mass hierarchy.

  2. Assessing flood risk at the global scale: model setup, results, and sensitivity

    NASA Astrophysics Data System (ADS)

    Ward, Philip J.; Jongman, Brenden; Sperna Weiland, Frederiek; Bouwman, Arno; van Beek, Rens; Bierkens, Marc F. P.; Ligtvoet, Willem; Winsemius, Hessel C.

    2013-12-01

    Globally, economic losses from flooding exceeded US$ 19 billion in 2012, and are rising rapidly. Hence, there is an increasing need for global-scale flood risk assessments, also within the context of integrated global assessments. We have developed and validated a model cascade for producing global flood risk maps, based on numerous flood return periods. Validation results indicate that the model simulates interannual fluctuations in flood impacts well. The cascade involves: hydrological and hydraulic modelling; extreme value statistics; inundation modelling; flood impact modelling; and estimating annual expected impacts. The initial results estimate global impacts for several indicators, for example annual expected exposed population (169 million); and annual expected exposed GDP (US$ 1,383 billion). These results are relatively insensitive to the extreme value distribution employed to estimate low frequency flood volumes. However, they are extremely sensitive to the assumed flood protection standard; developing a database of such standards should be a research priority. Also, results are sensitive to the use of two different climate forcing datasets. The impact model can easily accommodate new, user-defined, impact indicators. We envisage several applications, for example: identifying risk hotspots; calculating macro-scale risk for the insurance industry and large companies; and assessing potential benefits (and costs) of adaptation measures.
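    The last step of the cascade, estimating annual expected impacts, can be sketched as the integral of impact over exceedance probability across the modelled return periods. The return periods and impact values below are illustrative placeholders, not results from the study.

```python
# Annual expected impact from impacts simulated at a set of return periods.
import numpy as np

return_periods = np.array([2, 5, 10, 25, 50, 100, 250, 1000])     # years
exposed_gdp = np.array([0, 120, 260, 450, 620, 800, 1000, 1300])  # billion US$, illustrative

# Exceedance probability of each return period.
p_exc = 1.0 / return_periods

# Integrate impact over probability (trapezoidal rule), ordered by
# increasing probability; impacts below the most frequent modelled event
# are treated as zero (fully protected).
order = np.argsort(p_exc)
annual_expected_impact = np.trapz(exposed_gdp[order], p_exc[order])
print(f"annual expected exposed GDP: {annual_expected_impact:.0f} billion US$")
```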

  3. A global analysis of soil acidification caused by nitrogen addition

    NASA Astrophysics Data System (ADS)

    Tian, Dashuan; Niu, Shuli

    2015-02-01

    Nitrogen (N) deposition-induced soil acidification has become a global problem. However, the response patterns of soil acidification to N addition and the underlying mechanisms remain far from clear. Here, we conducted a meta-analysis of 106 studies to reveal global patterns of soil acidification in response to N addition. We found that N addition significantly reduced soil pH by 0.26 on average globally. However, the responses of soil pH varied with ecosystem type, N addition rate, N fertilization form, and experimental duration. Soil pH decreased most in grassland, whereas no decrease in response to N addition was observed in boreal forest. Soil pH decreased linearly with N addition rate. Addition of urea and NH4NO3 contributed more to soil acidification than NH4-form fertilizer. When experimental duration was longer than 20 years, the effect of N addition on soil acidification diminished. Environmental factors such as initial soil pH, soil carbon and nitrogen content, precipitation, and temperature all influenced the responses of soil pH. Base cations (Ca2+, Mg2+ and K+) were critically important in buffering against N-induced soil acidification at the early stage. However, N addition has shifted global soils into the Al3+ buffering phase. Overall, this study indicates that acidification in global soils is very sensitive to N deposition and is greatly modified by biotic and abiotic factors. Global soils are now at a buffering transition from base cations (Ca2+, Mg2+ and K+) to non-base cations (Mn2+ and Al3+). This calls attention to base cation limitation and the toxic impact of non-base cations in terrestrial ecosystems under N deposition.

  4. Global climate sensitivity derived from ~784,000 years of SST data

    NASA Astrophysics Data System (ADS)

    Friedrich, T.; Timmermann, A.; Tigchelaar, M.; Elison Timm, O.; Ganopolski, A.

    2015-12-01

    Global mean temperatures will increase in response to future increasing greenhouse gas concentrations. The magnitude of this warming for a given radiative forcing is still a subject of debate. Here we provide estimates for the equilibrium climate sensitivity using paleo-proxy and modeling data from the last eight glacial cycles (~784,000 years). First, two reconstructions of globally averaged surface air temperature (SAT) for the last eight glacial cycles are obtained from two independent sources: one mainly based on a transient model simulation, the other derived from paleo-SST records and SST network/global SAT scaling factors. Both reconstructions exhibit very good agreement in both amplitude and timing of past SAT variations. In the second step, we calculate the radiative forcings associated with greenhouse gas concentrations, dust concentrations, and surface albedo changes for the last 784,000 years. The equilibrium climate sensitivity is then derived from the ratio of the SAT anomalies and the radiative forcing changes. Our results reveal that this estimate of the Charney climate sensitivity is a function of the background climate, with substantially higher values for warmer climates. Warm phases exhibit an equilibrium climate sensitivity of ~3.70 K per CO2-doubling - more than twice the value derived for cold phases (~1.40 K per 2xCO2). We will show that the current CMIP5 ensemble-mean projection of global warming during the 21st century is supported by our estimate of climate sensitivity derived from climate paleo data of the past 784,000 years.
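    The sensitivity estimate described above reduces to a simple ratio, which the following worked sketch illustrates: equilibrium climate sensitivity is the reconstructed temperature anomaly divided by the reconstructed radiative forcing anomaly, scaled by the forcing of a CO2 doubling. The anomaly values are illustrative, not the reconstructed time series used in the study.

```python
# Equilibrium climate sensitivity from paired temperature/forcing anomalies.
import numpy as np

F_2x = 3.7                                     # forcing of a CO2 doubling [W m-2]

delta_T = np.array([-4.5, -3.8, -1.0, 0.3])    # SAT anomaly [K], illustrative
delta_F = np.array([-6.2, -5.0, -1.5, 0.4])    # total forcing anomaly [W m-2], illustrative

ecs = F_2x * delta_T / delta_F                 # [K per CO2 doubling]
print(np.round(ecs, 2))
```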

  5. On computational schemes for global-local stress analysis

    NASA Technical Reports Server (NTRS)

    Reddy, J. N.

    1989-01-01

    An overview is given of global-local stress analysis methods, their associated difficulties, and recommendations for future research. The phrase global-local analysis is understood to mean an analysis in which some parts of the domain or structure are singled out for more accurate determination of stresses and displacements, or for more refined analysis, than the remaining parts. The parts receiving refined analysis are termed local and the remaining parts are called global. Typically local regions are small in size compared to global regions, while the computational effort can be larger in local regions than in global regions.

  6. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    DTIC Science & Technology

    2015-04-01

    ARL-TR-7250, April 2015. US Army Research Laboratory, Weapons and Materials Research Directorate. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill.

  7. Global Proteomics Analysis of Protein Lysine Methylation

    PubMed Central

    Cao, Xing-Jun; Garcia, Benjamin A.

    2017-01-01

    Lysine methylation is a common protein post-translational modification dynamically mediated by protein lysine methyltransferases (PKMTs) and demethylases (PKDMs). Beyond histone proteins, lysine methylation on non-histone proteins plays substantial roles in a variety of cellular functions and is closely associated with diseases such as cancer. A large body of evidence indicates that the dysregulation of some PKMTs leads to tumorigenesis via their non-histone substrates. However, studies on other PKMTs have progressed slowly owing to the lack of approaches for extensive screening of lysine methylation sites. Recently, a series of publications performing large-scale analyses of protein lysine methylation has emerged. In this unit, we introduce a protocol for the global analysis of protein lysine methylation in cells by means of immunoaffinity enrichment and mass spectrometry. PMID:27801517

  8. Tsunamis: Global Exposure and Local Risk Analysis

    NASA Astrophysics Data System (ADS)

    Harbitz, C. B.; Løvholt, F.; Glimsdal, S.; Horspool, N.; Griffin, J.; Davies, G.; Frauenfelder, R.

    2014-12-01

    The 2004 Indian Ocean tsunami led to a better understanding of the likelihood of tsunami occurrence and potential tsunami inundation, and the Hyogo Framework for Action (HFA) was one direct result of this event. The United Nations International Strategy for Disaster Risk Reduction (UN-ISDR) adopted HFA in January 2005 in order to reduce disaster risk. As an instrument to compare the risk due to different natural hazards, an integrated worldwide study was implemented and published in several Global Assessment Reports (GAR) by UN-ISDR. The results of the global earthquake induced tsunami hazard and exposure analysis for a return period of 500 years are presented. Both deterministic and probabilistic methods (PTHA) are used. The resulting hazard levels for both methods are compared quantitatively for selected areas. The comparison demonstrates that the analysis is rather rough, which is expected for a study aiming at average trends on a country level across the globe. It is shown that populous Asian countries account for the largest absolute number of people living in tsunami prone areas; more than 50% of the exposed population lives in Japan. Smaller nations like Macao and the Maldives are among the most exposed by population count. Exposed nuclear power plants are limited to Japan, China, India, Taiwan, and USA. In contrast, a local tsunami vulnerability and risk analysis applies information on population, building types, infrastructure, inundation, and flow depth for a certain tsunami scenario with a corresponding return period, combined with empirical data on tsunami damages and mortality. Results and validation of a GIS tsunami vulnerability and risk assessment model are presented. The GIS model is adapted for optimal use of data available for each study. Finally, the importance of including landslide sources in the tsunami analysis is also discussed.

  9. A Sensitivity Analysis of SOLPS Plasma Detachment

    NASA Astrophysics Data System (ADS)

    Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration

    2016-10-01

    Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.

  10. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
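    A minimal sketch of the calibration and sensitivity approach described above, using a simplified Metropolis MCMC sampler on a toy exponential washoff model with two parameters. The model form, priors, and synthetic data are illustrative assumptions; the paper itself tests three different stormwater quality models.

```python
# Simplified Metropolis MCMC calibration of a toy washoff model.
import numpy as np

rng = np.random.default_rng(3)

rain = rng.gamma(2.0, 2.0, size=60)                  # event rainfall [mm], synthetic

def washoff(params, rain):
    load0, k = params                                # available load, washoff coefficient
    return load0 * (1.0 - np.exp(-k * rain))         # event pollutant load

true = (12.0, 0.25)
obs = washoff(true, rain) + rng.normal(0, 0.5, rain.size)

def log_post(params):
    load0, k = params
    if not (0 < load0 < 100 and 0 < k < 5):          # uniform priors
        return -np.inf
    resid = obs - washoff(params, rain)
    return -0.5 * np.sum((resid / 0.5) ** 2)         # Gaussian likelihood

# Metropolis random-walk sampler.
chain = np.empty((20000, 2))
x = np.array([5.0, 1.0])
lp = log_post(x)
for i in range(chain.shape[0]):
    prop = x + rng.normal(0, [0.3, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    chain[i] = x

post = chain[5000:]                                  # discard burn-in
print("posterior mean:", post.mean(axis=0))
print("posterior std (a simple sensitivity proxy):", post.std(axis=0))
```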

  11. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
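    The kind of sensitivity coefficient described above can be illustrated with a minimal sketch: the derivative of a dependent variable with respect to a rate-coefficient parameter is obtained by integrating a sensitivity equation alongside the kinetics. The single first-order reaction below is an illustrative stand-in, not an LSENS problem setup.

```python
# Sensitivity coefficient dy/dk for dy/dt = -k*y, integrated with the state.
import numpy as np
from scipy.integrate import solve_ivp

k = 0.5   # rate coefficient [1/s]

def rhs(t, z):
    y, s = z                       # s = dy/dk
    dy = -k * y
    ds = -k * s - y                # d/dt (dy/dk) = (df/dy) s + df/dk
    return [dy, ds]

sol = solve_ivp(rhs, [0, 10], [1.0, 0.0], dense_output=True, rtol=1e-9)
t = np.linspace(0, 10, 5)
y, s = sol.sol(t)
# Analytical check: y = exp(-k t), so dy/dk = -t exp(-k t).
print("numerical  dy/dk:", np.round(s, 4))
print("analytical dy/dk:", np.round(-t * np.exp(-k * t), 4))
```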

  12. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  13. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this exploratory work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises. Scientific Reports, 5 (10970), doi:10.1038/srep

  14. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven E.; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2008-01-01

    As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties in a global 3-D chemical transport model (GEOS-Chem CTM). GEOS-Chem was driven with a series of meteorological archives (GEOS1-STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system, which have significantly different cloud optical depths (CODs) and vertical distributions. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. Model simulations with each of the three cloud distributions all show that the change in the global burden of O3 due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitude of 3-D CODs are progressively varied by -100% to 100%, predict only modest changes (<5%) in global mean OH concentrations. J(O1D), J(NO2) and OH concentrations show the strongest sensitivity for small CODs and become insensitive at large CODs due to saturation effects. Caution should be exercised not to use in photochemical models a value for cloud single scattering albedo lower than about 0.999 in order to be consistent with the current knowledge of cloud absorption at UV wavelengths. Our results have important implications for model intercomparisons and climate feedback on tropospheric photochemistry.

  15. Sensitivity Studies for Space-Based Global Measurements of Atmospheric Carbon Dioxide

    NASA Technical Reports Server (NTRS)

    Mao, Jian-Ping; Kawa, S. Randolph; Bhartia, P. K. (Technical Monitor)

    2001-01-01

    Carbon dioxide (CO2) is well known as the primary forcing agent of global warming. Although the climate forcing due to CO2 is well known, the sources and sinks of CO2 are not well understood. Currently the lack of global atmospheric CO2 observations limits our ability to diagnose the global carbon budget (e.g., finding the so-called "missing sink") and thus limits our ability to understand past climate change and predict future climate response. Space-based techniques are being developed to make high-resolution and high-precision global column CO2 measurements. One of the proposed techniques utilizes the passive remote sensing of Earth's reflected solar radiation at the weaker vibration-rotation band of CO2 in the near infrared (approx. 1.57 micron). We use a line-by-line radiative transfer model to explore the potential of this method. Results of sensitivity studies for CO2 concentration variation and geophysical conditions (i.e., atmospheric temperature, surface reflectivity, solar zenith angle, aerosol, and cirrus cloud) will be presented. We will also present sensitivity results for an O2 A-band (approx. 0.76 micron) sensor that will be needed along with CO2 to make surface pressure and cloud height measurements.

  16. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  17. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins could be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  18. Sensitivity of Water Scarcity Events to ENSO-Driven Climate Variability at the Global Scale

    NASA Technical Reports Server (NTRS)

    Veldkamp, T. I. E.; Eisner, S.; Wada, Y.; Aerts, J. C. J. H.; Ward, P. J.

    2015-01-01

    Globally, freshwater shortage is one of the most dangerous risks for society. Changing hydro-climatic and socioeconomic conditions have aggravated water scarcity over the past decades. A wide range of studies show that water scarcity will intensify in the future, as a result of both increased consumptive water use and, in some regions, climate change. Although it is well-known that El Niño- Southern Oscillation (ENSO) affects patterns of precipitation and drought at global and regional scales, little attention has yet been paid to the impacts of climate variability on water scarcity conditions, despite its importance for adaptation planning. Therefore, we present the first global-scale sensitivity assessment of water scarcity to ENSO, the most dominant signal of climate variability. We show that over the time period 1961-2010, both water availability and water scarcity conditions are significantly correlated with ENSO-driven climate variability over a large proportion of the global land area (> 28.1 %); an area inhabited by more than 31.4% of the global population. We also found, however, that climate variability alone is often not enough to trigger the actual incidence of water scarcity events. The sensitivity of a region to water scarcity events, expressed in terms of land area or population exposed, is determined by both hydro-climatic and socioeconomic conditions. Currently, the population actually impacted by water scarcity events consists of 39.6% (CTA: consumption-to-availability ratio) and 41.1% (WCI: water crowding index) of the global population, whilst only 11.4% (CTA) and 15.9% (WCI) of the global population is at the same time living in areas sensitive to ENSO-driven climate variability. These results are contrasted, however, by differences in growth rates found under changing socioeconomic conditions, which are relatively high in regions exposed to water scarcity events. Given the correlations found between ENSO and water availability and scarcity
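    The two scarcity indicators named above can be sketched directly. The thresholds and input numbers below are illustrative assumptions, not values from the study.

```python
# Consumption-to-availability ratio (CTA) and water crowding index (WCI)
# for a hypothetical region in ENSO-neutral and El Nino years.
import numpy as np

def cta(consumption_km3, availability_km3):
    """Consumption-to-availability ratio; values above ~0.2 are often taken
    as a water stress/scarcity threshold."""
    return consumption_km3 / availability_km3

def wci(population, availability_km3):
    """Water crowding index: people per 1000 m3 of available water
    (roughly, values above ~0.6 correspond to < ~1700 m3 per capita)."""
    return population / (availability_km3 * 1e6)     # km3 -> units of 1000 m3

population = 8.0e6
consumption = 6.0                                    # km3 / yr, illustrative
availability = {"neutral": 35.0, "el_nino": 22.0}    # km3 / yr, illustrative

for phase, avail in availability.items():
    print(phase,
          "CTA = %.2f," % cta(consumption, avail),
          "WCI = %.2f people per 1000 m3" % wci(population, avail))
```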

  19. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
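    A hedged sketch of the propagation scheme described above: empirical residual distributions for each model in the chain are resampled and applied at the corresponding step, yielding a distribution of predicted daily energy. The residual samples and the simplified model chain are illustrative, not the actual models or data used in the report.

```python
# Propagating empirical model residuals through a simplified PV model chain.
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical empirical residuals (relative errors) of each model step.
resid_poa  = rng.normal(0.00, 0.03, 500)    # plane-of-array irradiance model
resid_eff  = rng.normal(0.00, 0.01, 500)    # effective-irradiance model
resid_pwr  = rng.normal(0.00, 0.005, 500)   # cell-temperature / power model

ghi_day = 6.0        # measured daily insolation [kWh/m2], illustrative
system_eff = 0.17    # nominal conversion efficiency, illustrative
area = 1.6           # module area [m2], illustrative

n = 10000
energy = np.empty(n)
for i in range(n):
    poa = ghi_day * 1.1 * (1 + rng.choice(resid_poa))     # transposition step
    eff_irr = poa * 0.97 * (1 + rng.choice(resid_eff))    # soiling/reflection step
    energy[i] = eff_irr * area * system_eff * (1 + rng.choice(resid_pwr))

print("median daily energy: %.3f kWh" % np.median(energy))
print("relative 95%% interval half-width: %.1f%%" %
      (50 * (np.percentile(energy, 97.5) - np.percentile(energy, 2.5))
       / np.median(energy)))
```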

  20. Analysis of Globalization, the Planet and Education

    ERIC Educational Resources Information Center

    Tsegay, Samson Maekele

    2016-01-01

    Through the framework of theories analyzing globalization and education, this paper focuses on the intersection among globalization, the environment and education. This paper critically analyzes how globalization could affect environmental devastation, and explores the role of pedagogies that could foster planetary citizenship by exposing…

  1. Sensitivity of the global water cycle to the water-holding capacity of land

    SciTech Connect

    Milly, P.C.D.; Dunne, K.A. )

    1994-04-01

    The sensitivity of the global water cycle to the water-holding capacity of the plant-root zone of continental soils is estimated by simulations using a mathematical model of the general circulation of the atmosphere, with prescribed ocean surface temperatures and prescribed cloud. With an increase of the globally constant storage capacity, evaporation from the continents rises and runoff falls, because a high storage capacity enhances the ability of the soil to store water from periods of excess for later evaporation during periods of shortage. In addition, atmospheric feedbacks associated with higher precipitation and lower potential evaporation drive further changes in evaporation and runoff. Most changes in evaporation and runoff occur in the tropics and the northern middle-latitude rain belts. Global evaporation from land increases by 7 cm for each doubling of storage capacity. Sensitivity is negligible for capacity above 60 cm. In the tropics and in the extratropics, increased continental evaporation is split between increased continental precipitation and decreased convergence of atmospheric water vapor from ocean to land. In the tropics, this partitioning is strongly affected by induced circulation changes, which are themselves forced by changes in latent heating. In the northern middle and high latitudes, the increased continental evaporation moistens the atmosphere. This change in humidity of the atmosphere is greater above the continents than above the oceans, and the resulting reduction in the sea-land humidity gradient causes a decreased onshore transport of water vapor by transient eddies. Results here may have implications for problems in global hydrology and climate dynamics, including effects of water resource development on global precipitation, climatic control of plant rooting characteristics, climatic effects of tropical deforestation, and climate-model errors. 21 refs., 13 figs., 21 tabs.

  2. Limits to global and Australian temperature change this century based on expert judgment of climate sensitivity

    NASA Astrophysics Data System (ADS)

    Grose, Michael R.; Colman, Robert; Bhend, Jonas; Moise, Aurel F.

    2016-07-01

    The projected warming of surface air temperature at the global and regional scale by the end of the century is directly related to emissions and Earth's climate sensitivity. Projections are typically produced using an ensemble of climate models such as CMIP5; however, the range of climate sensitivity in models does not cover the entire range considered plausible by expert judgment. Of particular interest from a risk-management perspective are the lower-impact outcome associated with low climate sensitivity and the low-probability, high-impact outcomes associated with the top of the range. Here we scale climate model output to the limits of expert judgment of climate sensitivity to explore these limits. This scaling indicates an expanded range of projected change for each emissions pathway, including a much higher upper bound for both the globe and Australia. We find that a warming exceeding 2 °C since pre-industrial times is projected under high emissions for every model, even when scaled to the lowest estimate of sensitivity, and is possible under low emissions under most estimates of sensitivity. Although these are not quantitative projections, the results may be useful to inform thinking about the limits to change until the sensitivity can be more reliably constrained, or until this expanded range of possibilities can be explored in a more formal way. When viewing climate projections, accounting for these low-probability but high-impact outcomes in a risk management approach can complement the focus on the likely range of projections. They can also highlight the scale of the potential reduction in the range of projections, should tight constraints on climate sensitivity be established by future research.

  3. Effect of ice-albedo feedback on global sensitivity in a one-dimensional radiative-convective climate model

    NASA Technical Reports Server (NTRS)

    Wang, W.-C.; Stone, P. H.

    1980-01-01

    The feedback between the ice albedo and temperature is included in a one-dimensional radiative-convective climate model. The effect of this feedback on global sensitivity to changes in solar constant is studied for the current climate conditions. This ice-albedo feedback amplifies global sensitivity by 26 and 39%, respectively, for assumptions of fixed cloud altitude and fixed cloud temperature. The global sensitivity is not affected significantly if the latitudinal variations of mean solar zenith angle and cloud cover are included in the global model. The differences in global sensitivity between one-dimensional radiative-convective models and energy balance models are examined. It is shown that the models are in close agreement when the same feedback mechanisms are included. The one-dimensional radiative-convective model with ice-albedo feedback included is used to compute the equilibrium ice line as a function of solar constant.
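    A zero-dimensional energy-balance analogue of the experiment described above can illustrate the amplification: sensitivity to the solar constant is computed with and without a temperature-dependent (ice-albedo) planetary albedo. The albedo ramp and other parameter values are illustrative, not those of the radiative-convective model.

```python
# 0-D energy balance: solar-constant sensitivity with/without ice-albedo feedback.
import numpy as np
from scipy.optimize import brentq

SIGMA = 5.67e-8      # Stefan-Boltzmann constant [W m-2 K-4]
EPS = 0.61           # effective emissivity (crude greenhouse factor), illustrative

def albedo(T, feedback=True):
    if not feedback:
        return 0.30
    # albedo rises linearly from 0.30 at 290 K to 0.45 at 260 K (illustrative ramp)
    return float(np.clip(0.30 + (290.0 - T) * 0.005, 0.30, 0.45))

def equilibrium_T(S, feedback=True):
    f = lambda T: S / 4.0 * (1 - albedo(T, feedback)) - EPS * SIGMA * T ** 4
    return brentq(f, 200.0, 350.0)

S0 = 1361.0
for fb in (False, True):
    dT = equilibrium_T(1.01 * S0, fb) - equilibrium_T(S0, fb)
    print("ice-albedo feedback %s: dT for +1%% solar constant = %.2f K"
          % ("on " if fb else "off", dT))
```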

  4. Sensitivity of Photolysis Frequencies and Key Tropospheric Oxidants in a Global Model to Cloud Vertical Distributions and Optical Properties

    NASA Technical Reports Server (NTRS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-01-01

    Clouds affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of the radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global 3-D chemical transport model (GEOS-Chem). GEOS-Chem was driven with a series of meteorological archives (GEOS1- STRAT, GEOS-3 and GEOS-4) generated by the NASA Goddard Earth Observing System data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%), for GEOS1-STRAT, GEOS-3 and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of approx.15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper-tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments with GEOS-3, where the magnitude of 3-D CODs are progressively varied from -100% to 100%, predict only modest

  5. Sensitivity of photolysis frequencies and key tropospheric oxidants in a global model to cloud vertical distributions and optical properties

    NASA Astrophysics Data System (ADS)

    Liu, Hongyu; Crawford, James H.; Considine, David B.; Platnick, Steven; Norris, Peter M.; Duncan, Bryan N.; Pierce, Robert B.; Chen, Gao; Yantosca, Robert M.

    2009-05-01

    Clouds directly affect tropospheric photochemistry through modification of solar radiation that determines photolysis frequencies. As a follow-up study to our recent assessment of these direct radiative effects of clouds on tropospheric chemistry, this paper presents an analysis of the sensitivity of such effects to cloud vertical distributions and optical properties (cloud optical depths (CODs) and cloud single scattering albedo), in a global three-dimensional (3-D) chemical transport model. The model was driven with a series of meteorological archives (GEOS-1 in support of the Stratospheric Tracers of Atmospheric Transport mission, or GEOS1-STRAT, GEOS-3, and GEOS-4) generated by the NASA Goddard Earth Observing System (GEOS) data assimilation system. Clouds in GEOS1-STRAT and GEOS-3 have more similar vertical distributions (with substantially smaller CODs in GEOS1-STRAT) while those in GEOS-4 are optically much thinner in the tropical upper troposphere. We find that the radiative impact of clouds on global photolysis frequencies and hydroxyl radical (OH) is more sensitive to the vertical distribution of clouds than to the magnitude of column CODs. With random vertical overlap for clouds, the model calculated changes in global mean OH (J(O1D), J(NO2)) due to the radiative effects of clouds in June are about 0.0% (0.4%, 0.9%), 0.8% (1.7%, 3.1%), and 7.3% (4.1%, 6.0%) for GEOS1-STRAT, GEOS-3, and GEOS-4, respectively; the geographic distributions of these quantities show much larger changes, with maximum decrease in OH concentrations of ˜15-35% near the midlatitude surface. The much larger global impact of clouds in GEOS-4 reflects the fact that more solar radiation is able to penetrate through the optically thin upper tropospheric clouds, increasing backscattering from low-level clouds. Model simulations with each of the three cloud distributions all show that the change in the global burden of ozone due to clouds is less than 5%. Model perturbation experiments

  6. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    PubMed Central

    Curtis, Janelle M.R.

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along

  7. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    PubMed

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
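    In the same spirit as the global sensitivity analysis described above, the following numpy/scipy-only sketch varies demographic and habitat parameters of a toy stochastic population model simultaneously and uses rank correlations with predicted extinction risk as simple sensitivity measures. The toy model, parameter ranges, and correlation-based measure are illustrative assumptions, not GRIP 2.0 or the whitebark pine model.

```python
# Global sensitivity sketch: sample all parameters at once, correlate with risk.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
n_samples, n_reps, horizon = 400, 50, 50

# Parameters varied globally (illustrative ranges).
survival = rng.uniform(0.80, 0.98, n_samples)
fecundity = rng.uniform(0.05, 0.40, n_samples)
habitat = rng.uniform(0.2, 1.0, n_samples)        # fraction of suitable habitat
disease = rng.uniform(0.00, 0.10, n_samples)      # extra annual mortality

def extinction_risk(s, f, h, d):
    extinct = 0
    K = 500 * h                                    # carrying capacity scales with habitat
    for _ in range(n_reps):
        n = 100.0
        for _ in range(horizon):
            growth = s - d + f * h
            n = min(n * growth * np.exp(rng.normal(0, 0.15)), K)
            if n < 2:
                extinct += 1
                break
    return extinct / n_reps

risk = np.array([extinction_risk(*p)
                 for p in zip(survival, fecundity, habitat, disease)])

for name, x in [("survival", survival), ("fecundity", fecundity),
                ("habitat amount", habitat), ("disease mortality", disease)]:
    rho, _ = spearmanr(x, risk)
    print(f"{name:18s} rank correlation with extinction risk: {rho:+.2f}")
```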

  8. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
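
    As an illustration of the workflow the abstract describes (Weibull failure models plus Monte Carlo estimation of the probability of sufficiency), the following Python sketch uses invented shape/scale values, unit counts, and spare counts rather than any actual ISS ORU data:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def prob_of_sufficiency(shape, scale, n_units=20, n_spares=5,
                                mission_hours=8760.0, n_sims=10_000):
            """Estimate the probability that n_spares covers all failures of a
            population of n_units over mission_hours, with Weibull failure times."""
            # Draw one failure time per unit per simulation; a unit that fails
            # before the end of the mission consumes one spare.
            t_fail = stats.weibull_min.rvs(shape, scale=scale,
                                           size=(n_sims, n_units), random_state=rng)
            failures = (t_fail < mission_hours).sum(axis=1)
            return np.mean(failures <= n_spares)

        # Sweep the wear-out characteristic (Weibull shape): shape = 1 is the
        # intrinsic, memoryless case; larger values mean stronger wear-out.
        for shape in [1.0, 1.5, 2.0, 3.0]:
            p = prob_of_sufficiency(shape, scale=20_000.0)
            print(f"shape={shape:>3}: P(sufficiency) = {p:.3f}")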

  9. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
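
    The sensitivity phenomenon attributed here to non-normality can be illustrated with the epsilon-pseudospectrum: the set of complex z where the smallest singular value of (zI - A) falls below epsilon. A minimal NumPy sketch with a toy non-normal matrix (not the actual stability operators studied in the paper) is:

        import numpy as np

        # Toy non-normal matrix: eigenvalues are -1 and -2, but the large
        # off-diagonal term makes the spectrum highly sensitive to perturbations.
        A = np.array([[-1.0, 100.0],
                      [ 0.0,  -2.0]])

        def sigma_min(z, A):
            """Smallest singular value of (zI - A); points of the complex plane
            where this falls below eps belong to the eps-pseudospectrum of A."""
            return np.linalg.svd(z * np.eye(len(A)) - A, compute_uv=False)[-1]

        # Sample a grid of the complex plane around the spectrum.
        xs = np.linspace(-6, 4, 201)
        ys = np.linspace(-5, 5, 201)
        grid = np.array([[sigma_min(x + 1j * y, A) for x in xs] for y in ys])

        eps = 1e-1
        frac = (grid < eps).mean()
        print(f"fraction of sampled plane inside the {eps:g}-pseudospectrum: {frac:.3f}")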

  10. Global analysis of the immune response

    NASA Astrophysics Data System (ADS)

    Ribeiro, Leonardo C.; Dickman, Ronald; Bernardes, Américo T.

    2008-10-01

    The immune system may be seen as a complex system, characterized using tools developed in the study of such systems, for example, surface roughness and its associated Hurst exponent. We analyze densitometric (Panama blot) profiles of immune reactivity to classify individuals into groups with similar roughness statistics. We focus on a population of individuals living in a region in which malaria is endemic, as well as a control group from a disease-free region. Our analysis groups individuals according to the presence or absence of malaria symptoms and the number of malaria manifestations. Applied to the Panama blot data, our method proves more effective at discriminating between groups than principal-components analysis or super-paramagnetic clustering. Our findings provide evidence that some phenomena observed in the immune system can only be understood from a global point of view. We observe similar tendencies between experimental immune profiles and those of artificial profiles, obtained from an immune network model. The statistical entropy of the experimental profiles is found to exhibit variations similar to those observed in the Hurst exponent.
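
    The Hurst exponent used above to characterize profile roughness is commonly estimated by rescaled-range (R/S) analysis. A minimal Python sketch of that estimator, applied to synthetic series rather than Panama blot profiles, might look like:

        import numpy as np

        def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
            """Estimate the Hurst exponent of a 1-D profile via rescaled-range (R/S)
            analysis: the slope of log(R/S) against log(window size)."""
            x = np.asarray(x, dtype=float)
            log_n, log_rs = [], []
            for n in window_sizes:
                rs_values = []
                for start in range(0, len(x) - n + 1, n):
                    block = x[start:start + n]
                    z = np.cumsum(block - block.mean())
                    r = z.max() - z.min()          # range of cumulative deviations
                    s = block.std(ddof=1)          # block standard deviation
                    if s > 0:
                        rs_values.append(r / s)
                log_n.append(np.log(n))
                log_rs.append(np.log(np.mean(rs_values)))
            slope, _ = np.polyfit(log_n, log_rs, 1)
            return slope

        rng = np.random.default_rng(1)
        white_noise = rng.normal(size=4096)          # expected H near 0.5
        rough_walk = np.cumsum(rng.normal(size=4096))  # persistent series, H closer to 1
        print(f"H(white noise) ~ {hurst_rs(white_noise):.2f}")
        print(f"H(random walk) ~ {hurst_rs(rough_walk):.2f}")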

  11. Sensitivity of global tropical climate to land surface processes: Mean state and interannual variability

    SciTech Connect

    Ma, Hsi-Yen; Xiao, Heng; Mechoso, C. R.; Xue, Yongkang

    2013-03-01

    This study examines the sensitivity of global tropical climate to land surface processes (LSP) using an atmospheric general circulation model both uncoupled (with prescribed SSTs) and coupled to an oceanic general circulation model. The emphasis is on the interactive soil moisture and vegetation biophysical processes, which have a first-order influence on the surface energy and water budgets. The sensitivity to those processes is represented by the differences between model simulations, in which two land surface schemes are considered: 1) a simple land scheme that specifies surface albedo and soil moisture availability, and 2) the Simplified Simple Biosphere Model (SSiB), which allows for consideration of interactive soil moisture and vegetation biophysical processes. Observational datasets are also employed to assess the reality of model-revealed sensitivity. The mean state sensitivity to different LSP is stronger in the coupled mode, especially in the tropical Pacific. Furthermore, the seasonal cycle of SSTs in the equatorial Pacific, as well as ENSO frequency, amplitude, and locking to the seasonal cycle of SSTs, are significantly modified and more realistic with SSiB. This outstanding sensitivity of the atmosphere-ocean system develops through changes in the intensity of equatorial Pacific trades modified by convection over land. Our results further demonstrate that the direct impact of land-atmosphere interactions on the tropical climate is modified by feedbacks associated with perturbed oceanic conditions ("indirect effect" of LSP). The magnitude of this indirect effect is strong enough to suggest that comprehensive studies on the importance of LSP on the global climate have to be made in a system that allows for atmosphere-ocean interactions.

  12. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    PubMed

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based flux analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowledge of which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling in order to provide a significance ranking of metabolites to guide experimental measurements.
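
    As a toy illustration of the constraint-based (FBA) formulation that TMSA builds on, the sketch below maximizes an export flux in a three-reaction network subject to steady-state mass balance and flux bounds; the network and all numbers are invented for illustration and have nothing to do with the genome-scale models discussed in the paper:

        import numpy as np
        from scipy.optimize import linprog

        # Toy network: R1 (uptake -> A), R2 (A -> B), R3 (B -> biomass export).
        # Rows are metabolites [A, B]; columns are reactions [R1, R2, R3].
        S = np.array([[ 1.0, -1.0,  0.0],
                      [ 0.0,  1.0, -1.0]])

        c = np.array([0.0, 0.0, -1.0])            # maximize v3 == minimize -v3
        bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 flux units

        # Steady state: S v = 0, i.e. no net accumulation of A or B.
        res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
        print("optimal flux distribution:", res.x)  # expected [10, 10, 10]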

  13. Global resilience analysis of water distribution systems.

    PubMed

    Diao, Kegong; Sweetapple, Chris; Farmani, Raziyeh; Fu, Guangtao; Ward, Sarah; Butler, David

    2016-12-01

    Evaluating and enhancing resilience in water infrastructure is a crucial step towards more sustainable urban water management. As a prerequisite to enhancing resilience, a detailed understanding is required of the inherent resilience of the underlying system. Differing from traditional risk analysis, here we propose a global resilience analysis (GRA) approach that shifts the objective from analysing multiple and unknown threats to analysing the more identifiable and measurable system responses to extreme conditions, i.e. potential failure modes. GRA aims to evaluate a system's resilience to a possible failure mode regardless of the causal threat(s) (known or unknown, external or internal). The method is applied to test the resilience of four water distribution systems (WDSs) with various features to three typical failure modes (pipe failure, excess demand, and substance intrusion). The study reveals GRA provides an overview of a water system's resilience to various failure modes. For each failure mode, it identifies the range of corresponding failure impacts and reveals extreme scenarios (e.g. the complete loss of water supply with only 5% pipe failure, or still meeting 80% of demand despite over 70% of pipes failing). GRA also reveals that increased resilience to one failure mode may decrease resilience to another and increasing system capacity may delay the system's recovery in some situations. It is also shown that selecting an appropriate level of detail for hydraulic models is of great importance in resilience analysis. The method can be used as a comprehensive diagnostic framework to evaluate a range of interventions for improving system resilience in future studies.

  14. A Global Spectral Study of Stellar-Mass Black Holes with Unprecedented Sensitivity

    NASA Astrophysics Data System (ADS)

    García, Javier

    There are two well established populations of black holes: (i) stellar-mass black holes with masses in the range 5 to 30 solar masses, many millions of which are present in each galaxy in the universe, and (ii) supermassive black holes with masses in the range millions to billions of solar masses, which reside in the nucleus of most galaxies. Supermassive black holes play a leading role in shaping galaxies and are central to cosmology. However, they are hard to study because they are dim and they scarcely vary on a human timescale. Luckily, their variability and full range of behavior can be very effectively studied by observing their stellar-mass cousins, which display in miniature the full repertoire of a black hole over the course of a single year. The archive of data collected by NASA's Rossi X-ray Timing Explorer (RXTE) during its 16 year mission is of first importance for the study of stellar-mass black holes. While our ultimate goal is a complete spectral analysis of all the stellar-mass black hole data in the RXTE archive, the goal of this proposal is the global study of six of these black holes. The two key methodologies we bring to the study are: (1) Our recently developed calibration tool that increases the sensitivity of RXTE's detector by up to an order of magnitude; and (2) the leading X-ray spectral "reflection" models that are arguably the most effective means currently available for probing the effects of strong gravity near the event horizon of a black hole. For each of the six black holes, we will fit our models to all the archived spectral data and determine several key parameters describing the black hole and the 10-million-degree gas that surrounds it. Of special interest will be our measurement of the spin (or rate of rotation) of each black hole, which can be as high as tens of thousands of RPM. Profoundly, all the properties of an astronomical black hole are completely defined by specifying its spin and its mass. The main goal of this

  15. Sensitivity of Mid Holocene Global Climate to Changes in Vegetation Reconstructed From the Geologic Record

    NASA Astrophysics Data System (ADS)

    Diffenbaugh, N. S.; Sloan, L. C.

    2001-12-01

    The influence of land surface changes upon global and regional climate has been shown both for anthropogenic and non-anthropogenic changes in land surface distribution. Because validation of global climate models (GCMs) is dependent upon the use of accurate boundary conditions, and because changes in land surface distribution have been shown to have effects on climate in areas remote from those changes, we have tested the sensitivity of a GCM to a global Mid Holocene vegetation distribution reconstructed from the fossil record, a first for a 6 ka GCM run. Large areas of the globe exhibit statistically significant seasonal warming of 2 to 4 °C, with peak warming of 10 °C over the Middle East in June-July-August (JJA). The patterns of maximum warming over both Northern Asia and the Middle East strongly coincide with the patterns of maximum decrease in albedo in all seasons. Likewise, cooling of up to 4 °C over Northern Africa associated with the expansion of savanna and broadleaf evergreen forest also coincides with increases in surface heat flux of up to 35 W/m2 in March-April-May (MAM) and 60 W/m2 in JJA. At both the regional and global scale, the magnitude of vegetation forcing is equal to that of 6 ka orbital forcing, emphasizing the importance of accurate land surface distribution for both model validation and future climate prediction.

  16. Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change

    NASA Technical Reports Server (NTRS)

    Thompson, Anne M.; Stewart, Richard W.; Owens, Melody A.

    1989-01-01

    The sensitivities of tropospheric HO2 and hydrogen peroxide (H2O2) levels to increases in CH4, CO, and NO emissions and to changes in stratospheric O3 and tropospheric O3 and H2O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH4-CO-NO(x) emissions and global climate changes are used to predict HO2 and H2O2 changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH4 emissions will enhance H2O2; NO emissions will suppress H2O2 except in very low NO(x) regions. A global warming or stratospheric O3 depletion will add to H2O2. Hydrogen peroxide increases from 1980 to 2030 could be 100 percent or more in the urban boundary layer.

  17. The Hydrological Sensitivity to Global Warming and Solar Geoengineering Derived from Thermodynamic Constraints

    SciTech Connect

    Kleidon, Alex; Kravitz, Benjamin S.; Renner, Maik

    2015-01-16

    We derive analytic expressions of the transient response of the hydrological cycle to surface warming from an extremely simple energy balance model in which turbulent heat fluxes are constrained by the thermodynamic limit of maximum power. For a given magnitude of steady-state temperature change, this approach predicts the transient response as well as the steady-state change in surface energy partitioning and the hydrologic cycle. We show that the transient behavior of the simple model as well as the steady state hydrological sensitivities to greenhouse warming and solar geoengineering are comparable to results from simulations using highly complex models. Many of the global-scale hydrological cycle changes can be understood from a surface energy balance perspective, and our thermodynamically-constrained approach provides a physically robust way of estimating global hydrological changes in response to altered radiative forcing.

  18. Ensemble reconstruction constraints on the global carbon cycle sensitivity to climate.

    PubMed

    Frank, David C; Esper, Jan; Raible, Christoph C; Büntgen, Ulf; Trouet, Valerie; Stocker, Benjamin; Joos, Fortunat

    2010-01-28

    The processes controlling the carbon flux and carbon storage of the atmosphere, ocean and terrestrial biosphere are temperature sensitive and are likely to provide a positive feedback leading to amplified anthropogenic warming. Owing to this feedback, at timescales ranging from interannual to the 20-100-kyr cycles of Earth's orbital variations, warming of the climate system causes a net release of CO(2) into the atmosphere; this in turn amplifies warming. But the magnitude of the climate sensitivity of the global carbon cycle (termed gamma), and thus of its positive feedback strength, is under debate, giving rise to large uncertainties in global warming projections. Here we quantify the median gamma as 7.7 p.p.m.v. CO(2) per degrees C warming, with a likely range of 1.7-21.4 p.p.m.v. CO(2) per degrees C. Sensitivity experiments exclude significant influence of pre-industrial land-use change on these estimates. Our results, based on the coupling of a probabilistic approach with an ensemble of proxy-based temperature reconstructions and pre-industrial CO(2) data from three ice cores, provide robust constraints for gamma on the policy-relevant multi-decadal to centennial timescales. By using an ensemble of >200,000 members, quantification of gamma is not only improved, but also likelihoods can be assigned, thereby providing a benchmark for future model simulations. Although uncertainties do not at present allow exclusion of gamma calculated from any of ten coupled carbon-climate models, we find that gamma is about twice as likely to fall in the lowermost than in the uppermost quartile of their range. Our results are incompatibly lower (P < 0.05) than recent pre-industrial empirical estimates of approximately 40 p.p.m.v. CO(2) per degrees C (refs 6, 7), and correspondingly suggest approximately 80% less potential amplification of ongoing global warming.

  19. Quantifying PM2.5-meteorology sensitivities in a global climate model

    NASA Astrophysics Data System (ADS)

    Westervelt, D. M.; Horowitz, L. W.; Naik, V.; Tai, A. P. K.; Fiore, A. M.; Mauzerall, D. L.

    2016-10-01

    Climate change can influence fine particulate matter concentrations (PM2.5) through changes in air pollution meteorology. Knowledge of the extent to which climate change can exacerbate or alleviate air pollution in the future is needed for robust climate and air pollution policy decision-making. To examine the influence of climate on PM2.5, we use the Geophysical Fluid Dynamics Laboratory Coupled Model version 3 (GFDL CM3), a fully-coupled chemistry-climate model, combined with future emissions and concentrations provided by the four Representative Concentration Pathways (RCPs). For each of the RCPs, we conduct future simulations in which emissions of aerosols and their precursors are held at 2005 levels while other climate forcing agents evolve in time, such that only climate (and thus meteorology) can influence PM2.5 surface concentrations. We find a small increase in global, annual mean PM2.5 of about 0.21 μg m-3 (5%) for RCP8.5, a scenario with maximum warming. Changes in global mean PM2.5 are at a maximum in the fall and are mainly controlled by sulfate followed by organic aerosol with minimal influence of black carbon. RCP2.6 is the only scenario that projects a decrease in global PM2.5 with future climate changes, albeit only by -0.06 μg m-3 (1.5%) by the end of the 21st century. Regional and local changes in PM2.5 are larger, reaching upwards of 2 μg m-3 for polluted (eastern China) and dusty (western Africa) locations on an annually averaged basis in RCP8.5. Using multiple linear regression, we find that future PM2.5 concentrations are most sensitive to local temperature, followed by surface wind and precipitation. PM2.5 concentrations are robustly positively associated with temperature, while negatively related with precipitation and wind speed. Present-day (2006-2015) modeled sensitivities of PM2.5 to meteorological variables are evaluated against observations and found to agree reasonably well with observed sensitivities (within 10-50% over the
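
    The multiple linear regression used to rank PM2.5-meteorology sensitivities can be sketched as an ordinary least squares fit of PM2.5 on temperature, wind speed, and precipitation. The example below uses synthetic data whose coefficients merely mimic the signs reported in the abstract (positive for temperature, negative for wind and precipitation); it is not the GFDL CM3 output:

        import numpy as np

        rng = np.random.default_rng(42)
        n = 365  # one year of daily values (synthetic, for illustration only)

        # Synthetic "meteorology": temperature (K), wind speed (m/s), precipitation (mm/day)
        temperature = 288 + 8 * rng.standard_normal(n)
        wind = np.abs(3 + rng.standard_normal(n))
        precip = rng.gamma(shape=1.5, scale=2.0, size=n)

        # Synthetic PM2.5 with the qualitative signs reported in the abstract.
        pm25 = (12 + 0.4 * (temperature - 288) - 1.2 * wind - 0.5 * precip
                + rng.standard_normal(n))

        # Ordinary least squares: PM2.5 ~ intercept + T + wind + precip
        X = np.column_stack([np.ones(n), temperature, wind, precip])
        beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
        for name, b in zip(["intercept", "temperature", "wind", "precipitation"], beta):
            print(f"{name:>13}: {b:+.3f}")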

  20. Quantifying PM2.5-Meteorology Sensitivities in a Global Climate Model

    NASA Technical Reports Server (NTRS)

    Westervelt, D. M.; Horowitz, L. W.; Naik, V.; Tai, A. P. K.; Fiore, A. M.; Mauzerall, D. L.

    2016-01-01

    Climate change can influence fine particulate matter concentrations (PM2.5) through changes in air pollution meteorology. Knowledge of the extent to which climate change can exacerbate or alleviate air pollution in the future is needed for robust climate and air pollution policy decision-making. To examine the influence of climate on PM2.5, we use the Geophysical Fluid Dynamics Laboratory Coupled Model version 3 (GFDL CM3), a fully-coupled chemistry-climate model, combined with future emissions and concentrations provided by the four Representative Concentration Pathways (RCPs). For each of the RCPs, we conduct future simulations in which emissions of aerosols and their precursors are held at 2005 levels while other climate forcing agents evolve in time, such that only climate (and thus meteorology) can influence PM2.5 surface concentrations. We find a small increase in global, annual mean PM2.5 of about 0.21 μg m-3 (5%) for RCP8.5, a scenario with maximum warming. Changes in global mean PM2.5 are at a maximum in the fall and are mainly controlled by sulfate followed by organic aerosol with minimal influence of black carbon. RCP2.6 is the only scenario that projects a decrease in global PM2.5 with future climate changes, albeit only by -0.06 μg m-3 (1.5%) by the end of the 21st century. Regional and local changes in PM2.5 are larger, reaching upwards of 2 μg m-3 for polluted (eastern China) and dusty (western Africa) locations on an annually averaged basis in RCP8.5. Using multiple linear regression, we find that future PM2.5 concentrations are most sensitive to local temperature, followed by surface wind and precipitation. PM2.5 concentrations are robustly positively associated with temperature, while negatively related with precipitation and wind speed. Present-day (2006-2015) modeled sensitivities of PM2.5 to meteorological variables are evaluated against observations and found to agree reasonably well with observed sensitivities (within 10-50

  1. Global boundedness to a chemotaxis system with singular sensitivity and logistic source

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Zheng, Sining

    2017-02-01

    We consider the parabolic-parabolic Keller-Segel system with singular sensitivity and logistic source: u_t = Δu − χ∇·((u/v)∇v) + ru − μu², v_t = Δv − v + u, under homogeneous Neumann boundary conditions in a smooth bounded domain Ω ⊂ ℝ², with χ, μ > 0 and r ∈ ℝ. It is proved that the system admits globally bounded classical solutions if r > χ²/4 for 0 < χ ≤ 2, or r > χ − 1 for χ > 2.

  2. Development of a Pressure Sensitive Paint System for Measuring Global Surface Pressures on Rotorcraft Blades

    NASA Technical Reports Server (NTRS)

    Watkins, A. Neal; Leighty, Bradley D.; Lipford, William E.; Wong, Oliver D.; Oglesby, Donald M.; Ingram, JoAnne L.

    2007-01-01

    This paper will describe the results from a proof of concept test to examine the feasibility of using Pressure Sensitive Paint (PSP) to measure global surface pressures on rotorcraft blades in hover. The test was performed using the U.S. Army 2-meter Rotor Test Stand (2MRTS) and 15% scale swept rotor blades. Data were collected from five blades using both the intensity- and lifetime-based approaches. This paper will also outline several modifications and improvements that are underway to develop a system capable of measuring pressure distributions on up to four blades simultaneously at hover and forward flight conditions.

  3. Variability in visual cortex size reflects tradeoff between local orientation sensitivity and global orientation modulation

    PubMed Central

    Song, Chen; Schwarzkopf, Dietrich S.; Rees, Geraint

    2013-01-01

    The surface area of early visual cortices varies several fold across healthy adult humans and is genetically heritable. But the functional consequences of this anatomical variability are still largely unexplored. Here we show that interindividual variability in human visual cortical surface area reflects a tradeoff between sensitivity to visual details and susceptibility to visual context. Specifically, individuals with larger primary visual cortices can discriminate finer orientation differences, whereas individuals with smaller primary visual cortices experience stronger perceptual modulation by global orientation contexts. This anatomically correlated tradeoff between discrimination sensitivity and contextual modulation of orientation perception, however, does not generalize to contrast perception or luminance perception. Neural field simulations based on a scaling of intracortical circuits reproduce our empirical observations. Together our findings reveal a feature-specific shift in the scope of visual perception from context-oriented to detail-oriented with increased visual cortical surface area. PMID:23887643

  4. Global observations of cloud-sensitive aerosol loadings in low-level marine clouds

    NASA Astrophysics Data System (ADS)

    Andersen, H.; Cermak, J.; Fuchs, J.; Schwarz, K.

    2016-11-01

    Aerosol-cloud interaction is a key component of the Earth's radiative budget and hydrological cycle, but many facets of its mechanisms are not yet fully understood. In this study, global satellite-derived aerosol and cloud products are used to identify at what aerosol loading cloud droplet size shows the greatest sensitivity to changes in aerosol loading (ACSmax). While, on average, cloud droplet size is most sensitive at relatively low aerosol loadings, distinct spatial and temporal patterns exist. Possible determinants for these are identified with reanalysis data. The magnitude of ACSmax is found to be constrained by the total columnar water vapor. Seasonal patterns of water vapor are reflected in the seasonal patterns of ACSmax. Also, situations with enhanced turbulent mixing are connected to higher ACSmax, possibly due to intensified aerosol activation. Of the analyzed aerosol species, dust seems to impact ACSmax the most, as dust particles increase the retrieved aerosol loading without substantially increasing the concentration of cloud condensation nuclei.

  5. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures while providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of the complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean width of the trabeculae, area of the intertrabecular space, and area of the trabeculae. Finally, the area of the intertrabecular space was selected as a parameter to estimate an optimal bone cement volume and it was found that there was a strong linear relationship between these 2 variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball and the size of the region of interest were selected for the sensitivity analysis. As the threshold level was varied among 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball was varied among 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest was varied among 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2

  6. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks only at the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of the dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is a smooth curve and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not a smooth one and would be obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary
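
    For reference, the standard global gamma evaluation the abstract starts from can be stated compactly: for each reference point, gamma is the minimum over evaluated points of sqrt((distance/DTA)² + (dose difference/DD)²), and a point passes when gamma ≤ 1. A 1-D Python sketch with an error-function penumbra and hypothetical 0.5 mm / 0.5% errors (not the profiles used in the paper) is:

        import numpy as np
        from scipy.special import erf

        def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=1.0, dd_pct=1.0):
            """Global 1-D gamma index: for each reference point, the minimum of the
            combined distance-to-agreement / dose-difference metric over all
            evaluated points. A point passes when gamma <= 1."""
            dd_abs = dd_pct / 100.0 * d_ref.max()   # global dose-difference criterion
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dist_term = (x_eval - xr) / dta_mm
                dose_term = (d_eval - dr) / dd_abs
                gammas[i] = np.sqrt(dist_term**2 + dose_term**2).min()
            return gammas

        # Reference: error-function-shaped penumbra; evaluated: the same profile
        # shifted by 0.5 mm and scaled by 0.5% (hypothetical errors).
        x = np.linspace(-10, 10, 401)               # mm
        d_ref = 50 * (1 - erf(x / 3.0)) + 1.0       # arbitrary dose units
        d_eval = 1.005 * (50 * (1 - erf((x - 0.5) / 3.0)) + 1.0)

        g = gamma_index_1d(x, d_ref, x, d_eval)
        print(f"pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")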

  7. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEM) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question is derived from simply demonstrating the differences in release risk areas and intensities by applying identical models to DEMs with different properties, and then extending this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach the work on DEM resolution-dependent 'slope spectra' is being considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  8. Global stability analysis of electrified jets

    NASA Astrophysics Data System (ADS)

    Rivero-Rodriguez, Javier; Pérez-Saborid, Miguel

    2014-11-01

    Electrospinning is a common process used to produce micro- and nano-scale polymeric fibers. In this technique, the whipping mode of a very thin electrified jet generated in an electrospray device is enhanced in order to increase its elongation. In this work, we use a theoretical Eulerian model that describes the kinematics and dynamics of the midline of the jet, its radius and convective velocity. The model equations result from balances of mass, linear and angular momentum applied to any differential slice of the jet together with constitutive laws for viscous forces and moments, as well as appropriate expressions for capillary and electrical forces. As a first step towards computing the complete nonlinear, transient dynamics of the electrified jet, we have performed a global stability analysis of the aforementioned equations and compared the results with experimental data obtained by Guillaume et al. [2011] and Guerrero-Millán et al. [2014]. The support of the Ministry of Science and Innovation of Spain (Project DPI 2010-20450-C03-02) is acknowledged.

  9. Stability of fundamental couplings: A global analysis

    NASA Astrophysics Data System (ADS)

    Martins, C. J. A. P.; Pinho, A. M. M.

    2017-01-01

    Astrophysical tests of the stability of fundamental couplings are becoming an increasingly important probe of new physics. Motivated by the recent availability of new and stronger constraints we update previous works testing the consistency of measurements of the fine-structure constant α and the proton-to-electron mass ratio μ =mp/me (mostly obtained in the optical/ultraviolet) with combined measurements of α , μ and the proton gyromagnetic ratio gp (mostly in the radio band). We carry out a global analysis of all available data, including the 293 archival measurements of Webb et al. and 66 more recent dedicated measurements, and constraining both time and spatial variations. While nominally the full data sets show a slight statistical preference for variations of α and μ (at up to two standard deviations), we also find several inconsistencies between different subsets, likely due to hidden systematics and implying that these statistical preferences need to be taken with caution. The statistical evidence for a spatial dipole in the values of α is found at the 2.3 sigma level. Forthcoming studies with facilities such as ALMA and ESPRESSO should clarify these issues.

  10. Global analysis of photosynthesis transcriptional regulatory networks.

    PubMed

    Imam, Saheed; Noguera, Daniel R; Donohue, Timothy J

    2014-12-01

    Photosynthesis is a crucial biological process that depends on the interplay of many components. This work analyzed the gene targets for 4 transcription factors: FnrL, PrrA, CrpK and MppG (RSP_2888), which are known or predicted to control photosynthesis in Rhodobacter sphaeroides. Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) identified 52 operons under direct control of FnrL, illustrating its regulatory role in photosynthesis, iron homeostasis, nitrogen metabolism and regulation of sRNA synthesis. Using global gene expression analysis combined with ChIP-seq, we mapped the regulons of PrrA, CrpK and MppG. PrrA regulates ∼34 operons encoding mainly photosynthesis and electron transport functions, while CrpK, a previously uncharacterized Crp-family protein, regulates genes involved in photosynthesis and maintenance of iron homeostasis. Furthermore, CrpK and FnrL share similar DNA binding determinants, possibly explaining our observation of the ability of CrpK to partially compensate for the growth defects of a ΔFnrL mutant. We show that the Rrf2 family protein, MppG, plays an important role in photopigment biosynthesis, as part of an incoherent feed-forward loop with PrrA. Our results reveal a previously unrealized, high degree of combinatorial regulation of photosynthetic genes and significant cross-talk between their transcriptional regulators, while illustrating previously unidentified links between photosynthesis and the maintenance of iron homeostasis.

  11. Determinants for global cargo analysis tools

    NASA Astrophysics Data System (ADS)

    Wilmoth, M.; Kay, W.; Sessions, C.; Hancock, M.

    2007-04-01

    The purpose of Global TRADER (GT) is not only to gather and query supply-chain transactional data for facts but also to analyze that data for hidden knowledge for the purpose of useful and meaningful pattern prediction. The application of advanced analytics provides benefits beyond simple information retrieval from GT, including computer-aided detection of useful patterns and associations. Knowledge discovery, offering a breadth and depth of analysis unattainable by manual processes, involves three components: repository structures, analytical engines, and user tools and reports. For a large and complex domain like supply-chains, there are many stages to developing the most advanced analytic capabilities; however, significant benefits accrue as components are incrementally added. These benefits include detecting emerging patterns; identifying new patterns; fusing data; creating models that can learn and predict behavior; and identifying new features for future tools. The GT Analyst Toolset was designed to overcome a variety of constraints, including lack of third party data, partial data loads, non-cleansed data (non-disambiguation of parties, misspellings, transpositions, etc.), and varying levels of analyst experience and expertise. The end result was a set of analytical tools that are flexible, extensible, tunable, and able to support a wide range of analyst demands.

  12. Economic Analysis and Assumptions in Global Education.

    ERIC Educational Resources Information Center

    Miller, Steven L.

    Economic educators recognize the importance of a global perspective, at least in part because the international sector has become more important over the past few decades. The application of economic principles calls into question some assumptions that appear to be common among members of the global education movement. That these assumptions might…

  13. Global bioenergy potentials from agricultural land in 2050: Sensitivity to climate change, diets and yields

    PubMed Central

    Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K.

    2011-01-01

    There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a “food first” approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO2 on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y−1, depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply

  14. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching and selecting the best lair by performing a random walk to find a new lair. Owing to the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is realized by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pup behavior to find the best lair and provide a new algorithm to be used in global
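
    A heavily simplified sketch of the two-state search idea (Brownian steps in the normal state, heavy-tailed Levy-like steps in the urgent state) is given below. It is not the authors' algorithm: the population, lair-selection, and predator-noise mechanics are collapsed into a single agent with a stagnation-triggered state switch, and all parameters are illustrative:

        import numpy as np

        def sphere(x):
            return np.sum(x**2)

        def two_state_search(f, dim=5, iters=2000, seed=0):
            """Minimal sketch in the spirit of Ringed Seal Search: Brownian
            (normal-state) steps for local exploitation, heavy-tailed Levy-like
            (urgent-state) steps for global exploration. The state switch here
            is triggered by stagnation, a stand-in for the predator-noise trigger."""
            rng = np.random.default_rng(seed)
            best_x = rng.uniform(-5, 5, dim)
            best_f = f(best_x)
            urgent, stagnation = False, 0
            for _ in range(iters):
                if urgent:
                    step = rng.standard_cauchy(dim) * 0.5    # heavy-tailed jump
                else:
                    step = rng.normal(scale=0.1, size=dim)   # Brownian local move
                candidate = best_x + step
                fc = f(candidate)
                if fc < best_f:
                    best_x, best_f = candidate, fc
                    urgent, stagnation = False, 0
                else:
                    stagnation += 1
                    if stagnation > 50:                      # stagnation => urgent state
                        urgent, stagnation = True, 0
            return best_x, best_f

        x_opt, f_opt = two_state_search(sphere)
        print(f"best objective found: {f_opt:.3e}")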

  15. Ringed Seal Search for Global Optimization via a Sensitive Search Model

    PubMed Central

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching and selecting the best lair by performing a random walk to find a new lair. Owing to the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In an urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Levy walk. The switch between these two states is realized by the random noise emitted by predators. The algorithm keeps switching between normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pup behavior to find the best lair and provide a new algorithm to be used in global

  16. Global bioenergy potentials from agricultural land in 2050: Sensitivity to climate change, diets and yields.

    PubMed

    Haberl, Helmut; Erb, Karl-Heinz; Krausmann, Fridolin; Bondeau, Alberte; Lauk, Christian; Müller, Christoph; Plutzar, Christoph; Steinberger, Julia K

    2011-12-01

    There is a growing recognition that the interrelations between agriculture, food, bioenergy, and climate change have to be better understood in order to derive more realistic estimates of future bioenergy potentials. This article estimates global bioenergy potentials in the year 2050, following a "food first" approach. It presents integrated food, livestock, agriculture, and bioenergy scenarios for the year 2050 based on a consistent representation of FAO projections of future agricultural development in a global biomass balance model. The model discerns 11 regions, 10 crop aggregates, 2 livestock aggregates, and 10 food aggregates. It incorporates detailed accounts of land use, global net primary production (NPP) and its human appropriation as well as socioeconomic biomass flow balances for the year 2000 that are modified according to a set of scenario assumptions to derive the biomass potential for 2050. We calculate the amount of biomass required to feed humans and livestock, considering losses between biomass supply and provision of final products. Based on this biomass balance as well as on global land-use data, we evaluate the potential to grow bioenergy crops and estimate the residue potentials from cropland (forestry is outside the scope of this study). We assess the sensitivity of the biomass potential to assumptions on diets, agricultural yields, cropland expansion and climate change. We use the dynamic global vegetation model LPJmL to evaluate possible impacts of changes in temperature, precipitation, and elevated CO(2) on agricultural yields. We find that the gross (primary) bioenergy potential ranges from 64 to 161 EJ y(-1), depending on climate impact, yields and diet, while the dependency on cropland expansion is weak. We conclude that food requirements for a growing world population, in particular feed required for livestock, strongly influence bioenergy potentials, and that integrated approaches are needed to optimize food and bioenergy supply.

  17. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    PubMed

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
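
    The first-order (main effect) indices produced by a global sensitivity analysis of this kind can be estimated with a Sobol pick-and-freeze scheme. The sketch below applies that estimator to a made-up three-input habitat suitability function; the functional form and input names are placeholders, not the SAV HSI models from the study:

        import numpy as np

        def hsi(x):
            """Toy habitat suitability index: geometric mean of three suitability
            responses (depth, salinity, light) on [0, 1] inputs. Hypothetical form."""
            depth, salinity, light = x[..., 0], x[..., 1], x[..., 2]
            s_depth = np.exp(-((depth - 0.3) / 0.2) ** 2)
            s_salin = 1.0 / (1.0 + np.exp(10 * (salinity - 0.6)))
            s_light = light
            return (s_depth * s_salin * s_light) ** (1.0 / 3.0)

        def first_order_sobol(model, dim, n=50_000, seed=0):
            """First-order Sobol indices via the Saltelli pick-and-freeze estimator."""
            rng = np.random.default_rng(seed)
            A = rng.uniform(size=(n, dim))
            B = rng.uniform(size=(n, dim))
            fA, fB = model(A), model(B)
            var = np.var(np.concatenate([fA, fB]))
            indices = []
            for i in range(dim):
                AB = A.copy()
                AB[:, i] = B[:, i]          # freeze all inputs except input i
                indices.append(np.mean(fB * (model(AB) - fA)) / var)
            return np.array(indices)

        S1 = first_order_sobol(hsi, dim=3)
        for name, s in zip(["depth", "salinity", "light"], S1):
            print(f"S1[{name}] = {s:.2f}")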

  18. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  19. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    PubMed Central

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust

  20. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.

  1. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.

  2. GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis

    NASA Astrophysics Data System (ADS)

    Kennedy, Christopher Brandon

    model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher order response sensitivity profiles is a natural area for future research.

  3. Changing carbon cycle: a global analysis

    SciTech Connect

    Trabalka, J.R.; Reichle, D.E.

    1986-01-01

    An attempt is made to examine current knowledge about the fluxes, sources, and sinks in the global carbon cycle, as well as our ability to predict changes in atmospheric CO2 concentration resulting from anthropogenic influences. The reader will find authoritative discussions of: past and expected releases of CO2 from fossil fuels; the historical record and implications of atmospheric CO2 increases; isotopic and geological records of past carbon cycle processes; the role of the oceans in the global carbon cycle; the influence of the world biosphere on changes in atmospheric CO2 levels; and, evidence linking the components of the global carbon cycle.

  4. The sensitivity of ozone and fine particulate matter concentrations to global change at different spatiotemporal scales

    NASA Astrophysics Data System (ADS)

    Racherla, Pavan Nandan

    Ozone (O3) and fine particulate matter (PM) are harmful to human health. Changes in climate and anthropogenic emissions due to global change will affect concentrations of O3 and fine PM. These effects are not well understood, however. We perform a suite of simulations using an integrated model of global climate, tropospheric gas-phase chemistry, and aerosols to investigate the effects of global change on O3 and fine PM at different spatiotemporal scales ranging from the global annual-average concentrations to regional (e.g., United States) air pollution episodes. One major consequence of climate change is a lengthening of the O3 season over the eastern U.S. to include late spring and early fall months. Climate change is also predicted to increase the severity and frequency of O3 episodes over much of the eastern U.S. We found that U.S. O3 and fine PM are sensitive first and foremost to U.S. anthropogenic emissions changes. However, the effect of climate change is very sensitive to the prevalent domestic anthropogenic emissions, and it increases strongly with emissions, thereby making it important to factor climate change into air quality planning. The reductions in domestic emissions will, therefore, have the added benefit of minimized climate effects. Climate change affects fine PM sulfate and nitrate concentrations the most. Substantial increases of up to 2 μg m-3 in the July-average sulfate concentrations were predicted in many polluted regions in the eastern U.S. Higher NOx and ammonia emissions could negate the benefits of significant SO2 emissions reductions vis-a-vis the annual-average PM2.5 standard for several areas in the Northeast and Midwest U.S. Simultaneous reductions in SO2 and NOx emissions, however, will help bring most of the eastern U.S. into compliance with the current annual-average PM2.5 standard. If the U.S. O3 standard were to change from the current 80 ppbv to 55 ppbv (which is the case in many European countries), the increased O3

  5. Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling

    SciTech Connect

    Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis

    2001-10-01

    A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.
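
    A hedged sketch of one simple surrogate-based sensitivity measure (the HES study's surrogates, variables, and data are not reproduced; the response function and parameter ranges below are invented): fit a linear surrogate to Monte Carlo samples and use squared standardized regression coefficients as approximate first-order sensitivity indices.

        import numpy as np

        rng = np.random.default_rng(1)

        def system_variability(battery_mwh, ramp_rate, cloud_factor):
            """Invented stand-in for a dynamic HES variability metric (not the paper's model)."""
            return 1.0 / (1.0 + battery_mwh) + 0.3 * ramp_rate + 0.1 * cloud_factor ** 2

        names  = ["battery_mwh", "ramp_rate", "cloud_factor"]
        lo, hi = np.array([1.0, 0.0, 0.0]), np.array([20.0, 1.0, 1.0])
        X = rng.uniform(lo, hi, size=(2000, 3))
        y = system_variability(*X.T)

        # Linear surrogate y ~ b0 + sum_i b_i x_i via least squares.
        G = np.column_stack([np.ones(len(X)), X])
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)

        # Squared standardized regression coefficients approximate first-order
        # sensitivity indices when the surrogate captures most of the variance.
        src2 = (coef[1:] * X.std(axis=0) / y.std()) ** 2
        for name, s in zip(names, src2):
            print(f"{name}: SRC^2 = {s:.2f}")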

  6. Global O3-CO Correlations in a Global Model During July-August: Evaluation with TES Satellite Observations and Sensitivity to Emissions

    NASA Astrophysics Data System (ADS)

    Choi, H.; Liu, H.; Crawford, J. H.; Considine, D. B.; Allen, D. J.; Duncan, B. N.; Rodriguez, J. M.; Strahan, S. E.; Damon, M.; Steenrod, S. D.; Zhang, L.; Liu, X.

    2013-12-01

    We examine global mid-tropospheric (619 hPa) ozone-carbon monoxide (O3-CO) correlations and their sensitivity to emissions during July-August 2005 in the Global Modeling Initiative (GMI) chemistry and transport model driven by the Modern-Era Retrospective Analysis for Research and Applications (MERRA) meteorological data set. We evaluate the simulated O3 with climatological O3 profiles from ozonesonde measurements and satellite tropospheric O3 columns. Model O3-CO correlations are (1) positive in the Northern Hemisphere continental outflow regions with large dO3/dCO enhancement ratios, and in the southern African westerly outflow region and Indonesia with small dO3/dCO enhancement ratios; and (2) negative over the Asian continent (including the Tibetan Plateau), Middle East, northern and central Africa, and tropical and subtropical deep convective regions. These patterns are consistent with those derived from collocated measurements of O3 and CO from the Tropospheric Emission Spectrometer (TES) on board NASA's Aura satellite, except over the tropical Atlantic and Pacific. Model sensitivity experiments indicate that fossil fuel emissions are responsible for the positive O3-CO correlations in major continental outflow regions and Europe. Biomass burning emissions lead to the positive correlations in the Southern Hemisphere mid-high latitudes. Biogenic emissions make important contributions to the negative O3-CO correlations over the tropical eastern Pacific. Lightning NOx emissions significantly reduce both the positive O3-CO correlations at mid-high latitudes and the negative correlations in the tropics. The corresponding chemical and transport processes will be discussed.
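
    A hedged sketch of the diagnostic itself (synthetic numbers, not GMI or TES data): for collocated mid-tropospheric O3 and CO samples in a grid cell, the correlation coefficient and the dO3/dCO enhancement ratio follow from a simple linear regression.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic July-August samples for one grid cell (ppbv); not model or satellite data.
        co = rng.normal(90.0, 15.0, size=60)
        o3 = 30.0 + 0.4 * (co - co.mean()) + rng.normal(0.0, 3.0, size=60)

        r = np.corrcoef(o3, co)[0, 1]                # O3-CO correlation coefficient
        slope, intercept = np.polyfit(co, o3, 1)     # dO3/dCO enhancement ratio (ppbv/ppbv)
        print(f"r = {r:.2f}, dO3/dCO = {slope:.2f} ppbv/ppbv")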

  7. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°C/75°C, -55°C/100°C, and -55°C/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of the stress/strain in the CCGA assembly under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  8. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  9. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries such as the USA, UK, and Australia. To the best of our knowledge, however, this is the first time it has been applied in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
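
    For readers unfamiliar with DEA, the following is a hedged, minimal sketch of the input-oriented CCR envelopment model (one common DEA formulation; the departments, inputs, and outputs below are invented and are not the IIT Roorkee data):

        import numpy as np
        from scipy.optimize import linprog

        # Invented example: 4 departments, 2 inputs (faculty, budget), 2 outputs (papers, graduates).
        X = np.array([[10, 5.0], [12, 6.0], [8, 7.0], [15, 4.0]], dtype=float)   # inputs
        Y = np.array([[30, 40], [28, 55], [35, 30], [20, 60]], dtype=float)      # outputs
        n = len(X)

        def ccr_efficiency(o):
            """Input-oriented CCR envelopment model for department o: minimize theta."""
            c = np.r_[1.0, np.zeros(n)]                        # decision variables: [theta, lambda_1..n]
            A_in  = np.c_[-X[o][:, None], X.T]                 # sum_j lam_j x_ij <= theta * x_io
            A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]     # sum_j lam_j y_rj >= y_ro
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(X.shape[1]), -Y[o]],
                          bounds=[(None, None)] + [(0, None)] * n)
            return res.x[0]

        for o in range(n):
            print(f"department {o}: technical efficiency = {ccr_efficiency(o):.2f}")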

  10. Finite-frequency sensitivity kernels for global seismic wave propagation based upon adjoint methods

    NASA Astrophysics Data System (ADS)

    Liu, Qinya; Tromp, Jeroen

    2008-07-01

    We determine adjoint equations and Fréchet kernels for global seismic wave propagation based upon a Lagrange multiplier method. We start from the equations of motion for a rotating, self-gravitating earth model initially in hydrostatic equilibrium, and derive the corresponding adjoint equations that involve motions on an earth model that rotates in the opposite direction. Variations in the misfit function χ may then be expressed as δχ = ∫V Km δlnm d³x + ∫Σ Kd δlnd d²x + ∫ΣFS K∇d · ∇Σδlnd d²x, where δlnm = δm/m denotes relative model perturbations in the volume V, δlnd denotes relative topographic variations on solid-solid or fluid-solid boundaries Σ, and ∇Σδlnd denotes surface gradients in relative topographic variations on fluid-solid boundaries ΣFS. The 3-D Fréchet kernel Km determines the sensitivity to model perturbations δlnm, and the 2-D kernels Kd and K∇d determine the sensitivity to topographic variations δlnd. We also demonstrate how anelasticity may be incorporated within the framework of adjoint methods. Finite-frequency sensitivity kernels are calculated by simultaneously computing the adjoint wavefield forward in time and reconstructing the regular wavefield backward in time. Both the forward and adjoint simulations are based upon a spectral-element method. We apply the adjoint technique to generate finite-frequency traveltime kernels for global seismic phases (P, Pdiff, PKP, S, SKS, depth phases, surface-reflected phases, surface waves, etc.) in both 1-D and 3-D earth models. For 1-D models these adjoint-generated kernels generally agree well with results obtained from ray-based methods. However, adjoint methods do not have the same theoretical limitations as ray-based methods, and can produce sensitivity kernels for any given phase in any 3-D earth model. The Fréchet kernels presented in this paper illustrate the sensitivity of seismic observations to structural parameters and topography on internal discontinuities. These kernels form the basis of future 3-D tomographic inversions.

  11. Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health

    PubMed Central

    2010-01-01

    It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all. PMID:20849605

  12. Is globalization healthy: a statistical indicator analysis of the impacts of globalization on health.

    PubMed

    Martens, Pim; Akin, Su-Mia; Maud, Huynen; Mohsin, Raza

    2010-09-17

    It is clear that globalization is something more than a purely economic phenomenon manifesting itself on a global scale. Among the visible manifestations of globalization are the greater international movement of goods and services, financial capital, information and people. In addition, there are technological developments, more transboundary cultural exchanges, facilitated by the freer trade of more differentiated products as well as by tourism and immigration, changes in the political landscape and ecological consequences. In this paper, we link the Maastricht Globalization Index with health indicators to analyse if more globalized countries are doing better in terms of infant mortality rate, under-five mortality rate, and adult mortality rate. The results indicate a positive association between a high level of globalization and low mortality rates. In view of the arguments that globalization provides winners and losers, and might be seen as a disequalizing process, we should perhaps be careful in interpreting the observed positive association as simple evidence that globalization is mostly good for our health. It is our hope that a further analysis of health impacts of globalization may help in adjusting and optimising the process of globalization on every level in the direction of a sustainable and healthy development for all.

  13. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors into the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  14. Global Human Settlement Analysis for Disaster Risk Reduction

    NASA Astrophysics Data System (ADS)

    Pesaresi, M.; Ehrlich, D.; Ferri, S.; Florczyk, A.; Freire, S.; Haag, F.; Halkia, M.; Julea, A. M.; Kemper, T.; Soille, P.

    2015-04-01

    The Global Human Settlement Layer (GHSL) is supported by the European Commission, Joint Research Center (JRC) in the frame of its institutional research activities. The scope of the GHSL is to develop, test, and apply the technologies and analysis methods integrated in the JRC Global Human Settlement analysis platform for applications in support of global disaster risk reduction (DRR) initiatives and regional analysis in the frame of the European Cohesion policy. The GHSL analysis platform uses geospatial data, primarily remotely sensed imagery and population data. GHSL also cooperates with the Group on Earth Observation on SB-04-Global Urban Observation and Information, with various international partners, and with World Bank and United Nations agencies. Some preliminary results integrating global human settlement information extracted from Landsat data records of the last 40 years and population data are presented.

  15. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  16. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  17. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method that gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.

  18. Global sensitivity of high-resolution estimates of crop water footprint

    NASA Astrophysics Data System (ADS)

    Tuninetti, Marta; Tamea, Stefania; D'Odorico, Paolo; Laio, Francesco; Ridolfi, Luca

    2015-10-01

    Most of the human appropriation of freshwater resources is for agriculture. Water availability is a major constraint to mankind's ability to produce food. The notion of virtual water content (VWC), also known as crop water footprint, provides an effective tool to investigate the linkage between food and water resources as a function of climate, soil, and agricultural practices. The spatial variability in the virtual water content of crops is here explored, disentangling its dependency on climate and crop yields and assessing the sensitivity of VWC estimates to parameter variability and uncertainty. Here we calculate the virtual water content of four staple crops (i.e., wheat, rice, maize, and soybean) for the entire world developing a high-resolution (5 × 5 arc min) model, and we evaluate the VWC sensitivity to input parameters. We find that food production almost entirely depends on green water (>90%), but, when applied, irrigation makes crop production more water efficient, thus requiring less water. The spatial variability of the VWC is mostly controlled by the spatial patterns of crop yields with an average correlation coefficient of 0.83. The results of the sensitivity analysis show that wheat is most sensitive to the length of the growing period, rice to reference evapotranspiration, maize and soybean to the crop planting date. The VWC sensitivity varies not only among crops, but also across the harvested areas of the world, even at the subnational scale.
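
    The virtual water content itself is commonly defined (a hedged sketch of the standard definition; the paper's exact model is not reproduced here) as crop water use over the growing period divided by crop yield:

        VWC\,[\mathrm{m^3\,t^{-1}}] = \frac{10\,\sum_{\text{growing period}} ET\,[\mathrm{mm}]}{Y\,[\mathrm{t\,ha^{-1}}]}

    where the factor 10 converts a water depth in mm over one hectare into cubic metres.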

  19. Sensitivity Analysis of Situational Awareness Measures

    NASA Technical Reports Server (NTRS)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that indisputably affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine the sensitivity not only within a measure, but also between the measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  20. Alanine and proline content modulate global sensitivity to discrete perturbations in disordered proteins.

    PubMed

    Perez, Romel B; Tischer, Alexander; Auton, Matthew; Whitten, Steven T

    2014-12-01

    Molecular transduction of biological signals is understood primarily in terms of the cooperative structural transitions of protein macromolecules, providing a mechanism through which discrete local structure perturbations affect global macromolecular properties. The recognition that proteins lacking tertiary stability, commonly referred to as intrinsically disordered proteins (IDPs), mediate key signaling pathways suggests that protein structures without cooperative intramolecular interactions may also have the ability to couple local and global structure changes. Presented here are results from experiments that measured and tested the ability of disordered proteins to couple local changes in structure to global changes in structure. Using the intrinsically disordered N-terminal region of the p53 protein as an experimental model, a set of proline (PRO) and alanine (ALA) to glycine (GLY) substitution variants were designed to modulate backbone conformational propensities without introducing non-native intramolecular interactions. The hydrodynamic radius (R(h)) was used to monitor changes in global structure. Circular dichroism spectroscopy showed that the GLY substitutions decreased polyproline II (PP(II)) propensities relative to the wild type, as expected, and fluorescence methods indicated that substitution-induced changes in R(h) were not associated with folding. The experiments showed that changes in local PP(II) structure cause changes in R(h) that are variable and that depend on the intrinsic chain propensities of PRO and ALA residues, demonstrating a mechanism for coupling local and global structure changes. Molecular simulations that model our results were used to extend the analysis to other proteins and illustrate the generality of the observed PRO and alanine effects on the structures of IDPs.

  1. Aircraft concept optimization using the global sensitivity approach and parametric multiobjective figures of merit

    NASA Technical Reports Server (NTRS)

    Malone, Brett; Mason, W. H.

    1992-01-01

    An extension of our parametric multidisciplinary optimization method to include design results connecting multiple objective functions is presented. New insight into the effect of the figure of merit (objective function) on aircraft configuration size and shape is demonstrated using this technique. An aircraft concept, subject to performance and aerodynamic constraints, is optimized using the global sensitivity equation method for a wide range of objective functions. These figures of merit are described parametrically such that a series of multiobjective optimal solutions can be obtained. Computational speed is facilitated by use of algebraic representations of the system technologies. Using this method, the evolution of an optimum design from one objective function to another is demonstrated. Specifically, combinations of minimum takeoff gross weight, fuel weight, and maximum cruise performance and productivity parameters are used as objective functions.

  2. Global functions in global-local finite-element analysis of localized stresses in prismatic structures

    NASA Technical Reports Server (NTRS)

    Dong, Stanley B.

    1989-01-01

    An important consideration in the global-local finite-element method (GLFEM) is the availability of global functions for the given problem. The role and mathematical requirements of these global functions in a GLFEM analysis of localized stress states in prismatic structures are discussed. A method is described for determining these global functions. Underlying this method are theorems due to Toupin and Knowles on strain energy decay rates, which are related to a quantitative expression of Saint-Venant's principle. It is mentioned that a mathematically complete set of global functions can be generated, so that any arbitrary interface condition between the finite element and global subregions can be represented. Convergence to the true behavior can be achieved with increasing global functions and finite-element degrees of freedom. Specific attention is devoted to mathematically two-dimensional and three-dimensional prismatic structures. Comments are offered on the GLFEM analysis of a NASA flat panel with a discontinuous stiffener. Methods for determining global functions for other effects are also indicated, such as steady-state dynamics and bodies under initial stress.

  3. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
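
    A hedged, generic sketch of the two-step idea (screen with a cheap sensitivity bound, then apply finite differences only to the survivors); it omits the Fisher Information Matrix estimation and the coupled-trajectory variance reduction that the paper relies on, and all names below are illustrative:

        import numpy as np

        def two_step_sensitivities(qoi, theta, bound, tol, h=1e-4):
            """Step 1: discard parameters whose cheap sensitivity bound falls below tol.
            Step 2: central finite differences only for the retained parameters."""
            keep = [i for i, b in enumerate(bound) if b >= tol]
            sens = np.zeros(len(theta))
            for i in keep:
                up, dn = theta.copy(), theta.copy()
                up[i] += h
                dn[i] -= h
                sens[i] = (qoi(up) - qoi(dn)) / (2.0 * h)
            return keep, sens

        # Toy usage with an invented quantity of interest and invented screening bounds.
        qoi = lambda t: t[0] ** 2 + 0.01 * t[2]
        keep, sens = two_step_sensitivities(qoi, np.array([1.0, 5.0, 2.0]),
                                            bound=[0.9, 0.001, 0.2], tol=0.05)
        print(keep, sens)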

  4. Global analysis of duality maps in quantum field theory

    SciTech Connect

    Restuccia, A.

    1997-03-15

    A global analysis of duality transformations is presented. Global constraints are introduced in order to have the correct structure of the configuration spaces. This global structure is completely determined from the quantum equivalence of dual actions. Applications to S-dual actions and to T duality of string theories and D-branes are briefly discussed. It is shown that a new topological term in the dual open string actions is required.

  5. Toward Global Content Analysis and Media Criticism.

    ERIC Educational Resources Information Center

    Nordenstreng, Kaarle

    1995-01-01

    Presents the background, rationale, and implementation prospects for an international system of monitoring media coverage of global problems such as peace and war, human rights, and the environment. Outlines the monitoring project carried out in January 1995 concerning the representation and portrayal of women in news media. (SR)

  6. Global Proteome Analysis of Leptospira interrogans

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Comparative global proteome analyses were performed on Leptospira interrogans serovar Copenhageni grown under conventional in vitro conditions and those mimicking in vivo conditions (iron limitation and serum presence). Proteomic analyses were conducted using iTRAQ and LC-ESI-tandem mass spectrometr...

  7. Global Population Genetic Analysis of Aspergillus fumigatus

    PubMed Central

    Ashu, Eta Ebasi; Hagen, Ferry; Chowdhary, Anuradha

    2017-01-01

    ABSTRACT Aspergillus fumigatus is a ubiquitous opportunistic fungal pathogen capable of causing invasive aspergillosis, a globally distributed disease with a mortality rate of up to 90% in high-risk populations. Effective control and prevention of this disease require a thorough understanding of its epidemiology. However, despite significant efforts, the global molecular epidemiology of A. fumigatus remains poorly understood. In this study, we analyzed 2,026 A. fumigatus isolates from 13 countries in four continents using nine highly polymorphic microsatellite markers. Genetic cluster analyses suggest that our global sample of A. fumigatus isolates belonged to eight genetic clusters, with seven of the eight clusters showing broad geographic distributions. We found common signatures of sexual recombination within individual genetic clusters and clear evidence of hybridization between several clusters. Limited but statistically significant genetic differentiations were found among geographic and ecological populations. However, there was abundant evidence for gene flow at the local, regional, and global scales. Interestingly, the triazole-susceptible and triazole-resistant populations showed different population structures, consistent with antifungal drug pressure playing a significant role in local adaptation. Our results suggest that global populations of A. fumigatus are shaped by historical differentiation, contemporary gene flow, sexual reproduction, and the localized antifungal drug selection that is driving clonal expansion of genotypes resistant to multiple triazole drugs. IMPORTANCE The genetic diversity and geographic structure of the human fungal pathogen A. fumigatus have been the subject of many studies. However, most previous studies had relatively limited sample ranges and sizes and/or used genetic markers with low-level polymorphisms. In this paper, we characterize a global collection of strains of A. fumigatus using a panel of 9 highly

  8. Sensitivity of global ocean heat content from reanalyses to the atmospheric reanalysis forcing: A comparative study

    NASA Astrophysics Data System (ADS)

    Storto, Andrea; Yang, Chunxue; Masina, Simona

    2016-05-01

    The global ocean heat content evolution is a key component of the Earth's energy budget and can be consistently determined by ocean reanalyses that assimilate hydrographic profiles. This work investigates the impact of the atmospheric reanalysis forcing through a multiforcing ensemble ocean reanalysis, where the ensemble members are forced by five state-of-the-art atmospheric reanalyses during the meteorological satellite era (1979-2013). Data assimilation leads the ensemble to converge toward robust estimates of ocean warming rates and significantly reduces the spread (1.48 ± 0.18 W/m2, per unit area of the World Ocean); hence, the impact of the atmospheric forcing appears only marginal for the global heat content estimates in both upper and deeper oceans. A sensitivity assessment performed through realistic perturbation of the main sources of uncertainty in ocean reanalyses highlights that bias correction and preprocessing of in situ observations represent the most crucial component of the reanalysis, whose perturbation accounts for up to 60% of the ocean heat content anomaly variability in the pre-Argo period. Although these results may depend on the single reanalysis system used, they reveal useful information for the ocean observation community and for the optimal generation of perturbations in ocean ensemble systems.

  9. Sensitivity of a global climate model to the critical Richardson number in the boundary layer parameterization

    DOE PAGES

    Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; ...

    2015-04-27

    The critical bulk Richardson number (Ricr) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model, BCC_AGCM to Ricr. The results show that the simulated global average of PBL height increases nearly linearly with Ricr, with a change of about 114 m for a change of 0.5 in Ricr. The surface sensible (latent) heat flux decreases (increases) as Ricr increases. The influence of Ricr on surface air temperature and specific humidity is not significant. The increasing Ricr may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ricr affect stratiform and convective precipitations differently. Increasing Ricr leads to an increase in the stratiform precipitation but a decrease in the convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and Middle East.

  10. Sensitivity of a global climate model to the critical Richardson number in the boundary layer parameterization

    SciTech Connect

    Zhang, Ning; Liu, Yangang; Gao, Zhiqiu; Li, Dan

    2015-04-27

    The critical bulk Richardson number (Ricr) is an important parameter in planetary boundary layer (PBL) parameterization schemes used in many climate models. This paper examines the sensitivity of a Global Climate Model, the Beijing Climate Center Atmospheric General Circulation Model, BCC_AGCM to Ricr. The results show that the simulated global average of PBL height increases nearly linearly with Ricr, with a change of about 114 m for a change of 0.5 in Ricr. The surface sensible (latent) heat flux decreases (increases) as Ricr increases. The influence of Ricr on surface air temperature and specific humidity is not significant. The increasing Ricr may affect the location of the Westerly Belt in the Southern Hemisphere. Further diagnosis reveals that changes in Ricr affect stratiform and convective precipitations differently. Increasing Ricr leads to an increase in the stratiform precipitation but a decrease in the convective precipitation. Significant changes of convective precipitation occur over the inter-tropical convergence zone, while changes of stratiform precipitation mostly appear over arid land such as North Africa and Middle East.

  11. A simple-physics global circulation model for Venus: Sensitivity assessments of atmospheric superrotation

    NASA Astrophysics Data System (ADS)

    Hollingsworth, J. L.; Young, R. E.; Schubert, G.; Covey, C.; Grossman, A. S.

    2007-03-01

    A 3D global circulation model is adapted to the atmosphere of Venus to explore the nature of the planet's atmospheric superrotation. The model employs the full meteorological primitive equations and simplified forms for diabatic and other nonconservative forcings. It is therefore economical for performing very long simulations. To assess circulation equilibration and the occurrence of atmospheric superrotation, the climate model is run for 10,000-20,000 day integrations at 4° × 5° latitude-longitude horizontal resolution, and 56 vertical levels (denoted L56). The sensitivity of these simulations to imposed Venus-like diabatic heating rates, momentum dissipation rates, and various other key parameters (e.g., near-surface momentum drag), in addition to model configuration (e.g., low versus high vertical domain and number of atmospheric levels), is examined. We find equatorial superrotation in several of our numerical experiments, but the magnitude of superrotation is often less than observed. Further, the meridional structure of the mean zonal overturning (i.e., Hadley circulation) can consist of numerous cells which are symmetric about the equator and whose depth scale appears sensitive to the number of vertical layers imposed in the model atmosphere. We find that when realistic diabatic heating is imposed in the lowest several scale heights, only extremely weak atmospheric superrotation results.

  12. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
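
    A hedged linear-algebra sketch of the same idea (not the BEA implementation): reuse the factorization of the baseline system matrix, recover the response of a univariately perturbed model by fixed-point iteration, and form the finite-difference sensitivity.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(3)
        n = 50
        A0 = np.eye(n) * 10.0 + rng.normal(size=(n, n))     # baseline system matrix (invented)
        b = rng.normal(size=n)
        lu, piv = lu_factor(A0)                              # factor the baseline matrix once
        x0 = lu_solve((lu, piv), b)                           # baseline response

        dp = 1e-3                                             # univariate design perturbation
        dA = np.zeros_like(A0)
        dA[0, 0] = 1.0                                        # illustrative design dependence dA/dp

        # Iterative reanalysis: solve (A0 + dp*dA) x = b without refactoring,
        # via the fixed-point iteration x <- A0^{-1} (b - dp*dA x).
        x = x0.copy()
        for _ in range(20):
            x = lu_solve((lu, piv), b - dp * (dA @ x))

        dx_dp = (x - x0) / dp                                 # univariate finite-difference sensitivity
        print(np.linalg.norm(dx_dp))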

  13. Global Analysis of Aerosol Properties Above Clouds

    NASA Technical Reports Server (NTRS)

    Waquet, F.; Peers, F.; Ducos, F.; Goloub, P.; Platnick, S. E.; Riedi, J.; Tanre, D.; Thieuleux, F.

    2013-01-01

    The seasonal and spatial variability of Aerosol Above Cloud (AAC) properties is derived from passive satellite data for the year 2008. A significant amount of aerosols are transported above liquid water clouds on the global scale. For particles in the fine mode (i.e., radius smaller than 0.3 μm), including both clear sky and AAC retrievals increases the global mean aerosol optical thickness by 25% (±6%). The two main regions with man-made AAC are the tropical Southeast Atlantic, for biomass burning aerosols, and the North Pacific, mainly for pollutants. Man-made AAC are also detected over the Arctic during the spring. Mineral dust particles are detected above clouds within the so-called dust belt region (5-40°N). AAC may cause a warming effect and bias the retrieval of the cloud properties. This study will then help to better quantify the impacts of aerosols on clouds and climate.

  14. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.

  15. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  16. Global Hawk: Root Cause Analysis of Projected Unit Cost Growth

    DTIC Science & Technology

    2011-05-01

    2009 (WSARA). This report describes our task analysis and findings. The Global Hawk Program: Global Hawk is a family of high-altitude, high-endurance...Document (CDD) • Cost Analysis Requirements Description (CARD) • Test and Evaluation Master Plan (TEMP) • Acquisition Program Baseline (APB)...fixed content and completion criteria as defined by the new CDD, CARD, TEMP, and ASR. The four increments shown in the table above reflect the

  17. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  18. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  19. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
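
    A hedged illustration of why step-size selection matters (generic central differences on a scalar function, not the paper's FD algorithm): a step near the cube root of machine epsilon roughly balances truncation and round-off error.

        import numpy as np

        f = np.sin            # example function; the exact derivative is cos
        x = 1.0
        exact = np.cos(x)

        # Compare a too-large step, two reasonable steps, and a too-small step.
        for h in (1e-1, 1e-5, np.cbrt(np.finfo(float).eps) * max(abs(x), 1.0), 1e-12):
            approx = (f(x + h) - f(x - h)) / (2.0 * h)
            print(f"h = {h:.1e}  error = {abs(approx - exact):.2e}")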

  20. Sensitivity Analysis and Computation for Partial Differential Equations

    DTIC Science & Technology

    2008-03-14

    Example, Journal of Mathematical Analysis and Applications, to appear. [22] John R. Singler, Transition to Turbulence, Small Disturbances, and...Sensitivity Analysis II: The Navier-Stokes Equations, Journal of Mathematical Analysis and Applications, to appear. [23] A. M. Stuart and A. R. Humphries

  1. Sensitivity analysis for electromagnetic topology optimization problems

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Li, Wei; Li, Qing

    2010-06-01

    This paper presents a level set based method to design the metal shape in an electromagnetic field such that the induced current flow on the metal surface can be minimized or maximized. We represent the interface of the free space and conducting material (solid phase) by the zero-order contour of a higher dimensional level set function. Only the electrical component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function relative to the current flow, its distribution can be controlled to some extent. This method paves a new avenue to many electromagnetic applications such as antennas and metamaterials, whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to the shape change, an integral formulation including both the solutions to the electric field integral equation and its adjoint equation, is obtained using a variational method and shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.

  2. "Competing Conceptions of Globalization" Revisited: Relocating the Tension between World-Systems Analysis and Globalization Analysis

    ERIC Educational Resources Information Center

    Clayton, Thomas

    2004-01-01

    In recent years, many scholars have become fascinated by a contemporary, multidimensional process that has come to be known as "globalization." Globalization originally described economic developments at the world level. More specifically, scholars invoked the concept in reference to the process of global economic integration and the seemingly…

  3. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  4. Sensitivity of contemporary sea level trends in a global ocean state estimate to effects of geothermal fluxes

    NASA Astrophysics Data System (ADS)

    Piecuch, Christopher G.; Heimbach, Patrick; Ponte, Rui M.; Forget, Gaël

    2015-12-01

    Geothermal fluxes constitute a sizable fraction of the present-day Earth net radiative imbalance and corresponding ocean heat uptake. Model simulations of contemporary sea level that impose a geothermal flux boundary condition are becoming increasingly common. To quantify the impact of geothermal fluxes on model estimates of contemporary (1993-2010) sea level changes, two ocean circulation model experiments are compared. The two simulations are based on a global ocean state estimate, produced by the Estimating the Circulation and Climate of the Ocean (ECCO) consortium, and differ only with regard to whether geothermal forcing is applied as a boundary condition. Geothermal forcing raises the global-mean sea level trend by 0.11 mm yr-1 in the perturbation experiment by suppressing a cooling trend present in the baseline solution below 2000 m. The imposed forcing also affects regional sea level trends. The Southern Ocean is particularly sensitive. In this region, anomalous heat redistribution due to geothermal fluxes results in steric height trends of up to ± 1 mm yr-1 in the perturbation experiment relative to the baseline simulation. Analysis of a passive tracer experiment suggests that the geothermal input itself is transported by horizontal diffusion, resulting in more thermal expansion over deeper ocean basins. Thermal expansion in the perturbation simulation gives rise to bottom pressure increase over shallower regions and decrease over deeper areas relative to the baseline run, consistent with mass redistribution expected for deep ocean warming. These results elucidate the influence of geothermal fluxes on sea level rise and global heat budgets in model simulations of contemporary ocean circulation and climate.

  5. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the results of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  6. Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

    DTIC Science & Technology

    2009-01-01

    erties, such as log P, would aid in estimating a chemical’s environmental fate and toxicology when applied to QSAR modeling. Granted, QSAR models, such...ERDC TR-09-3, Strategic Environmental Research and Development Program, January 2009: Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

  7. Global Gene Expression Analysis for the Assessment of Nanobiomaterials.

    PubMed

    Hanagata, Nobutaka

    2015-01-01

    Using global gene expression analysis, the effects of biomaterials and nanomaterials can be analyzed at the genetic level. Even though information obtained from global gene expression analysis can be useful for the evaluation and design of biomaterials and nanomaterials, its use for these purposes is not widespread. This is due to the difficulties involved in data analysis. Because the expression data of about 20,000 genes can be obtained at once with global gene expression analysis, the data must be analyzed using bioinformatics. A method of bioinformatic analysis called gene ontology analysis can estimate the kinds of changes in cell function caused by genes whose expression level is changed by biomaterials and nanomaterials. Also, by applying a statistical technique called hierarchical clustering to global gene expression data obtained across a variety of biomaterials, the effects of the properties of materials on cell functions can be estimated. In this chapter, these theories of analysis and examples of applications to nanomaterials and biomaterials are described. Furthermore, global microRNA analysis, a method that has gained attention in recent years, and its application to nanomaterials are introduced.

  8. Probabilistic methods for sensitivity analysis and calibration in the NASA challenge problem

    DOE PAGES

    Safta, Cosmin; Sargsyan, Khachik; Najm, Habib N.; ...

    2015-01-01

    In this study, a series of algorithms are proposed to address the problems in the NASA Langley Research Center Multidisciplinary Uncertainty Quantification Challenge. A Bayesian approach is employed to characterize and calibrate the epistemic parameters based on the available data, whereas a variance-based global sensitivity analysis is used to rank the epistemic and aleatory model parameters. A nested sampling of the aleatory–epistemic space is proposed to propagate uncertainties from model parameters to output quantities of interest.
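    The nested aleatory-epistemic sampling mentioned above is essentially a double-loop Monte Carlo: each outer draw fixes the epistemic parameters, and an inner aleatory loop then produces one member of a family of output distributions. The following is a minimal sketch of that idea with an invented two-parameter model and invented epistemic intervals; it is not the challenge-problem model or the authors' algorithm.

        import numpy as np

        rng = np.random.default_rng(0)

        def model(theta, x):
            # hypothetical scalar quantity of interest standing in for the real model
            return theta[0] * x[0] ** 2 + theta[1] * x[1]

        n_outer, n_inner = 50, 1000          # epistemic (outer) and aleatory (inner) sample sizes
        cdf_family = []
        for _ in range(n_outer):
            theta = rng.uniform([0.5, -1.0], [1.5, 1.0])   # one epistemic realisation (assumed intervals)
            y = np.array([model(theta, rng.normal(size=2)) for _ in range(n_inner)])
            cdf_family.append(np.sort(y))    # sorted inner sample = one empirical CDF of the output

        # the spread across the family of CDFs reflects epistemic uncertainty,
        # while the shape of each individual CDF reflects aleatory uncertainty
        print(np.asarray(cdf_family).shape)  # (n_outer, n_inner)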

  9. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological models.
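    As an illustration of the first step, daily first-order sensitivities can be obtained by running a FAST design once and then analysing the sampled discharge column by column (one analysis per day). The sketch below uses the SALib package (an assumed tool choice) and an invented three-parameter surrogate; the parameter names, bounds and surrogate are illustrative only, not the setup used by the authors.

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        # hypothetical parameter definition; names and bounds are illustrative only
        problem = {
            "num_vars": 3,
            "names": ["gw_recession", "soil_storage", "runoff_coeff"],
            "bounds": [[0.01, 0.5], [10.0, 300.0], [0.1, 0.9]],
        }

        X = fast_sampler.sample(problem, 65)     # FAST design (N must exceed 4*M^2 for default M=4)

        def run_model(p):
            # stand-in for one hydrological model run returning a daily discharge series
            t = np.arange(365)
            return p[1] * np.exp(-p[0] * (t % 30)) + p[2] * (1.0 + np.sin(2.0 * np.pi * t / 365.0))

        discharge = np.array([run_model(x) for x in X])          # shape (n_samples, n_days)

        # one FAST analysis per day gives a daily time series of first-order indices
        daily_S1 = np.array([fast.analyze(problem, discharge[:, d])["S1"]
                             for d in range(discharge.shape[1])])
        print(daily_S1.shape)                                    # (n_days, num_vars)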

  10. Sensitivity of tropospheric hydrogen peroxide to global chemical and climate change

    SciTech Connect

    Thompson, A.M.; Stewart, R.W. ); Owens, M.A. )

    1989-01-01

    The sensitivities of tropospheric (H{sub 2}O{sub 2}) levels to increases in the CH{sub 4}, CO and NO emissions and to changes in stratospheric O{sub 3} and tropospheric O{sub 3} and H{sub 2}O have been evaluated with a one-dimensional photochemical model. Specific scenarios of CH{sub 4}-CO-NO{sub x} emissions and global climate changes are used to predict HO{sub 2} and H{sub 2}O{sub 2} changes between 1980 and 2030. Calculations are made for urban and nonurban continental conditions and for low latitudes. Generally, CO and CH{sub 4} emissions will suppress H{sub 2}O{sub 2} except in very low NO{sub x} regions. A global warming (with increased H{sub 2}O vapor) or stratospheric O{sub 3} depletion will add to H{sub 2}O{sub 2}. Hydrogen peroxide increases from 1980 to 2030 could be 100% or more in the urban boundary layer. Increases in CH{sub 4}, CO and O{sub 3} that have occurred in the industrial era (since 1800) have probably produced temporal increases in background HO{sub 2} and H{sub 2}O{sub 2}. It might be possible to use H{sub 2}O{sub 2} in ice cores to track these changes. Where formation of sulfuric acid in cloudwater and precipitation is oxidant limited, H{sub 2}O{sub 2} and HO{sub 2} increases could be contributing to increases in acid precipitation.

  11. Sensitivity of a general circulation model to global changes in leaf area index

    NASA Astrophysics Data System (ADS)

    Chase, Thomas N.; Pielke, Roger A.; Kittel, Timothy G. F.; Nemani, Ramakrishna; Running, Steven W.

    1996-03-01

    Methods have recently become available for estimating the amount of leaf area at the surface of the Earth using satellite data. Also available are modeled estimates of what global leaf area patterns would look like should the vegetation be in equilibrium with current local climatic and soil conditions. The differences between the actual vegetation distribution and the potential vegetation distribution may reflect the impact of human activity on the Earth's surface. To examine model sensitivity to changes in leaf area index (LAI), global distributions of maximum LAI were used as surface boundary conditions in the National Center for Atmospheric Research community climate model (NCAR CCM2) coupled with the biosphere atmosphere transfer scheme (BATS). Results from 10-year ensemble averages for the months of January and July indicate that the largest effects of the decreased LAI in the actual LAI simulation occur in the northern hemisphere winter at high latitudes despite the fact that direct LAI forcing is negligible in these regions at this time of year. This is possibly a result of LAI forcing in the tropics which has long-ranging effects in the winter of both hemispheres. An assessment of the Asian monsoon region for the month of July shows decreased latent heat flux from the surface, increased surface temperature, and decreased precipitation with the actual LAI distribution. While the statistical significance of the results has not been unambiguously established in these simulations, we suspect that an effect on modeled general circulation dynamics has occurred due to changes of maximum LAI suggesting that further attention needs to be paid to the accurate designation of vegetation parameters. The incorporation of concomitant changes in albedo, vegetation fractional coverage, and roughness length is suggested for further research.

  12. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    PubMed

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.

  13. Global review of open access risk assessment software packages valid for global or continental scale analysis

    NASA Astrophysics Data System (ADS)

    Daniell, James; Simpson, Alanna; Gunasekara, Rashmin; Baca, Abigail; Schaefer, Andreas; Ishizawa, Oscar; Murnane, Rick; Tijssen, Annegien; Deparday, Vivien; Forni, Marc; Himmelfarb, Anne; Leder, Jan

    2015-04-01

    -defined exposure and vulnerability. Without this function, many tools can only be used regionally and not at global or continental scale. It is becoming increasingly easy to use multiple packages for a single region and/or hazard to characterize the uncertainty in the risk, or to use them as checks on the sensitivities in the analysis. There is a potential for valuable synergy between existing software. A number of open source software packages could be combined to generate a multi-risk model with multiple views of a hazard. This extensive review has simply attempted to provide a platform for dialogue between all open source and open access software packages and to hopefully inspire collaboration between developers, given the great work done by all open access and open source developers.

  14. Water Grabbing analysis at global scale

    NASA Astrophysics Data System (ADS)

    Rulli, M.; Saviori, A.; D'Odorico, P.

    2012-12-01

    "Land grabbing" is the acquisition of agricultural land by foreign governments and corporations, a phenomenon that has greatly intensified over the last few years as a result of the increase in food prices and biofuel demand. Land grabbing is inherently associated with an appropriation of freshwater resources that has never been investigated before. Here we provide a global assessment of the total grabbed land and water resources. Using process-based agro-hydrological models we estimate the rates of freshwater grabbing worldwide. We find that this phenomenon is occurring at alarming rates in all continents except Antarctica. The per capita volume of grabbed water often exceeds the water requirements for a balanced diet and would be sufficient to abate malnourishment in the grabbed countries. High rates of water grabbing are often associated with deforestation and the increase in water withdrawals for irrigation.

  15. Analysis and visualization of global magnetospheric processes

    SciTech Connect

    Winske, D.; Mozer, F.S.; Roth, I.

    1998-12-31

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). The purpose of this project is to develop new computational and visualization tools to analyze particle dynamics in the Earth's magnetosphere. These tools allow the construction of a global picture of particle fluxes, which requires only a small number of in situ spacecraft measurements as input parameters. The methods developed in this project have led to a better understanding of particle dynamics in the Earth's magnetotail in the presence of turbulent wave fields. They have also been used to demonstrate how large electromagnetic pulses in the solar wind can interact with the magnetosphere to increase the population of energetic particles and even form new radiation belts.

  16. National health expenditures: a global analysis.

    PubMed Central

    Murray, C. J.; Govindaraj, R.; Musgrove, P.

    1994-01-01

    As part of the background research to the World development report 1993: investing in health, an effort was made to estimate public, private and total expenditures on health for all countries of the world. Estimates could be found for public spending for most countries, but for private expenditure in many fewer countries. Regressions were used to predict the missing values of regional and global estimates. These econometric exercises were also used to relate expenditure to measures of health status. In 1990 the world spent an estimated US$ 1.7 trillion (1.7 x 10(12)) on health, or $1.9 trillion (1.9 x 10(12)) in dollars adjusted for higher purchasing power in poorer countries. This amount was about 60% public and 40% private in origin. However, as incomes rise, public health expenditure tends to displace private spending and to account for the increasing share of incomes devoted to health. PMID:7923542

  17. The resolution sensitivity of the South Asian monsoon and Indo-Pacific in a global 0.35° AGCM

    NASA Astrophysics Data System (ADS)

    Johnson, Stephanie J.; Levine, Richard C.; Turner, Andrew G.; Martin, Gill M.; Woolnough, Steven J.; Schiemann, Reinhard; Mizielinski, Matthew S.; Roberts, Malcolm J.; Vidale, Pier Luigi; Demory, Marie-Estelle; Strachan, Jane

    2016-02-01

    The South Asian monsoon is one of the most significant manifestations of the seasonal cycle. It directly impacts nearly one third of the world's population and also has substantial global influence. Using 27-year integrations of a high-resolution atmospheric general circulation model (Met Office Unified Model), we study changes in South Asian monsoon precipitation and circulation when horizontal resolution is increased from approximately 200 km to 40 km at the equator (N96-N512, 1.9°-0.35°). The high resolution, integration length and ensemble size make this the most extensive dataset used to evaluate the resolution sensitivity of the South Asian monsoon to date. We find a consistent pattern of JJAS precipitation and circulation changes as resolution increases, which include a slight increase in precipitation over peninsular India, changes in Indian and Indochinese orographic rain bands, increasing wind speeds in the Somali Jet, increasing precipitation over the Maritime Continent islands and decreasing precipitation over the northern Maritime Continent seas. To diagnose which resolution-related processes cause these changes, we compare them to published sensitivity experiments that change regional orography and coastlines. Our analysis indicates that improved resolution of the East African Highlands results in the improved representation of the Somali Jet and further suggests that improved resolution of orography over Indochina and the Maritime Continent results in more precipitation over the Maritime Continent islands at the expense of reduced precipitation further north. We also evaluate the resolution sensitivity of monsoon depressions and lows, which contribute more precipitation over northeast India at higher resolution. We conclude that while increasing resolution at these scales does not solve the many monsoon biases that exist in GCMs, it has a number of small, beneficial impacts.

  18. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  19. A study of turbulent flow with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Dwyer, H. A.; Peterson, T.

    1980-07-01

    In this paper a new type of analysis is introduced that can be used in numerical fluid mechanics. The method is known as sensitivity analysis and it has been widely used in the field of automatic control theory. Sensitivity analysis addresses in a systematic way the question of how the solution to an equation will change due to variations in the equation's parameters and boundary conditions. An important application is turbulent flow, where there exists a large uncertainty in the models used for closure. In the present work the analysis is applied to the three-dimensional planetary boundary layer equations, and sensitivity equations are generated for various parameters in the turbulence model. The solution of these equations with the proper techniques leads to considerable insight into the flow field and its dependence on turbulence parameters. Also, the analysis allows for unique decompositions of the parameter dependence and is efficient.

  20. Sensitivity of agro-environmental zones in Spain to global climatic change

    NASA Astrophysics Data System (ADS)

    Vanwalleghem, T.; Guzmán, G.; Vanderlinden, K.; Laguna, A.; Giraldez, J. V.

    2014-12-01

    Soil has a key role in the regulation of carbon, water and nutrient cycles. Traditionally, agricultural soil management was oriented towards optimizing productivity. Nowadays, mitigating climate change effects and maintaining long-term soil quality are equally important. Policy guidelines for best management practices need to be site-specific, given the large spatial variability of environmental conditions within the EU. Therefore, it is necessary to classify the different farming zones that are susceptible to soil degradation. Especially in Mediterranean areas, this variability and its susceptibility to degradation are higher than in other areas of the EU. The objective of this study is therefore to delineate current agro-environmental zones in Spain and to determine the effect of global climate change on this classification in the future. The final objective is to assist policy makers in scenario analysis with respect to soil conservation. Our classification scheme is based on soil, topography and climate (seasonal temperature and rainfall) variables. We calculated slope and elevation based on an SRTM-derived DEM, soil texture was extracted from the European Soil Database, and seasonal mean, minimum and maximum precipitation and temperature data were gridded from publicly available weather station data (Aemet). Global change scenarios are average downscaled ensemble predictions for the emission scenarios A2 and B2. The k-means method was used for classification of the 10 km x 10 km gridded variables. Using the aforementioned input variables, the optimal number of agro-environmental zones we obtained is 8. The classification corresponds well with the observed distribution of farming typologies in Spain. The advantage of this method is that it is a simple, objective method which uses only readily available, public data. As such, its extrapolation to other countries of the EU is straightforward. Finally, it presents a tool for policy makers to assess
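    The zoning step itself reduces to standardizing the gridded variables and running k-means with eight clusters. A minimal sketch with scikit-learn and random stand-in data is given below; the number and meaning of the columns are assumptions, not the actual Aemet/ESDB/SRTM inputs.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # hypothetical stand-in: one row per 10 km x 10 km cell, columns for slope, elevation,
        # soil texture and seasonal temperature/precipitation summaries (illustrative only)
        rng = np.random.default_rng(1)
        grid = rng.normal(size=(5000, 9))

        X = StandardScaler().fit_transform(grid)                 # put variables on comparable scales
        zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)
        print(np.bincount(zones))                                # cell counts per agro-environmental zone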

  1. Global synthesis of the temperature sensitivity of leaf litter breakdown in streams and rivers.

    PubMed

    Follstad Shah, Jennifer J; Kominoski, John S; Ardón, Marcelo; Dodds, Walter K; Gessner, Mark O; Griffiths, Natalie A; Hawkins, Charles P; Johnson, Sherri L; Lecerf, Antoine; LeRoy, Carri J; Manning, David W P; Rosemond, Amy D; Sinsabaugh, Robert L; Swan, Christopher M; Webster, Jackson R; Zeglin, Lydia H

    2016-12-31

    Streams and rivers are important conduits of terrestrially derived carbon (C) to atmospheric and marine reservoirs. Leaf litter breakdown rates are expected to increase as water temperatures rise in response to climate change. The magnitude of increase in breakdown rates is uncertain, given differences in litter quality and microbial and detritivore community responses to temperature, factors that can influence the apparent temperature sensitivity of breakdown and the relative proportion of C lost to the atmosphere vs. stored or transported downstream. Here, we synthesized 1025 records of litter breakdown in streams and rivers to quantify its temperature sensitivity, as measured by the activation energy (Ea, in eV). Temperature sensitivity of litter breakdown varied among twelve plant genera for which Ea could be calculated. Higher values of Ea were correlated with lower-quality litter, but these correlations were influenced by a single, N-fixing genus (Alnus). Ea values converged when genera were classified into three breakdown rate categories, potentially due to continual water availability in streams and rivers modulating the influence of leaf chemistry on breakdown. Across all data representing 85 plant genera, the Ea was 0.34 ± 0.04 eV, or approximately half the value (0.65 eV) predicted by metabolic theory. Our results indicate that average breakdown rates may increase by 5-21% with a 1-4 °C rise in water temperature, rather than the 10-45% increase expected according to metabolic theory. Differential warming of tropical and temperate biomes could result in a similar proportional increase in breakdown rates, despite variation in Ea values for these regions (0.75 ± 0.13 eV and 0.27 ± 0.05 eV, respectively). The relative proportions of gaseous C loss and organic matter transport downstream should not change with rising temperature given that Ea values for breakdown mediated by microbes alone and microbes plus detritivores were similar at the global scale.
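    The percentage increases quoted above follow from the Boltzmann-Arrhenius temperature dependence used to define Ea; the short worked calculation below assumes a baseline stream temperature of about 15 °C (288 K), which is not stated in the abstract.

        k(T) \propto e^{-E_a/(k_B T)}
        \quad\Rightarrow\quad
        \frac{k(T_2)}{k(T_1)} = \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_1}-\frac{1}{T_2}\right)\right].
        % With E_a = 0.34 eV, k_B = 8.62 x 10^{-5} eV K^{-1} and an assumed T_1 = 288 K:
        %   Delta T = 1 K  ->  exp(0.047) ~ 1.05   (about a 5% increase)
        %   Delta T = 4 K  ->  exp(0.188) ~ 1.21   (about a 21% increase)
        % The metabolic-theory value E_a = 0.65 eV gives roughly 10-45% instead.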

  2. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within the reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite-difference approach.

  3. Development and sensitivity analysis of a fully kinetic model of sequential reductive dechlorination in groundwater.

    PubMed

    Malaguerra, Flavio; Chambon, Julie C; Bjerg, Poul L; Scheutz, Charlotte; Binning, Philip J

    2011-10-01

    A fully kinetic biogeochemical model of sequential reductive dechlorination (SERD) occurring in conjunction with lactate and propionate fermentation, iron reduction, sulfate reduction, and methanogenesis was developed. Production and consumption of molecular hydrogen (H(2)) by microorganisms have been modeled using modified Michaelis-Menten kinetics and implemented in the geochemical code PHREEQC. The model has been calibrated using a Shuffled Complex Evolution Metropolis algorithm against observations of chlorinated solvents, organic acids, and H(2) concentrations in laboratory batch experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most important factors influencing SERD. The sensitivity analysis also suggests that it is not possible to simplify the model description if all system behaviors are to be well described.
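    For readers unfamiliar with the screening step, the Morris method ranks parameters by the mean and spread of elementary effects computed along one-at-a-time trajectories. The sketch below uses the SALib package (an assumed tool choice) with invented parameter names, ranges and a surrogate response; it is not the PHREEQC model or the authors' code.

        import numpy as np
        from SALib.sample import morris as morris_sample
        from SALib.analyze import morris as morris_analyze

        # illustrative subset of kinetic/geochemical parameters; names and ranges are assumptions
        problem = {
            "num_vars": 4,
            "names": ["sulfate_conc", "ferm_vmax", "dechlor_vmax", "h2_threshold"],
            "bounds": [[0.1, 5.0], [1e-3, 1e-1], [1e-3, 1e-1], [1e-9, 1e-7]],
        }

        X = morris_sample.sample(problem, N=100, num_levels=4)

        def surrogate(p):
            # stand-in for a batch simulation output, e.g. time to complete TCE removal
            return 10.0 / (p[1] * p[2]) * (1.0 + p[0]) + 1e6 * p[3]

        Y = np.array([surrogate(x) for x in X])
        Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
        for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
            print(f"{name:>13s}  mu* = {mu_star:10.3g}  sigma = {sigma:10.3g}")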

  4. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we have developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing the results obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
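    The efficiency claim in point (2) rests on reusing the factorization of the nominal system: once the matrix has been LU-decomposed, every additional sensitivity right-hand side costs only a back-substitution. A generic sketch of this pattern with SciPy is shown below; the random matrix and right-hand sides stand in for the assembled potential-flow system and are not the authors' solver.

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(0)
        n, n_params = 200, 10

        A = rng.normal(size=(n, n)) + n * np.eye(n)     # stand-in for the assembled flow Jacobian
        b = rng.normal(size=n)

        lu, piv = lu_factor(A)                          # expensive step, done once for the nominal solve
        u = lu_solve((lu, piv), b)                      # nominal solution

        # each geometry parameter contributes a residual derivative dR/dp (random stand-ins here);
        # reusing the saved LU factors makes every sensitivity solve a cheap back-substitution
        dR_dp = rng.normal(size=(n_params, n))
        du_dp = np.array([lu_solve((lu, piv), -r) for r in dR_dp])
        print(u.shape, du_dp.shape)                     # (n,) and (n_params, n)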

  5. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  6. Sensitivity analysis of small circular cylinders as wake control

    NASA Astrophysics Data System (ADS)

    Meneghini, Julio; Patino, Gustavo; Gioria, Rafael

    2016-11-01

    We apply a sensitivity analysis with respect to a steady external force to the control of vortex shedding from a circular cylinder using small active and passive control cylinders. We evaluate the changes produced by the device on the flow near the primary instability, the transition to a wake. We numerically predict, by means of sensitivity analysis, the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by a sensitivity analysis, supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. Also, the analysis provided the positions of combined passive control cylinders that suppress the wake. The latter shows that these particular positions for the devices are adequate to suppress the wake unsteadiness. In both cases the results agree very well with experimental cases of control devices previously published.

  7. Quantitative uncertainty and sensitivity analysis of a PWR control rod ejection accident

    SciTech Connect

    Pasichnyk, I.; Perin, Y.; Velkov, K.

    2013-07-01

    The paper describes the results of the quantitative Uncertainty and Sensitivity (U/S) Analysis of a Rod Ejection Accident (REA) which is simulated by the coupled system code ATHLET-QUABOX/CUBBOX applying the GRS tool for U/S analysis SUSA/XSUSA. For the present study, a UOX/MOX mixed core loading based on a generic PWR is modeled. A control rod ejection is calculated for two reactor states: Hot Zero Power (HZP) and 30% of nominal power. The worst cases for the rod ejection are determined by steady-state neutronic simulations taking into account the maximum reactivity insertion in the system and the power peaking factor. For the U/S analysis 378 uncertain parameters are identified and quantified (thermal-hydraulic initial and boundary conditions, input parameters and variations of the two-group cross sections). Results for uncertainty and sensitivity analysis are presented for safety important global and local parameters. (authors)

  8. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at the high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  9. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
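    As a concrete reference for the PRCC definition given above, the coefficient for one input is the correlation between two sets of rank residuals: the input's ranks after regressing out the other inputs' ranks, and the output's ranks after the same adjustment. Below is a small self-contained sketch of that computation; it is an illustration of the general method, not the IMM team's implementation.

        import numpy as np
        from scipy.stats import rankdata, pearsonr

        def prcc(X, y):
            # Partial rank correlation of each column of X with y: rank-transform everything,
            # regress the other ranked inputs out of both the input and the output, and
            # correlate the residuals.
            Xr = np.column_stack([rankdata(col) for col in X.T])
            yr = rankdata(y)
            coeffs = []
            for j in range(Xr.shape[1]):
                others = np.column_stack([np.ones(len(yr)), np.delete(Xr, j, axis=1)])
                bx, *_ = np.linalg.lstsq(others, Xr[:, j], rcond=None)
                by, *_ = np.linalg.lstsq(others, yr, rcond=None)
                coeffs.append(pearsonr(Xr[:, j] - others @ bx, yr - others @ by)[0])
            return np.array(coeffs)

        # toy demonstration with a monotone, nonlinear model: the first input should dominate
        rng = np.random.default_rng(0)
        X = rng.uniform(size=(500, 3))
        y = np.exp(2.0 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=500)
        print(prcc(X, y))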

  10. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    ...of images to determine egomotion and to extract information from the scene. Research in motion analysis has been focussed on the problems of

  11. Sensitivity of a global coupled ocean-sea ice model to the parameterization of vertical mixing

    NASA Astrophysics Data System (ADS)

    Goosse, H.; Deleersnijder, E.; Fichefet, T.; England, M. H.

    1999-06-01

    Three numerical experiments have been carried out with a global coupled ice-ocean model to investigate its sensitivity to the treatment of vertical mixing in the upper ocean. In the first experiment, a widely used fixed profile of vertical diffusivity and viscosity is imposed, with large values in the upper 50 m to crudely represent wind-driven mixing. In the second experiment, the eddy coefficients are functions of the Richardson number, and, in the third case, a relatively sophisticated parameterization, based on the turbulence closure scheme of Mellor and Yamada version 2.5, is introduced. We monitor the way the different mixing schemes affect the simulated ocean ventilation, water mass properties, and sea ice distributions. CFC uptake is also diagnosed in the model experiments. The simulation of the mixed layer depth is improved in the experiment which includes the sophisticated turbulence closure scheme. This results in a good representation of the upper ocean thermohaline structure and in heat exchange with the atmosphere within the range of current estimates. However, the error in heat flux in the experiment with simple fixed vertical mixing coefficients can be as high as 50 W m-2 in zonal mean during summer. Using CFC tracers allows us to demonstrate that the ventilation of the deep ocean is not significantly influenced by the parameterization of vertical mixing in the upper ocean. The only exception is the Southern Ocean. There, the ventilation is too strong in all three experiments. However, modifications of the vertical diffusivity and, surprisingly, the vertical viscosity significantly affect the stability of the water column in this region through their influence on upper ocean salinity, resulting in a more realistic Southern Ocean circulation. The turbulence scheme also results in an improved simulation of Antarctic sea ice coverage. This is due to a better simulation of the mixed layer depth and thus of heat exchanges between ice and ocean. The

  12. Sensitivity of Surface Air Quality and Global Mortality to Global, Regional, and Sectoral Black Carbon Emission Reductions

    NASA Astrophysics Data System (ADS)

    Anenberg, S.; Talgo, K.; Dolwick, P.; Jang, C.; Arunachalam, S.; West, J.

    2010-12-01

    Black carbon (BC), a component of fine particulate matter (PM2.5) released during incomplete combustion, is associated with atmospheric warming and deleterious health impacts, including premature cardiopulmonary and lung cancer mortality. A growing body of literature suggests that controlling emissions may therefore have dual benefits for climate and health. Several studies have focused on quantifying the potential impacts of reducing BC emissions from various world regions and economic sectors on radiative forcing. However, the impacts of these reductions on human health have been less well studied. Here, we use a global chemical transport model (MOZART-4) and a health impact function to quantify the surface air quality and human health benefits of controlling BC emissions. We simulate a base case and several emission control scenarios, where anthropogenic BC emissions are reduced by half globally, individually in each of eight world regions, and individually from the residential, industrial, and transportation sectors. We also simulate a global 50% reduction of both BC and organic carbon (OC) together, since they are co-emitted and both are likely to be impacted by actual control measures. Meteorology and biomass burning emissions are for the year 2002 with anthropogenic BC and OC emissions for 2000 from the IPCC AR5 inventory. Model performance is evaluated by comparing to global surface measurements of PM2.5 components. Avoided premature mortalities are calculated using the change in PM2.5 concentration between the base case and emission control scenarios and a concentration-response factor for chronic mortality from the epidemiology literature.
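    The mortality step described in the last sentence is typically a log-linear health impact function applied to the scenario difference in annual-mean PM2.5. A minimal sketch is given below; the coefficient value, baseline rate and population are illustrative assumptions and not necessarily those used in this study.

        import numpy as np

        def avoided_mortality(delta_pm25, baseline_rate, population, beta=0.0058):
            # Log-linear concentration-response form commonly used for chronic PM2.5 mortality.
            # delta_pm25    : reduction in annual-mean PM2.5 (ug/m3), base minus control scenario
            # baseline_rate : baseline annual mortality rate (deaths per person per year)
            # population    : exposed population
            # beta          : coefficient per ug/m3; 0.0058 corresponds to a relative risk of
            #                 roughly 1.06 per 10 ug/m3 and is an illustrative value only
            attributable_fraction = 1.0 - np.exp(-beta * delta_pm25)
            return baseline_rate * attributable_fraction * population

        # example: a 2 ug/m3 reduction over 1 million people with a 1% baseline mortality rate
        print(avoided_mortality(2.0, 0.01, 1_000_000))   # ~ 115 avoided deaths per year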

  13. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and specific medium solution (i.e. Microtox(®)) or low sensitivity and diffusion limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) and with minimal biomass interference. Dual wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.

  14. Automation of primal and sensitivity analysis of transient coupled problems

    NASA Astrophysics Data System (ADS)

    Korelc, Jože

    2009-10-01

    The paper describes a hybrid symbolic-numeric approach to the automation of primal and sensitivity analysis of computational models formulated and solved by the finite element method. The necessary apparatus for the automation of steady-state, steady-state coupled, transient and transient coupled problems is introduced as a combination of a symbolic system, an automatic differentiation (AD) technique and automatic code generation. For this purpose the paper extends the classical formulation of AD by additional operators necessary for a highly abstract description of primal and sensitivity analysis of typical computational models. An appropriate abstract description for the fully implicit primal and sensitivity analysis of hyperelastic and elasto-plastic problems and a symbolic input for the generation of the necessary user subroutines for a two-dimensional hyperelastic finite element are presented at the end.

  15. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, and hence sample size calculation might not be easy for them. This review paper provides sample size tables with regard to sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
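    For reference, such tables are typically generated from the prevalence-adjusted formula below (a widely used formulation often attributed to Buderer); the numerical example assumes Se = 0.80, a precision of ±0.05, 95% confidence and a prevalence of 20%.

        n_{\mathrm{Se}} = \frac{z_{1-\alpha/2}^{2}\,\mathrm{Se}\,(1-\mathrm{Se})}{d^{2}\,P},
        \qquad
        n_{\mathrm{Sp}} = \frac{z_{1-\alpha/2}^{2}\,\mathrm{Sp}\,(1-\mathrm{Sp})}{d^{2}\,(1-P)}
        % where d is the desired half-width of the confidence interval and P the disease prevalence.
        % Example: Se = 0.80, d = 0.05, alpha = 0.05 (z = 1.96), P = 0.20:
        %   n_Se = 1.96^2 * 0.80 * 0.20 / (0.05^2 * 0.20) ~ 1230 subjects to be screened.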

  16. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  17. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds some ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. This talk is concluded with demonstrations of these new methods on some example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
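    For orientation, the Least Squares Shadowing idea sketched above can be written schematically as the following constrained least-squares problem (notation and the exact weighting vary across the literature; this is a sketch, not the authors' precise formulation).

        \min_{v,\,\eta}\;\frac{1}{2}\int_{0}^{T}\left(\|v(t)\|^{2}+\alpha^{2}\,\eta(t)^{2}\right)dt
        \quad\text{subject to}\quad
        \frac{dv}{dt}=\frac{\partial f}{\partial u}\,v+\frac{\partial f}{\partial s}+\eta\,f(u,s),
        % where du/dt = f(u,s) is the chaotic system, s the design parameter, v the shadowing
        % perturbation and eta a time-dilation term; the quadratic objective with these linear
        % constraints is the quadratic program referred to in the abstract, and the sensitivity
        % of a long-time-averaged quantity is then evaluated along the resulting shadow trajectory.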

  18. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the results of the tests show that the correlation has a very important impact on the results of the sensitivity analysis. The impact of the correlation strength among input variables on the sensitivity analysis is also assessed.
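    Methods like FASTC first need input samples that honour both the prescribed marginals and the prescribed correlation matrix. The sketch below generates such samples with a Gaussian copula, used here as a simple stand-in for Iman's rank-based transform mentioned in the abstract; the target correlation matrix is invented for illustration.

        import numpy as np
        from scipy.stats import norm

        def correlated_uniform_samples(corr, n, seed=0):
            # Draw samples with uniform(0,1) marginals and approximately the target correlation:
            # correlate standard normals through a Cholesky factor, then map each margin
            # through the normal CDF (a Gaussian-copula construction).
            rng = np.random.default_rng(seed)
            L = np.linalg.cholesky(np.asarray(corr, dtype=float))
            z = rng.normal(size=(n, len(corr))) @ L.T
            return norm.cdf(z)

        target = np.array([[1.0, 0.6, 0.2],
                           [0.6, 1.0, 0.0],
                           [0.2, 0.0, 1.0]])
        U = correlated_uniform_samples(target, 5000)
        print(np.corrcoef(U, rowvar=False).round(2))    # close to the target matrix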

  19. On global energy scenario, dye-sensitized solar cells and the promise of nanotechnology.

    PubMed

    Reddy, K Govardhan; Deepak, T G; Anjusree, G S; Thomas, Sara; Vadukumpully, Sajini; Subramanian, K R V; Nair, Shantikumar V; Nair, A Sreekumaran

    2014-04-21

    One of the major problems that humanity has to face in the next 50 years is the energy crisis. The rising population, rapidly changing lifestyles of people, heavy industrialization and the changing landscape of cities have increased energy demands enormously. The present annual worldwide electricity consumption is 12 TW and is expected to become 24 TW by 2050, leaving a challenging deficit of 12 TW. The present energy scenario of using fossil fuels to meet the energy demand is unable to meet the increase in demand effectively, as these fossil fuel resources are non-renewable and limited. Also, they cause significant environmental hazards, like global warming and the associated climatic issues. Hence, there is an urgent necessity to adopt renewable sources of energy, which are eco-friendly and not extinguishable. Of the various renewable sources available, such as wind, tidal, geothermal, biomass, solar, etc., solar serves as the most dependable option. Solar energy is freely and abundantly available. Once installed, the maintenance cost is very low. It is eco-friendly, safely fitting into our society without any disturbance. Producing electricity from the Sun requires the installation of solar panels, which incurs a huge initial cost and requires large areas of land for installation. This is where nanotechnology comes into the picture and serves the purpose of increasing the efficiency to higher levels, thus bringing down the overall cost for energy production. Also, emerging low-cost solar cell technologies, e.g. thin film technologies and dye-sensitized solar cells (DSCs), help to replace the use of silicon, which is expensive. Again, nanotechnological implications can be applied in these solar cells, to achieve higher efficiencies. This paper vividly deals with the various available solar cells, choosing DSCs as the most appropriate ones. The nanotechnological implications which help to improve their performance are dealt with in detail. Additionally, the

  20. Drought-Net: A global network to assess terrestrial ecosystem sensitivity to drought

    NASA Astrophysics Data System (ADS)

    Smith, Melinda; Sala, Osvaldo; Phillips, Richard

    2015-04-01

    All ecosystems will be impacted to some extent by climate change, with forecasts for more frequent and severe drought likely to have the greatest impact on terrestrial ecosystems. Terrestrial ecosystems are known to vary dramatically in their responses to drought. However, the factors that may make some ecosystems respond more or less than others remain unknown, but such understanding is critical for predicting drought impacts at regional and continental scales. To effectively forecast terrestrial ecosystem responses to drought, ecologists must assess responses of a range of different ecosystems to drought, and then improve existing models by incorporating the factors that cause such variation in response. Traditional site-based research cannot provide this knowledge because experiments conducted at individual sites are often not directly comparable due to differences in methodologies employed. Coordinated experimental networks, with identical protocols and comparable measurements, are ideally suited for comparative studies at regional to global scales. The US National Science Foundation-funded Drought-Net Research Coordination Network (www.drought-net.org) will advance understanding of the determinants of terrestrial ecosystem responses to drought by bringing together an international group of scientists to conduct two key activities over the next five years: 1) planning and coordinating new research using standardized measurements to leverage the value of existing drought experiments across the globe (Enhancing Existing Experiments, EEE), and 2) finalizing the design and facilitating the establishment of a new international network of coordinated drought experiments (the International Drought Experiment, IDE). The primary goals of these activities are to assess: (1) patterns of differential terrestrial ecosystem sensitivity to drought and (2) potential mechanisms underlying those patterns.

  1. Annual flood sensitivities to El Niño-Southern Oscillation at the global scale

    USGS Publications Warehouse

    Ward, Philip J.; Eisner, S.; Flörke, M.; Dettinger, Michael D.; Kummu, M.

    2013-01-01

    Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño–Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water – a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958–2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationship between ENSO and annual floods has been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.
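    The statistical step described above amounts, for each grid cell, to extracting the annual maximum of the simulated daily discharge and relating it to an ENSO index, for example through a rank correlation. The sketch below illustrates that for a single cell with random stand-in data; the discharge series and index are invented, not WaterGAP output.

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        years = np.arange(1958, 2001)

        # stand-ins for one grid cell: daily discharge by year, and an annual ENSO index value
        daily_q = rng.gamma(shape=2.0, scale=50.0, size=(len(years), 365))
        enso_index = rng.normal(size=len(years))

        annual_flood = daily_q.max(axis=1)                  # peak daily discharge in each year
        rho, p_value = spearmanr(enso_index, annual_flood)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")   # significant cells map the ENSO influence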

  2. Annual flood sensitivities to El Niño-Southern Oscillation at the global scale

    NASA Astrophysics Data System (ADS)

    Ward, P. J.; Eisner, S.; Flörke, M.; Dettinger, M. D.; Kummu, M.

    2014-01-01

    Floods are amongst the most dangerous natural hazards in terms of economic damage. Whilst a growing number of studies have examined how river floods are influenced by climate change, the role of natural modes of interannual climate variability remains poorly understood. We present the first global assessment of the influence of El Niño-Southern Oscillation (ENSO) on annual river floods, defined here as the peak daily discharge in a given year. The analysis was carried out by simulating daily gridded discharges using the WaterGAP model (Water - a Global Assessment and Prognosis), and examining statistical relationships between these discharges and ENSO indices. We found that, over the period 1958-2000, ENSO exerted a significant influence on annual floods in river basins covering over a third of the world's land surface, and that its influence on annual floods has been much greater than its influence on average flows. We show that there are more areas in which annual floods intensify with La Niña and decline with El Niño than vice versa. However, we also found that in many regions the strength of the relationship between ENSO and annual floods has been non-stationary, with either strengthening or weakening trends during the study period. We discuss the implications of these findings for science and management. Given the strong relationships between ENSO and annual floods, we suggest that more research is needed to assess relationships between ENSO and flood impacts (e.g. loss of lives or economic damage). Moreover, we suggest that in those regions where useful relationships exist, this information could be combined with ongoing advances in ENSO prediction research, in order to provide year-to-year probabilistic flood risk forecasts.

  3. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves how to deal with the large number of distributed parameters and optimization problems with multiple but often conflicting objectives that arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC with its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC by taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded in the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
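
    As a toy illustration of the Morris screening step described above, the sketch below uses the SALib package on a stand-in objective function; the parameter names, bounds, and output are illustrative assumptions, not the MOBIDIC parameters or objective functions.

```python
# Morris elementary-effects screening on a toy function (not MOBIDIC).
# Requires SALib (pip install SALib); parameter names and bounds are illustrative only.
import numpy as np
from SALib.sample.morris import sample
from SALib.analyze.morris import analyze

problem = {
    "num_vars": 3,
    "names": ["runoff_coeff", "soil_conductivity", "recession_const"],
    "bounds": [[0.1, 0.9], [1e-6, 1e-4], [0.01, 0.2]],
}

X = sample(problem, N=100, num_levels=4)               # Morris trajectories of parameter sets
Y = X[:, 0] ** 2 + 10.0 * X[:, 1] + 0.1 * X[:, 2]      # stand-in for a model output metric

Si = analyze(problem, X, Y, num_levels=4)
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:18s} mu* = {mu_star:.3g}  sigma = {sigma:.3g}")
```

    Parameters with small mu* (and small sigma) would be candidates for exclusion from the subsequent multiobjective optimization, as done for groundwater recession and soil hydraulic conductivity in the study.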

  4. Recurrence quantification analysis of global stock markets

    NASA Astrophysics Data System (ADS)

    Bastos, João A.; Caiado, Jorge

    2011-04-01

    This study investigates the presence of deterministic dependencies in international stock markets using recurrence plots and recurrence quantification analysis (RQA). The results are based on a large set of free float-adjusted market capitalization stock indices, covering a period of 15 years. The statistical tests suggest that the dynamics of stock prices in emerging markets are characterized by higher values of RQA measures when compared to their developed counterparts. The behavior of stock markets during critical financial events, such as the burst of the technology bubble, the Asian currency crisis, and the recent subprime mortgage crisis, is analyzed by performing RQA in sliding windows. It is shown that during these events stock markets exhibit a distinctive behavior that is characterized by temporary decreases in the fraction of recurrence points contained in diagonal and vertical structures.
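
    Two of the RQA measures underlying these results, the recurrence rate and the fraction of recurrence points forming diagonal structures (determinism), can be computed directly from a thresholded distance matrix. A minimal sketch on a synthetic series (not the market indices analyzed in the study):

```python
# Minimal recurrence quantification: recurrence rate (RR) and determinism (DET)
# for a synthetic series standing in for a stock-index return series.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=400)

eps = 0.2 * x.std()                                     # recurrence threshold
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)
np.fill_diagonal(R, 0)                                  # exclude the line of identity

n = len(x)
rr = R.sum() / (n * n - n)                              # recurrence rate

# Determinism: share of recurrence points on diagonal lines of length >= l_min.
l_min, diag_points = 2, 0
for k in range(-(n - 1), n):
    if k == 0:
        continue
    run = 0
    for v in list(np.diagonal(R, offset=k)) + [0]:      # trailing 0 flushes the last run
        if v:
            run += 1
        else:
            if run >= l_min:
                diag_points += run
            run = 0
det = diag_points / R.sum() if R.sum() else 0.0
print(f"RR = {rr:.3f}, DET = {det:.3f}")
```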

  5. Sensitivity analysis of dynamic biological systems with time-delays

    PubMed Central

    2010-01-01

    Background Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. Conclusions By comparing with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without programming background to do dynamic sensitivity analysis on complex

  6. Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures

    DTIC Science & Technology

    2005-03-01

    ... with respect to the nominal alloy composition at the center of the weld surface (Point 6 of Figure 7). Final Report: Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures. Office of Naval Research, 800 North Quincy Street, Arlington (report period ending 3/31/05).

  7. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  8. Preliminary sensitivity analysis of the Devonian shale in Ohio

    SciTech Connect

    Covatch, G.L.

    1985-06-01

    A preliminary sensitivity analysis of gas reserves in Devonian shale in Ohio was made on the six partitioned areas, based on a payout time of 3 years. Data sets were obtained from Lewin and Associates for the six partitioned areas in Ohio and used as a base case for the METC sensitivity analysis. A total of five different well stimulation techniques were evaluated in both the METC and Lewin studies. The five techniques evaluated were borehole shooting, a small radial stimulation, a large radial stimulation, a small vertical fracture, and a large vertical fracture.

  9. Stable locality sensitive discriminant analysis for image recognition.

    PubMed

    Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang

    2014-06-01

    Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in an unstable representation of the intra-class geometrical structure and insufficient performance of the algorithm. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it in the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.

  10. Parameter sensitivity analysis for different complexity land surface models using multicriteria methods

    NASA Astrophysics Data System (ADS)

    Bastidas, L. A.; Hogue, T. S.; Sorooshian, S.; Gupta, H. V.; Shuttleworth, W. J.

    2006-10-01

    A multicriteria algorithm, the MultiObjective Generalized Sensitivity Analysis (MOGSA), was used to investigate the parameter sensitivity of five different land surface models with increasing levels of complexity in the physical representation of the vegetation (BUCKET, CHASM, BATS 1, Noah, and BATS 2) at five different sites representing crop land/pasture, grassland, rain forest, cropland, and semidesert areas. The methodology allows for the inclusion of parameter interaction and does not require assumptions of independence between parameters, while at the same time allowing for the ranking of several single-criterion and a global multicriteria sensitivity indices. The analysis required on the order of 50 thousand model runs. The results confirm that parameters with similar "physical meaning" across different model structures behave in different ways depending on the model and the locations. It is also shown that after a certain level an increase in model structure complexity does not necessarily lead to better parameter identifiability, i.e., higher sensitivity, and that a certain level of overparameterization is observed. For the case of the BATS 1 and BATS 2 models, with essentially the same model structure but a more sophisticated vegetation model, paradoxically, the effect on parameter sensitivity is mainly reflected in the sensitivity of the soil-related parameters.

  11. SEDPHAT--a platform for global ITC analysis and global multi-method analysis of molecular interactions.

    PubMed

    Zhao, Huaying; Piszczek, Grzegorz; Schuck, Peter

    2015-04-01

    Isothermal titration calorimetry experiments can provide significantly more detailed information about molecular interactions when combined in global analysis. For example, global analysis can improve the precision of binding affinity and enthalpy, and of possible linkage parameters, even for simple bimolecular interactions, and greatly facilitate the study of multi-site and multi-component systems with competition or cooperativity. A pre-requisite for global analysis is the departure from the traditional binding model, including an 'n'-value describing unphysical, non-integral numbers of sites. Instead, concentration correction factors can be introduced to account for either errors in the concentration determination or for the presence of inactive fractions of material. SEDPHAT is a computer program that embeds these ideas and provides a graphical user interface for the seamless combination of biophysical experiments to be globally modeled with a large number of different binding models. It offers statistical tools for the rigorous determination of parameter errors, correlations, as well as advanced statistical functions for global ITC (gITC) and global multi-method analysis (GMMA). SEDPHAT will also take full advantage of error bars of individual titration data points determined with the unbiased integration software NITPIC. The present communication reviews principles and strategies of global analysis for ITC and its extension to GMMA in SEDPHAT. We will also introduce a new graphical tool for aiding experimental design by surveying the concentration space and generating simulated data sets, which can be subsequently statistically examined for their information content. This procedure can replace the 'c'-value as an experimental design parameter, which ceases to be helpful for multi-site systems and in the context of gITC.

  12. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  13. Breastfeeding policy: a globally comparative analysis

    PubMed Central

    Raub, Amy; Earle, Alison

    2013-01-01

    Abstract Objective To explore the extent to which national policies guaranteeing breastfeeding breaks to working women may facilitate breastfeeding. Methods An analysis was conducted of the number of countries that guarantee breastfeeding breaks, the daily number of hours guaranteed, and the duration of guarantees. To obtain current, detailed information on national policies, original legislation as well as secondary sources on 182 of the 193 Member States of the United Nations were examined. Regression analyses were conducted to test the association between national policy and rates of exclusive breastfeeding while controlling for national income level, level of urbanization, female percentage of the labour force and female literacy rate. Findings Breastfeeding breaks with pay are guaranteed in 130 countries (71%) and unpaid breaks are guaranteed in seven (4%). No policy on breastfeeding breaks exists in 45 countries (25%). In multivariate models, the guarantee of paid breastfeeding breaks for at least 6 months was associated with an increase of 8.86 percentage points in the rate of exclusive breastfeeding (P < 0.05). Conclusion A greater percentage of women practise exclusive breastfeeding in countries where laws guarantee breastfeeding breaks at work. If these findings are confirmed in longitudinal studies, health outcomes could be improved by passing legislation on breastfeeding breaks in countries that do not yet ensure the right to breastfeed. PMID:24052676

  14. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
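
    Rankings of this kind are typically based on the normalized forward sensitivity index of the basic reproduction number; as a sketch of that definition (with a generic parameter p rather than the specific parameters of this model):

\[
\Upsilon_{p}^{R_0} \;=\; \frac{\partial R_0}{\partial p}\,\frac{p}{R_0},
\]

    so that, for example, \(\Upsilon_{p}^{R_0} = 0.5\) means a 10% increase in p produces roughly a 5% increase in \(R_0\).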

  15. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
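
    The basic (non-adaptive) importance-sampling estimator behind this approach can be sketched for a one-dimensional limit state; the AIS method of the paper goes further by adapting the sampling density toward the failure domain, which is not shown here.

```python
# Basic importance-sampling estimate of a failure probability P[g(X) < 0].
# Plain (non-adaptive) estimator; the paper's AIS scheme updates the sampling
# density iteratively toward the failure domain.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
g = lambda x: 3.0 - x                                   # failure when x > 3, X ~ standard normal

n = 10_000
shift = 3.0                                             # center the sampling density near the failure region
x = rng.normal(loc=shift, scale=1.0, size=n)
weights = norm.pdf(x) / norm.pdf(x, loc=shift, scale=1.0)

pf_is = np.mean((g(x) < 0) * weights)
print(f"IS estimate: {pf_is:.2e}, exact: {norm.sf(3.0):.2e}")
```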

  16. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurements uncertainties is shown. The assessment, in an early stage, of the uncertainties, sensitivities and correlations of the parameters to be measured drives the user in the optimization of the OCD measurement strategy. Real examples are discussed revealing common pitfalls like hidden correlations and simulation results are compared with real measurements. Special emphasis is given to 2 different cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis result, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  17. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
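
    The "value +/- tolerance" convention can be handled with a small parser that perturbs each tolerated field and writes out Monte Carlo realizations of the input deck. The sketch below is only a guess at the flavor of the approach; the field names and file contents are hypothetical and do not reflect the actual LAURA, HARA, or FIAT input formats.

```python
# Sketch of the "value +/- tolerance" idea: scan input text for tolerated fields
# and emit perturbed copies for a Monte Carlo sensitivity study.
# The template text and field names are hypothetical placeholders.
import re
import numpy as np

template = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.85 +/- 0.05\n"
pattern = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

rng = np.random.default_rng(3)

def realize(text, rng):
    """Replace each 'value +/- tol' with a uniform draw from [value - tol, value + tol]."""
    def draw(match):
        value, tol = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(value - tol, value + tol):.6g}"
    return pattern.sub(draw, text)

for i in range(3):                                      # three Monte Carlo input decks
    print(f"--- sample {i} ---\n{realize(template, rng)}")
```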

  18. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  19. Global multi-level analysis of the 'scientific food web'.

    PubMed

    Mazloumian, Amin; Helbing, Dirk; Lozano, Sergi; Light, Robert P; Börner, Katy

    2013-01-01

    We introduce a network-based index analyzing excess scientific production and consumption to perform a comprehensive global analysis of scholarly knowledge production and diffusion on the level of continents, countries, and cities. Compared to measures of scientific production and consumption such as number of publications or citation rates, our network-based citation analysis offers a more differentiated picture of the 'ecosystem of science'. Quantifying knowledge flows between 2000 and 2009, we identify global sources and sinks of knowledge production. Our knowledge flow index reveals, where ideas are born and consumed, thereby defining a global 'scientific food web'. While Asia is quickly catching up in terms of publications and citation rates, we find that its dependence on knowledge consumption has further increased.

  20. Early differential sensitivity of evoked-potentials to local and global shape during the perception of three-dimensional objects.

    PubMed

    Leek, E Charles; Roberts, Mark; Oliver, Zoe J; Cristino, Filipe; Pegna, Alan J

    2016-08-01

    Here we investigated the time course underlying differential processing of local and global shape information during the perception of complex three-dimensional (3D) objects. Observers made shape matching judgments about pairs of sequentially presented multi-part novel objects. Event-related potentials (ERPs) were used to measure perceptual sensitivity to 3D shape differences in terms of local part structure and global shape configuration - based on predictions derived from hierarchical structural description models of object recognition. There were three types of different object trials in which stimulus pairs (1) shared local parts but differed in global shape configuration; (2) contained different local parts but shared global configuration or (3) shared neither local parts nor global configuration. Analyses of the ERP data showed differential amplitude modulation as a function of shape similarity as early as the N1 component between 146-215ms post-stimulus onset. These negative amplitude deflections were more similar between objects sharing global shape configuration than local part structure. Differentiation among all stimulus types was reflected in N2 amplitude modulations between 276-330ms. sLORETA inverse solutions showed stronger involvement of left occipitotemporal areas during the N1 for object discrimination weighted towards local part structure. The results suggest that the perception of 3D object shape involves parallel processing of information at local and global scales. This processing is characterised by relatively slow derivation of 'fine-grained' local shape structure, and fast derivation of 'coarse-grained' global shape configuration. We propose that the rapid early derivation of global shape attributes underlies the observed patterns of N1 amplitude modulations.

  1. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
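
    The point about linear models can be checked numerically: for Y = a1 X1 + a2 X2 with independent inputs, the first-order variance-based indices are exactly the normalized squared terms of the law of propagation of uncertainty. A plain Monte Carlo sketch, with coefficients and standard uncertainties chosen arbitrarily:

```python
# Check that, for a linear model, first-order variance-based indices coincide with
# the normalized terms of the law of propagation of uncertainty (LPU).
import numpy as np

rng = np.random.default_rng(4)
a1, a2 = 2.0, 0.5
u1, u2 = 0.3, 1.0                                       # standard uncertainties of X1, X2

# LPU contributions for Y = a1*X1 + a2*X2 with independent inputs
var_lpu = (a1 * u1) ** 2 + (a2 * u2) ** 2
s1_lpu = (a1 * u1) ** 2 / var_lpu
s2_lpu = (a2 * u2) ** 2 / var_lpu

# Brute-force first-order Sobol indices: S_i = Var(E[Y|X_i]) / Var(Y)
n = 200_000
x1, x2 = rng.normal(0, u1, n), rng.normal(0, u2, n)
y = a1 * x1 + a2 * x2
var_y = y.var()
s1_mc = (a1 * x1).var() / var_y                         # E[Y|X1] = a1*X1 for this additive model
s2_mc = (a2 * x2).var() / var_y
print(f"S1: LPU {s1_lpu:.3f} vs MC {s1_mc:.3f};  S2: LPU {s2_lpu:.3f} vs MC {s2_mc:.3f}")
```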

  2. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  3. Sensitivity analysis of the Ohio phosphorus risk index

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  4. Globalization and International Student Mobility: A Network Analysis

    ERIC Educational Resources Information Center

    Shields, Robin

    2013-01-01

    This article analyzes changes to the network of international student mobility in higher education over a 10-year period (1999-2008). International student flows have increased rapidly, exceeding 3 million in 2009, and extensive data on mobility provide unique insight into global educational processes. The analysis is informed by three theoretical…

  5. Global/local finite element analysis for textile composites

    NASA Technical Reports Server (NTRS)

    Woo, Kyeongsik; Whitcomb, John

    1993-01-01

    Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy.

  6. Global/local finite element analysis for textile composites

    SciTech Connect

    Woo, K.; Whitcomb, J.

    1993-01-01

    Conventional analysis of textile composites is impractical because of the complex microstructure. Global/local methodology combined with special macro elements is proposed herein as a practical alternative. Initial tests showed dramatic reductions in the computational effort with only small loss in accuracy. 9 refs.

  7. Ecological network analysis on global virtual water trade.

    PubMed

    Yang, Zhifeng; Mao, Xufeng; Zhao, Xu; Chen, Bin

    2012-02-07

    Global water interdependencies are likely to increase with growing virtual water trade. To address the indirect effects of water trade through global economic circulation, we use ecological network analysis (ENA) to shed light on the complicated system interactions. A global model of virtual water flow in agriculture and livestock production trade in 1995-1999 is also built as the basis for the network analysis. Control analysis is used to identify the quantitative control or dependency relations. The utility analysis provides further indicators for describing the mutual relationship between two regions/countries by imitating the interactions in an ecosystem, and distinguishes the beneficiaries and the contributors of the virtual water trade system. Results show that control and utility relations can depict the mutual relations in the trade system well, and that directly observable relations differ from integral ones once indirect interactions are considered. This paper offers a new way to depict the interrelations between trade components and can serve as a meaningful start as we continue to use ENA to provide more valuable implications for freshwater studies on a global scale.

  8. Global Analysis of Helicity PDFs: past - present - future

    SciTech Connect

    de Florian, D.; Stratmann, M.; Sassot, R.; Vogelsang, W.

    2011-04-11

    We discuss the current status of the DSSV global analysis of helicity-dependent parton densities. A comparison with recent semi-inclusive DIS data from COMPASS is presented, and constraints on the polarized strangeness density are examined in some detail.

  9. Global Analysis of Horizontal Gene Transfer in Fusarium verticillioides

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The co-occurrence of microbes within plants and other specialized niches may facilitate horizontal gene transfer (HGT) affecting host-pathogen interactions. We recently identified fungal-to-fungal HGTs involving metabolic gene clusters. For a global analysis of HGTs in the maize pathogen Fusarium ve...

  10. The Wmo Global Atmosphere Watch Programme: Global Framework for Atmospheric Composition Observations and Analysis

    NASA Astrophysics Data System (ADS)

    Tarasova, O. A.; Jalkanen, L.

    2010-12-01

    The WMO Global Atmosphere Watch (GAW) Programme is the only existing long-term international global programme providing an international coordinated framework for observations and analysis of the chemical composition of the atmosphere. GAW is a partnership involving contributors from about 80 countries. It includes a coordinated global network of observing stations along with supporting facilities (Central Facilities) and expert groups (Scientific Advisory Groups, SAGs and Expert Teams, ETs). Currently GAW coordinates activities and data from 27 Global Stations and a substantial number of Regional and Contributing Stations. Station information is available through the GAW Station Information System GAWSIS (http://gaw.empa.ch/gawsis/). There are six key groups of variables which are addressed by the GAW Programme, namely: ozone, reactive gases, greenhouse gases, aerosols, UV radiation and precipitation chemistry. GAW works to implement integrated observations unifying measurements from different platforms (ground based in situ and remote, balloons, aircraft and satellite) supported by modeling activities. GAW provides data for ozone assessments, Greenhouse Gas Bulletins, Ozone Bulletins and precipitation chemistry assessments published on a regular basis and for early warnings of changes in the chemical composition and related physical characteristics of the atmosphere. To ensure that observations can be used for global assessments, the GAW Programme has developed a Quality Assurance system. Five types of Central Facilities dedicated to the six groups of measurement variables are operated by WMO Members and form the basis of quality assurance and data archiving for the GAW global monitoring network. They include Central Calibration Laboratories (CCLs) that host primary standards (PS), Quality Assurance/Science Activity Centres (QA/SACs), World Calibration Centers (WCCs), Regional Calibration Centers (RCCs), and World Data Centers (WDCs) with responsibility for

  11. Global land cover mapping: a review and uncertainty analysis

    USGS Publications Warehouse

    Congalton, Russell G.; Gu, Jianyu; Yadav, Kamini; Thenkabail, Prasad S.; Ozdogan, Mutlu

    2014-01-01

    Given the advances in remotely sensed imagery and associated technologies, several global land cover maps have been produced in recent times including IGBP DISCover, UMD Land Cover, Global Land Cover 2000 and GlobCover 2009. However, the utility of these maps for specific applications has often been hampered due to considerable amounts of uncertainties and inconsistencies. A thorough review of these global land cover projects including evaluating the sources of error and uncertainty is prudent and enlightening. Therefore, this paper describes our work in which we compared, summarized and conducted an uncertainty analysis of the four global land cover mapping projects using an error budget approach. The results showed that the classification scheme and the validation methodology had the highest error contribution and implementation priority. A comparison of the classification schemes showed that there are many inconsistencies between the definitions of the map classes. This is especially true for the mixed type classes for which thresholds vary for the attributes/discriminators used in the classification process. Examination of these four global mapping projects provided quite a few important lessons for the future global mapping projects including the need for clear and uniform definitions of the classification scheme and an efficient, practical, and valid design of the accuracy assessment.

  12. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  13. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
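
    The "sensitivity curve" idea, plotting the sum of squared water-level residuals against a multiplier applied to one hydrological factor, can be sketched with a toy head model; the observed heads and the model below are placeholders, not the calibrated finite-element model of the study.

```python
# Sketch of a "sensitivity curve": sum of squared head residuals as one hydrological
# factor (here a toy transmissivity) is scaled by a multiplier.
import numpy as np

observed = np.array([10.0, 9.2, 8.1, 7.5, 6.9])         # observed water levels (placeholder)

def simulate_heads(transmissivity):
    x = np.arange(5)
    return 10.0 - 0.9 * x / transmissivity              # toy head profile along a flow line

calibrated_T = 1.0
multipliers = np.linspace(0.5, 2.0, 16)
sse = [np.sum((observed - simulate_heads(calibrated_T * m)) ** 2) for m in multipliers]

for m, s in zip(multipliers, sse):
    print(f"multiplier {m:4.2f} -> sum of squared residuals {s:7.3f}")
```

    A curve that rises steeply near a multiplier of 1 indicates a factor to which the computed water levels are highly sensitive.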

  14. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    The previous sensitivity analysis researches are not accurate enough and also have the limited reference value, because those mathematical models are relatively simple and the change of the load and the initial displacement changes of the piston are ignored, even experiment verification is not conducted. Therefore, in view of deficiencies above, a nonlinear mathematical model is established in this paper, including dynamic characteristics of servo valve, nonlinear characteristics of pressure-flow, initial displacement of servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed loop position control, as well as the state equations. Through deriving the time-varying coefficient items matrix and time-varying free items matrix of sensitivity equations respectively, the expression of sensitivity equations based on the nonlinear mathematical model are obtained. According to structure parameters of hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, the simulation analysis of hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with the displacement step 2 mm, 5 mm and 10 mm, respectively. The simulation results indicate that the developed nonlinear mathematical model is sufficient by comparing the characteristic curves of experimental step response and simulation step response under different constant load. Then, the sensitivity function time-history curves of seventeen parameters are obtained, basing on each state vector time-history curve of step response characteristic. The maximum value of displacement variation percentage and the sum of displacement variation absolute values in the sampling time are both taken as sensitivity indexes. The sensitivity indexes values above are calculated and shown visually in histograms under different working conditions, and change rules are analyzed. Then the sensitivity

  15. Global Analysis, Interpretation, and Modelling: First Science Conference

    NASA Technical Reports Server (NTRS)

    Sahagian, Dork

    1995-01-01

    Topics considered include: Biomass of termites and their emissions of methane and carbon dioxide - A global database; Carbon isotope discrimination during photosynthesis and the isotope ratio of respired CO2 in boreal forest ecosystems; Estimation of methane emission from rice paddies in mainland China; Climate and nitrogen controls on the geography and timescales of terrestrial biogeochemical cycling; Potential role of vegetation feedback in the climate sensitivity of high-latitude regions - A case study at 6000 years B.P.; Interannual variation of carbon exchange fluxes in terrestrial ecosystems; and Variations in modeled atmospheric transport of carbon dioxide and the consequences for CO2 inversions.

  16. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  17. Sensitive Chiral Analysis via Microwave Three-Wave Mixing

    NASA Astrophysics Data System (ADS)

    Patterson, David; Doyle, John M.

    2013-07-01

    We demonstrate chirality-induced three-wave mixing in the microwave regime, using rotational transitions in cold gas-phase samples of 1,2-propanediol and 1,3-butanediol. We show that bulk three-wave mixing, which can only be realized in a chiral environment, provides a sensitive, species-selective probe of enantiomeric excess and is applicable to a broad class of molecules. The doubly resonant condition provides simultaneous identification of species and of handedness, which should allow sensitive chiral analysis even within a complex mixture.

  18. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    SciTech Connect

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  19. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect,'' the high sensitivity of chaotic dynamical systems to initial conditions. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
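
    The breakdown itself is easy to reproduce on a small chaotic system: a finite-difference estimate of the sensitivity of a time-averaged quantity does not settle down as the averaging window grows. The sketch below shows the problem on the Lorenz system; it does not implement Least Squares Shadowing itself.

```python
# Illustration of the breakdown LSS is designed to avoid: a finite-difference estimate
# of d<z>/drho for the Lorenz system fluctuates strongly with the averaging window.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, u, sigma, rho, beta):
    x, y, z = u
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def mean_z(rho, t_end):
    sol = solve_ivp(lorenz, (0.0, t_end), [1.0, 1.0, 1.0],
                    args=(10.0, rho, 8.0 / 3.0),
                    t_eval=np.linspace(5.0, t_end, 2000), rtol=1e-8, atol=1e-10)
    return sol.y[2].mean()                              # time-averaged z, skipping the transient

drho = 1e-3
for t_end in (20.0, 50.0, 100.0):
    grad = (mean_z(28.0 + drho, t_end) - mean_z(28.0 - drho, t_end)) / (2 * drho)
    print(f"T = {t_end:5.1f}  finite-difference d<z>/drho ~ {grad:+.2f}")
```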

  20. The sensitivity of soil respiration to soil temperature, moisture, and carbon supply at the global scale.

    PubMed

    Hursh, Andrew; Ballantyne, Ashley; Cooper, Leila; Maneta, Marco; Kimball, John; Watts, Jennifer

    2017-05-01

    Soil respiration (Rs) is a major pathway by which fixed carbon in the biosphere is returned to the atmosphere, yet there are limits to our ability to predict respiration rates using environmental drivers at the global scale. While temperature, moisture, carbon supply, and other site characteristics are known to regulate soil respiration rates at plot scales within certain biomes, quantitative frameworks for evaluating the relative importance of these factors across different biomes and at the global scale require tests of the relationships between field estimates and global climatic data. This study evaluates the factors driving Rs at the global scale by linking global datasets of soil moisture, soil temperature, primary productivity, and soil carbon estimates with observations of annual Rs from the Global Soil Respiration Database (SRDB). We find that calibrating models with parabolic soil moisture functions can improve predictive power over similar models with asymptotic functions of mean annual precipitation. Soil temperature is comparable with previously reported air temperature observations used in predicting Rs and is the dominant driver of Rs in global models; however, within certain biomes soil moisture and soil carbon emerge as dominant predictors of Rs. We identify regions where typical temperature-driven responses are further mediated by soil moisture, precipitation, and carbon supply and regions in which environmental controls on high Rs values are difficult to ascertain due to limited field data. Because soil moisture integrates temperature and precipitation dynamics, it can more directly constrain the heterotrophic component of Rs, but global-scale models tend to smooth its spatial heterogeneity by aggregating factors that increase moisture variability within and across biomes. We compare statistical and mechanistic models that provide independent estimates of global Rs ranging from 83 to 108 Pg yr(-1) , but also highlight regions of uncertainty

  1. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results of the demonstrative example are compared with the experimental data. It is shown that the method is more efficient than the traditional methods.
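
    The "predicted flow" used during the one-dimensional search is a first-order Taylor expansion of the flow-field vector in the design variables; schematically, with generic symbols Q for the flow variables and D for the design variables,

\[
\mathbf{Q}(\mathbf{D} + \Delta\mathbf{D}) \;\approx\; \mathbf{Q}(\mathbf{D}) + \sum_{k}\frac{\partial \mathbf{Q}}{\partial D_{k}}\,\Delta D_{k},
\]

    with the derivatives supplied by the quasi-analytical sensitivity analysis rather than by repeated flow solutions.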

  2. The Global Challenge of Antimicrobial Resistance: Insights from Economic Analysis

    PubMed Central

    Eggleston, Karen; Zhang, Ruifang; Zeckhauser, Richard J.

    2010-01-01

    The prevalence of antimicrobial resistance (AR) limits the therapeutic options for treatment of infections, and increases the social benefit from disease prevention. Like an environmental resource, antimicrobials require stewardship. The effectiveness of an antimicrobial agent is a global public good. We argue for greater use of economic analysis as an input to policy discussion about AR, including for understanding the incentives underlying health behaviors that spawn AR, and to supplement other methods of tracing the evolution of AR internationally. We also discuss integrating antimicrobial stewardship into global health governance. PMID:20948953

  3. Sensitivity and Uncertainty Analysis to Burnup Estimates on ADS using the ACAB Code

    SciTech Connect

    Cabellos, O.; Sanz, J.; Rodriguez, A.; Gonzalez, E.; Embid, M.; Alvarez, F.; Reyes, S.

    2005-05-24

    Within the scope of the Accelerator Driven System (ADS) concept for nuclear waste management applications, the burnup uncertainty estimates due to uncertainty in the activation cross sections (XSs) are important regarding both the safety and the efficiency of the waste burning process. We have applied both sensitivity analysis and Monte Carlo methodology to actinides burnup calculations in a lead-bismuth cooled subcritical ADS. The sensitivity analysis is used to identify the reaction XSs and the dominant chains that contribute most significantly to the uncertainty. The Monte Carlo methodology gives the burnup uncertainty estimates due to the synergetic/global effect of the complete set of XS uncertainties. These uncertainty estimates are valuable to assess the need of any experimental or systematic re-evaluation of some uncertainty XSs for ADS.

  4. Methylation-sensitive amplified polymorphism analysis of Verticillium wilt-stressed cotton (Gossypium).

    PubMed

    Wang, W; Zhang, M; Chen, H D; Cai, X X; Xu, M L; Lei, K Y; Niu, J H; Deng, L; Liu, J; Ge, Z J; Yu, S X; Wang, B H

    2016-10-06

    In this study, a methylation-sensitive amplification polymorphism analysis system was used to analyze DNA methylation level in three cotton accessions. Two disease-sensitive near-isogenic lines, PD94042 and IL41, and one disease-resistant Gossypium mustelinum accession were exposed to Verticillium wilt, to investigate molecular disease resistance mechanisms in cotton. We observed multiple different DNA methylation types across the three accessions following Verticillium wilt exposure. These included hypomethylation, hypermethylation, and other patterns. In general, the global DNA methylation level was significantly increased in the disease-resistant accession G. mustelinum following disease exposure. In contrast, there was no significant difference in the disease-sensitive accession PD94042, and a significant decrease was observed in IL41. Our results suggest that disease-resistant cotton might employ a mechanism to increase methylation level in response to disease stress. The differing methylation patterns, together with the increase in global DNA methylation level, might play important roles in tolerance to Verticillium wilt in cotton. Through cloning and analysis of differently methylated DNA sequences, we were also able to identify several genes that may contribute to disease resistance in cotton. Our results revealed the effect of DNA methylation on cotton disease resistance, and also identified genes that played important roles, which may shed light on the future cotton disease-resistant molecular breeding.

  5. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of landslide susceptibility model is decomposed and attributed to model's criteria weights.
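
    The weight-perturbation step of such an analysis can be sketched as a Monte Carlo loop over criteria weights for a simple weighted-sum suitability model; the criteria values, Dirichlet concentration, and cell count below are illustrative, and the sketch omits the AHP/OWA specifics and the Dempster-Shafer validation used in the paper.

```python
# Sketch of Monte Carlo sensitivity of an MCDA weighted-sum score to criteria weights.
# Criteria values and the perturbation scale are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(5)
criteria = rng.uniform(size=(1000, 4))                  # 1000 map cells, 4 standardized criteria
base_weights = np.array([0.4, 0.3, 0.2, 0.1])           # e.g. weights from a pairwise comparison

scores = []
for _ in range(500):
    w = rng.dirichlet(base_weights * 50.0)              # perturbed weights, still summing to 1
    scores.append(criteria @ w)
scores = np.array(scores)

score_std = scores.std(axis=0)                          # per-cell spread of the suitability score
print(f"mean cell score spread: {score_std.mean():.3f}, max: {score_std.max():.3f}")
```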

  6. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  7. Stability investigations of airfoil flow by global analysis

    NASA Technical Reports Server (NTRS)

    Morzynski, Marek; Thiele, Frank

    1992-01-01

    As the result of global, non-parallel flow stability analysis the single value of the disturbance growth-rate and respective frequency is obtained. This complex value characterizes the stability of the whole flow configuration and is not referred to any particular flow pattern. The global analysis assures that all the flow elements (wake, boundary and shear layer) are taken into account. The physical phenomena connected with the wake instability are properly reproduced by the global analysis. This enhances the investigations of instability of any 2-D flows, including ones in which the boundary layer instability effects are known to be of dominating importance. Assuming fully 2-D disturbance form, the global linear stability problem is formulated. The system of partial differential equations is solved for the eigenvalues and eigenvectors. The equations, written in the pure stream function formulation, are discretized via FDM using a curvilinear coordinate system. The complex eigenvalues and corresponding eigenvectors are evaluated by an iterative method. The investigations performed for various Reynolds numbers emphasize that the wake instability develops into the Karman vortex street. This phenomenon is shown to be connected with the first mode obtained from the non-parallel flow stability analysis. The higher modes reflect different physical phenomena, for example Tollmien-Schlichting waves, originating in the boundary layer and having the tendency to emerge as instabilities for growing Reynolds number. The investigations are carried out for a circular cylinder, an oblong ellipse and an airfoil. It is shown that the onset of the wake instability, the waves in the boundary layer, the shear layer instability are different solutions of the same eigenvalue problem, formulated using the non-parallel theory. The analysis offers large potential possibilities as the generalization of methods used till now for the stability analysis.

  8. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.

  9. SENSIT: a cross-section and design sensitivity and uncertainty analysis code. [In FORTRAN for CDC-7600, IBM 360

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
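
    The covariance-based part of such an analysis reduces to the first-order "sandwich rule", relative variance = S^T C S, where S is the relative sensitivity profile and C the relative covariance matrix of the cross sections. A minimal sketch with made-up numbers (not SENSIT inputs or outputs):

      import numpy as np

      # Hypothetical relative sensitivity profile dR/R per dSigma/Sigma for
      # four energy groups of one reaction (illustrative values only).
      S = np.array([0.12, -0.05, 0.30, 0.08])

      # Hypothetical relative covariance matrix of the group cross sections.
      C = np.array([
          [0.010, 0.004, 0.000, 0.000],
          [0.004, 0.020, 0.002, 0.000],
          [0.000, 0.002, 0.015, 0.003],
          [0.000, 0.000, 0.003, 0.025],
      ])

      # First-order ("sandwich rule") relative variance of the integral response.
      rel_var = S @ C @ S
      print("relative standard deviation of the response:", np.sqrt(rel_var).round(4))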

  10. Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania

    PubMed Central

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2016-01-01

    This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with the control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of infectious humans, exposed humans and the vector population becoming infected. Numerical simulations are carried out with the help of a Runge-Kutta fourth order procedure. PMID:27505634
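
    The normalized forward sensitivity index commonly used in such analyses, (∂R0/∂p)·(p/R0), is easy to compute numerically. The sketch below uses a generic SIR-type placeholder expression for R0, not the Leishmania model of the paper:

      import numpy as np

      def R0(beta, gamma, mu):
          # Placeholder basic reproduction number of a simple SIR-type model.
          return beta / (gamma + mu)

      params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}

      def sensitivity_index(name, h=1e-6):
          # Elasticity: percent change in R0 per percent change in the parameter.
          p = dict(params)
          base = R0(**p)
          p[name] = params[name] * (1.0 + h)
          dR0_dp = (R0(**p) - base) / (params[name] * h)
          return dR0_dp * params[name] / base

      for name in params:
          print(name, round(sensitivity_index(name), 3))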

  11. Low global sensitivity of metabolic rate to temperature in calcified marine invertebrates.

    PubMed

    Watson, Sue-Ann; Morley, Simon A; Bates, Amanda E; Clark, Melody S; Day, Robert W; Lamare, Miles; Martin, Stephanie M; Southgate, Paul C; Tan, Koh Siang; Tyler, Paul A; Peck, Lloyd S

    2014-01-01

    Metabolic rate is a key component of energy budgets that scales with body size and varies with large-scale environmental geographical patterns. Here we conduct an analysis of standard metabolic rates (SMR) of marine ectotherms across a 70° latitudinal gradient in both hemispheres that spanned collection temperatures of 0-30 °C. To account for latitudinal differences in the size and skeletal composition between species, SMR was mass normalized to that of a standard-sized (223 mg) ash-free dry mass individual. SMR was measured for 17 species of calcified invertebrates (bivalves, gastropods, urchins and brachiopods), using a single consistent methodology, including 11 species whose SMR was described for the first time. SMR of 15 out of 17 species had a mass-scaling exponent between 2/3 and 1, with no greater support for a 3/4 rather than a 2/3 scaling exponent. After accounting for taxonomy and variability in parameter estimates among species using variance-weighted linear mixed effects modelling, temperature sensitivity of SMR had an activation energy (Ea) of 0.16 for both Northern and Southern Hemisphere species which was lower than predicted under the metabolic theory of ecology (Ea 0.2-1.2 eV). Northern Hemisphere species, however, had a higher SMR at each habitat temperature, but a lower mass-scaling exponent relative to SMR. Evolutionary trade-offs that may be driving differences in metabolic rate (such as metabolic cold adaptation of Northern Hemisphere species) will have important impacts on species abilities to respond to changing environments.
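
    The two quantities emphasized here, the mass-scaling exponent and the apparent activation energy, are often estimated from a linearized model of the form ln(SMR) = ln(b0) + b·ln(M) − Ea/(k·T). The sketch below fits that model to synthetic data by ordinary least squares; it illustrates the estimation idea only and is not the variance-weighted mixed-effects analysis used in the paper:

      import numpy as np

      k_B = 8.617e-5                      # Boltzmann constant in eV/K
      rng = np.random.default_rng(0)

      # Synthetic data: body masses, habitat temperatures (K) and log SMR generated
      # with a known exponent and activation energy, plus noise.
      mass = rng.uniform(50, 1000, 200)
      temp_K = rng.uniform(273.15, 303.15, 200)
      true_b, true_Ea = 0.75, 0.16
      log_smr = (np.log(0.02) + true_b * np.log(mass)
                 - true_Ea / (k_B * temp_K) + rng.normal(0.0, 0.1, 200))

      # Ordinary least squares on the linearized model ln(SMR) = a + b ln(M) - Ea/(k T).
      X = np.column_stack([np.ones_like(mass), np.log(mass), -1.0 / (k_B * temp_K)])
      coef, *_ = np.linalg.lstsq(X, log_smr, rcond=None)
      print("mass-scaling exponent b ~", round(coef[1], 3))
      print("activation energy Ea  ~", round(coef[2], 3), "eV")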

  12. Performance, robustness and sensitivity analysis of the nonlinear tuned vibration absorber

    NASA Astrophysics Data System (ADS)

    Detroux, T.; Habib, G.; Masset, L.; Kerschen, G.

    2015-08-01

    The nonlinear tuned vibration absorber (NLTVA) is a recently developed nonlinear absorber which generalizes Den Hartog's equal peak method to nonlinear systems. If the purposeful introduction of nonlinearity can enhance system performance, it can also give rise to adverse dynamical phenomena, including detached resonance curves and quasiperiodic regimes of motion. Through the combination of numerical continuation of periodic solutions, bifurcation detection and tracking, and global analysis, the present study identifies boundaries in the NLTVA parameter space delimiting safe, unsafe and unacceptable operations. The sensitivity of these boundaries to uncertainty in the NLTVA parameters is also investigated.

  13. Lattice Boltzmann methods for global linear instability analysis

    NASA Astrophysics Data System (ADS)

    Pérez, José Miguel; Aguilar, Alfonso; Theofilis, Vassilis

    2016-11-01

    Modal global linear instability analysis is performed using, for the first time ever, the lattice Boltzmann method (LBM) to analyze incompressible flows with two and three inhomogeneous spatial directions. Four linearization models have been implemented in order to recover the linearized Navier-Stokes equations in the incompressible limit. Two of those models employ the single relaxation time and have been proposed previously in the literature as linearization of the collision operator of the lattice Boltzmann equation. Two additional models are derived herein for the first time by linearizing the local equilibrium probability distribution function. Instability analysis results are obtained in three benchmark problems, two in closed geometries and one in open flow, namely the square and cubic lid-driven cavity flow and flow in the wake of the circular cylinder. Comparisons with results delivered by classic spectral element methods verify the accuracy of the proposed new methodologies and point potential limitations particular to the LBM approach. The known issue of appearance of numerical instabilities when the SRT model is used in direct numerical simulations employing the LBM is shown to be reflected in a spurious global eigenmode when the SRT model is used in the instability analysis. Although this mode is absent in the multiple relaxation times model, other spurious instabilities can also arise and are documented herein. Areas of potential improvements in order to make the proposed methodology competitive with established approaches for global instability analysis are discussed.

  14. Spectrograph sensitivity analysis: an efficient tool for different design phases

    NASA Astrophysics Data System (ADS)

    Genoni, M.; Riva, M.; Pariani, G.; Aliverti, M.; Moschetti, M.

    2016-08-01

    In this paper we present an efficient tool developed to perform opto-mechanical tolerance and sensitivity analysis both for the preliminary and final design phases of a spectrograph. With this tool it will be possible to evaluate the effect of mechanical perturbation of each single spectrograph optical element in terms of image stability, i.e. the motion of the echellogram on the spectrograph focal plane, and of image quality, i.e. the spot size of the different echellogram wavelengths. We present the MATLAB-Zemax script architecture of the tool. In addition we present the detailed results concerning its application to the sensitivity analysis of the ESPRESSO spectrograph (the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations which will be soon installed on ESO's Very Large Telescope) in the framework of the incoming assembly, alignment and integration phases.

  15. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  16. Adaptive approach for nonlinear sensitivity analysis of reaction kinetics.

    PubMed

    Horenko, Illia; Lorenz, Sönke; Schütte, Christof; Huisinga, Wilhelm

    2005-07-15

    We present a unified approach for linear and nonlinear sensitivity analysis of models of reaction kinetics that are stated in terms of systems of ordinary differential equations (ODEs). The approach is based on the reformulation of the ODE problem as a density transport problem described by a Fokker-Planck equation. The resulting multidimensional partial differential equation is solved here by extending the TRAIL algorithm originally introduced by Horenko and Weiser in the context of molecular dynamics (J. Comp. Chem. 2003, 24, 1921), and the extension is discussed in comparison with Monte Carlo techniques. The extended TRAIL approach is fully adaptive and readily allows one to study the influence of nonlinear dynamical effects. We illustrate the scheme by applying it to an enzyme-substrate model problem for sensitivity analysis with respect to initial concentrations and parameter values.

  17. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi Hu, Rui Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  18. Objective analysis of the ARM IOP data: method and sensitivity

    SciTech Connect

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

    Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken during the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  19. Probabilistic Sensitivity Analysis for Launch Vehicles with Varying Payloads and Adapters for Structural Dynamics and Loads

    NASA Technical Reports Server (NTRS)

    McGhee, David S.; Peck, Jeff A.; McDonald, Emmett J.

    2012-01-01

    This paper examines Probabilistic Sensitivity Analysis (PSA) methods and tools in an effort to understand their utility in vehicle loads and dynamic analysis. Specifically, this study addresses how these methods may be used to establish limits on payload mass and cg location and requirements on adaptor stiffnesses while maintaining vehicle loads and frequencies within established bounds. To this end, PSA methods and tools are applied to a realistic, but manageable, integrated launch vehicle analysis where payload and payload adaptor parameters are modeled as random variables. This analysis is used to study both Regional Response PSA (RRPSA) and Global Response PSA (GRPSA) methods, with a primary focus on sampling based techniques. For contrast, some MPP based approaches are also examined.

  20. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  1. Graphical methods for the sensitivity analysis in discriminant analysis

    SciTech Connect

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  2. Hydrological sensitivity to greenhouse gases and aerosols in a global climate model

    NASA Astrophysics Data System (ADS)

    Kvalevåg, Maria Malene; Samset, Bjørn H.; Myhre, Gunnar

    2013-04-01

    Changes in greenhouse gases and aerosols alter the atmospheric energy budget on different time scales and at different levels in the atmosphere. We study the relationship between global mean precipitation changes, radiative forcing, and surface temperature change since preindustrial times caused by several climate change components (CO2, CH4, sulphate and black carbon (BC) aerosols, and solar forcing) using the National Center for Atmospheric Research Community Earth System Model (CESM1.03). We find a fast response in precipitation due to atmospheric instability that correlates with radiative forcing associated with atmospheric absorption and a slower response caused by changes in surface temperature which correlates with radiative forcing at the top of the atmosphere. In general, global climate models show large differences in climate response to global warming, but here we find a strong relationship between global mean radiative forcing and global mean precipitation changes that is very consistent with other models, indicating that precipitation changes from a particular forcing mechanism are more robust than previously expected. In addition, we look at the precipitation response and relate it to changes in lifetime of atmospheric water vapor (τ). BC aerosols have a significantly larger impact on changes in τ related to surface temperature compared to greenhouse gases, sulphate aerosols, and solar forcing and are the dominating forcing mechanism affecting fast precipitation in this quantity.

  3. A Sensitivity Analysis of Entry Age Normal Military Retirement Costs.

    DTIC Science & Technology

    1983-09-01

    sensitivity analysis of both the individual and aggregate entry age normal actuarial cost models under differing economic, managerial and legal assumptions... actuarial cost models under differing economic, managerial and legal assumptions. In addition to the above, a set of simple estimating equations... actuarially computed variables are listed since the model uses each pay-grade's individual actuarial data (e.g. the life expectancy of a retiring

  4. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    NASA Astrophysics Data System (ADS)

    Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
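
    A hedged sketch of the idea behind a DELSA-style analysis: evaluate a derivative-based first-order sensitivity measure at many points spread across the parameter space, so that parameter importance can be mapped rather than summarized by a single global index. The two-parameter model and prior ranges below are stand-ins, not the hydrologic models of the paper:

      import numpy as np

      rng = np.random.default_rng(1)

      def model(k, n):
          # Stand-in scalar output of a two-parameter model (hypothetical).
          return (1.0 - np.exp(-k)) ** n

      # Uniform prior ranges and the corresponding variances of the two parameters.
      k_range, n_range = (0.1, 2.0), (0.5, 3.0)
      var_k = (k_range[1] - k_range[0]) ** 2 / 12.0
      var_n = (n_range[1] - n_range[0]) ** 2 / 12.0

      def delsa_indices(k, n, h=1e-5):
          # Local, derivative-based first-order indices that sum to one at this point.
          dy_dk = (model(k + h, n) - model(k - h, n)) / (2 * h)
          dy_dn = (model(k, n + h) - model(k, n - h)) / (2 * h)
          contrib = np.array([dy_dk**2 * var_k, dy_dn**2 * var_n])
          return contrib / contrib.sum()

      # Evaluate the local indices at many points spread over the parameter space.
      samples = np.column_stack([rng.uniform(*k_range, 500), rng.uniform(*n_range, 500)])
      indices = np.array([delsa_indices(k, n) for k, n in samples])
      print("median local index (k, n):", np.median(indices, axis=0).round(3))
      print("fraction of space where n dominates:", float((indices[:, 1] > 0.5).mean()))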

  5. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
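
    The analytic eigenvalue sensitivity referred to here is commonly written as dλ/dp = yᴴ(dA/dp)x / (yᴴx), with x and y the right and left eigenvectors of the (generally non-Hermitian) matrix. A small self-contained illustration with a random complex matrix (assumed to have simple eigenvalues), checked against a finite difference:

      import numpy as np

      rng = np.random.default_rng(3)
      n = 5
      A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
      dA = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))   # dA/dp

      # Right eigenvectors of A; left eigenvectors obtained as eigenvectors of A^H.
      lam, X = np.linalg.eig(A)
      mu, Y = np.linalg.eig(A.conj().T)
      order = [np.argmin(np.abs(mu.conj() - li)) for li in lam]   # pair left/right vectors
      Y = Y[:, order]

      # Analytic first-order sensitivity of each eigenvalue: y^H (dA/dp) x / (y^H x).
      dlam = np.array([Y[:, i].conj() @ dA @ X[:, i] / (Y[:, i].conj() @ X[:, i])
                       for i in range(n)])

      # Finite-difference check.
      eps = 1e-6
      lam_eps = np.linalg.eig(A + eps * dA)[0]
      fd = np.array([(lam_eps[np.argmin(np.abs(lam_eps - li))] - li) / eps for li in lam])
      print("max |analytic - finite difference|:", np.max(np.abs(dlam - fd)))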

  6. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  7. Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval

    NASA Technical Reports Server (NTRS)

    Gat, Ilana

    2012-01-01

    The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields of view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method using the new channel ranking system on CO2 retrieval.

  8. A Global Optimization Approach to Multi-Polarity Sentiment Analysis

    PubMed Central

    Li, Xinmiao; Li, Jing; Wu, Yukeng

    2015-01-01

    Following the rapid development of social media, sentiment analysis has become an important social media mining technique. The performance of automatic sentiment analysis primarily depends on feature selection and sentiment classification. While information gain (IG) and support vector machines (SVM) are two important techniques, few studies have optimized both approaches in sentiment analysis. The effectiveness of applying a global optimization approach to sentiment analysis remains unclear. We propose a global optimization-based sentiment analysis (PSOGO-Senti) approach to improve sentiment analysis with IG for feature selection and SVM as the learning engine. The PSOGO-Senti approach utilizes a particle swarm optimization algorithm to obtain a global optimal combination of feature dimensions and parameters in the SVM. We evaluate the PSOGO-Senti model on two datasets from different fields. The experimental results showed that the PSOGO-Senti model can improve binary and multi-polarity Chinese sentiment analysis. We compared the optimal feature subset selected by PSOGO-Senti with the features in the sentiment dictionary. The results of this comparison indicated that PSOGO-Senti can effectively remove redundant and noisy features and can select a domain-specific feature subset with a higher-explanatory power for a particular sentiment analysis task. The experimental results showed that the PSOGO-Senti approach is effective and robust for sentiment analysis tasks in different domains. By comparing the improvements of two-polarity, three-polarity and five-polarity sentiment analysis results, we found that the five-polarity sentiment analysis delivered the largest improvement. The improvement of the two-polarity sentiment analysis was the smallest. We conclude that the PSOGO-Senti achieves higher improvement for a more complicated sentiment analysis task. We also compared the results of PSOGO-Senti with those of the genetic algorithm (GA) and grid search method. From

  9. On the variational data assimilation problem solving and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arcucci, Rossella; D'Amore, Luisa; Pistoia, Jenny; Toumi, Ralf; Murli, Almerico

    2017-04-01

    We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it arises when it is employed for the analysis of temperature and salinity variations in data collected in closed and semi-closed seas. We present a computational approach for solving the main computational kernel at the heart of the VarDA problem, which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide a sensitivity analysis of this approach and also study its performance in terms of the accuracy gain on the computed solution. We provide validations on two realistic oceanographic data sets.
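
    The Tikhonov-regularized normal equations underlying this kind of approach can be sketched in a few lines; the ill-conditioned operator and data below are random placeholders, not the oceanographic VarDA operators:

      import numpy as np

      rng = np.random.default_rng(7)

      # Ill-conditioned placeholder operator (rapidly decaying singular values) and noisy data.
      m, n = 80, 60
      U, _ = np.linalg.qr(rng.standard_normal((m, m)))
      V, _ = np.linalg.qr(rng.standard_normal((n, n)))
      s = np.logspace(0, -8, n)
      A = U[:, :n] * s @ V.T
      x_true = rng.standard_normal(n)
      b = A @ x_true + 1e-6 * rng.standard_normal(m)

      def tikhonov(A, b, lam):
          # Solve min ||A x - b||^2 + lam ||x||^2 via the regularized normal equations.
          return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

      for lam in (1e-12, 1e-8, 1e-4, 1e-1):
          x = tikhonov(A, b, lam)
          err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
          print(f"lambda={lam:g}  relative error={err:.3f}")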

  10. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  11. A Comparative Analysis of Global Cropping Systems Models and Maps

    NASA Astrophysics Data System (ADS)

    Anderson, W. B.; You, L.; Wood, S.; Wood-Sichra, U.; Wu, W.

    2013-12-01

    Agricultural practices have dramatically altered the land cover of the Earth, but the spatial extent and intensity of these practices are often difficult to catalogue. Cropland accounts for nearly 15 million km2 of the Earth's land cover - amounting to 12% of the Earth's ice-free land surface - yet information on the distribution and performance of specific crops is often available only through national or sub-national statistics. While remote sensing products offer spatially disaggregated information, those currently available on a global scale are ill-suited for many applications due to the limited separation of crop types within the area classified as cropland. Recently, however, there have been multiple independent efforts to combine the detailed information available from statistical surveys with supplemental spatial information to produce a spatially explicit global dataset specific to individual crops for the year 2000. While these datasets provide analysts and decision makers with improved information on global cropping systems, the final global cropping maps differ from one another substantially. This study aims to explore and quantify systematic similarities and differences between four major global cropping systems products: the monthly irrigated and rainfed crop areas around the year 2000 (MIRCA2000) dataset, the spatial production allocation model (SPAM), the global agro-ecological zone (GAEZ) dataset, and the dataset developed by Monfreda et al., 2008. The analysis explores not only the final cropping systems maps but also the interdependencies of each product, methodological differences and modeling assumptions, which will provide users with information vital for discerning between datasets in selecting a product appropriate for each intended application.

  12. Analysis of the stability and sensitivity of jets in crossflow

    NASA Astrophysics Data System (ADS)

    Regan, Marc; Mahesh, Krishnan

    2016-11-01

    Jets in crossflow (transverse jets) are a canonical fluid flow in which a jet of fluid is injected normal to a crossflow. A high-fidelity, unstructured, incompressible, DNS solver is shown (Iyer & Mahesh 2016) to reproduce the complex shear layer instability seen in low-speed jets in crossflow experiments. Vertical velocity spectra taken along the shear layer show good agreement between simulation and experiment. An analogy to countercurrent mixing layers has been proposed to explain the transition from absolute to convective stability with increasing jet to crossflow ratios. Global linear stability and adjoint sensitivity techniques are developed within the unstructured DNS solver in an effort to further understand the stability and sensitivity of jets in crossflow. An Arnoldi iterative approach is used to solve for the most unstable eigenvalues and their associated eigenmodes for the direct and adjoint formulations. Frequencies from the direct and adjoint modal analyses show good agreement with simulation and experiment. Development, validation, and results for the transverse jet will be presented. Supported by AFOSR.

  13. Global Climate Simulation in a Multi-scale Modeling Framework: Sensitivity to GCM- Resolution

    NASA Astrophysics Data System (ADS)

    Duffy, P. B.; Bala, G.; Gleckler, P. J.; Taylor, K. E.; Mirin, A. A.; Wickett, M. E.

    2006-12-01

    We investigate the sensitivity of the simulated climate in the NCAR CAM3 atmospheric climate model to increases in horizontal spatial resolution, and to the use of an alternative representation of unresolved motions. In the Multi-scale Modeling Framework (MMF), cloud parameterizations are replaced by a two-dimensional Cloud System Resolving Model (CSRM) that is embedded in each column of the general circulation model (GCM). Here we investigate both the resolution sensitivity of the baseline version (that employing parameterizations) of the CAM3 atmospheric model and the sensitivity to decreasing the horizontal grid size of the GCM in an MMF version of the same model. Generally speaking, climate quantities related to clouds, precipitation, and radiative fluxes are more sensitive to the treatment of subgrid-scale processes than to GCM grid size. Simulated top-of-atmosphere cloud radiative forcings and related radiative fluxes are substantially improved in the MMF simulations; aspects of simulated precipitation and the simulated MJO are also substantially improved. In both the MMF and parameterized simulations, the large-scale climate shows less sensitivity to GCM resolution than has been seen in some other models, particularly the NCAR CCM3, a predecessor to CAM3. However, comparison to published simulations using CAM3 with Eulerian spectral dynamics indicates that that configuration of CAM3 has very similar sensitivities to horizontal resolution as the Finite Volume dynamics version used here; comparison to MMF simulations with different GCM grid sizes confirms that parameterized physics influences resolution sensitivity more than the dynamical formulation does. Because the MMF is based more closely on first-principles physics than parameterizations are, one need not retune model parameters when the horizontal resolution of the GCM is changed in the MMF.

  14. Global Scale Variation in the Salinity Sensitivity of Riverine Macroinvertebrates: Eastern Australia, France, Israel and South Africa

    PubMed Central

    Kefford, Ben J.; Hickey, Graeme L.; Gasith, Avital; Ben-David, Elad; Dunlop, Jason E.; Palmer, Carolyn G.; Allan, Kaylene; Choy, Satish C.; Piscart, Christophe

    2012-01-01

    Salinity is a key abiotic property of inland waters; it has a major influence on biotic communities and is affected by many natural and anthropogenic processes. Salinity of inland waters tends to increase with aridity, and biota of inland waters may have evolved greater salt tolerance in more arid regions. Here we compare the sensitivity of stream macroinvertebrate species to salinity from a relatively wet region in France (Lorraine and Brittany) to that in three relatively arid regions, eastern Australia (Victoria, Queensland and Tasmania), South Africa (south-east of the Eastern Cape Province) and Israel, using the identical experimental method in all locations. The species whose salinity tolerance was tested were somewhat more salt tolerant in eastern Australia and South Africa than in France, with those in Israel being intermediate. However, by far the greatest source of variation in species sensitivity was between taxonomic groups (Order and Class) and not between the regions. We used a Bayesian statistical model to estimate the species sensitivity distributions (SSDs) for salinity in eastern Australia and France, adjusting for the assemblages of species in these regions. The assemblage in France was slightly more salinity sensitive than that in eastern Australia. We therefore suggest that regional salinity sensitivity is likely to depend most on the taxonomic composition of the respective macroinvertebrate assemblages. On this basis it would be possible to screen rivers globally for risk from salinisation. PMID:22567097

  15. Assessing the sensitivity of coral reef condition indicators to local and global stressors with Bayesian networks

    EPA Science Inventory

    Coral reefs are highly valued ecosystems that are currently imperiled. Although the value of coral reefs to human societies is only just being investigated and better understood, for many local and global economies coral reefs are important providers of ecosystem services that su...

  16. Sensitivity of Simulated Global Ocean Carbon Flux Estimates to Forcing by Reanalysis Products

    NASA Technical Reports Server (NTRS)

    Gregg, Watson W.; Casey, Nancy W.; Rousseaux, Cecile S.

    2015-01-01

    Reanalysis products from MERRA, NCEP2, NCEP1, and ECMWF were used to force an established ocean biogeochemical model to estimate air-sea carbon fluxes (FCO2) and partial pressure of carbon dioxide (pCO2) in the global oceans. Global air-sea carbon fluxes and pCO2 were relatively insensitive to the choice of forcing reanalysis. All global FCO2 estimates from the model forced by the four different reanalyses were within 20% of in situ estimates (MERRA and NCEP1 were within 7%), and all models exhibited statistically significant positive correlations with in situ estimates across the 12 major oceanographic basins. Global pCO2 estimates were within 1% of in situ estimates with ECMWF being the outlier at 0.6%. Basin correlations were similar to FCO2. There were, however, substantial departures among basin estimates from the different reanalysis forcings. The high latitudes and tropics had the largest ranges in estimated fluxes among the reanalyses. Regional pCO2 differences among the reanalysis forcings were muted relative to the FCO2 results. No individual reanalysis was uniformly better or worse in the major oceanographic basins. The results provide information on the characterization of uncertainty in ocean carbon models due to choice of reanalysis forcing.

  17. Sensitivity analysis of fine sediment models using heterogeneous data

    NASA Astrophysics Data System (ADS)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition has significant implication on morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important in drawing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities do affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. Accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important in improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  18. Species sensitivity analysis of heavy metals to freshwater organisms.

    PubMed

    Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou

    2015-10-01

    Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distributions (SSD) curves of vertebrate and invertebrate were constructed by log-logistic model separately. The comprehensive comparisons of the sensitivities of different trophic species to six typical heavy metals were performed. The results indicated invertebrate taxa to each heavy metal exhibited higher sensitivity than vertebrates. However, with respect to the same taxa species, Cu had the most adverse effect on vertebrate, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrate and fish were complicated as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curves of Pb constructed by total species no longer crossed with others. The hazardous concentrations for 5 % of the species (HC5) affected were derived to determine the concentration protecting 95 % of species. The HC5 values of the six heavy metals were in the descending order: Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in opposite order. Moreover, potential affected fractions were calculated to assess the ecological risks of different heavy metals at certain concentrations of the selected heavy metals. Evaluations of sensitivities of the species at various trophic levels and toxicity analysis of heavy metals are necessary prior to derivation of water quality criteria and the further environmental protection.
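
    The species sensitivity distribution workflow described here (fit a log-logistic distribution to species-level toxicity values, read off the 5th percentile as HC5, and evaluate the potentially affected fraction at a given concentration) can be sketched briefly; the toxicity values below are invented for illustration and are not the data collected in the paper:

      import numpy as np
      from scipy import stats

      # Hypothetical acute toxicity values (e.g. LC50 in ug/L) for one metal,
      # one value per species -- invented numbers for illustration.
      lc50 = np.array([12.0, 35.0, 58.0, 90.0, 140.0, 210.0, 400.0, 650.0, 900.0, 1500.0])

      # Log-logistic SSD: fit a logistic distribution to log10-transformed toxicity data.
      loc, scale = stats.logistic.fit(np.log10(lc50))

      # HC5: concentration at which only 5% of species are expected to be affected.
      hc5 = 10 ** stats.logistic.ppf(0.05, loc=loc, scale=scale)
      print("HC5 ~", round(float(hc5), 1), "ug/L")

      # Potentially affected fraction (PAF) at a given environmental concentration.
      conc = 100.0
      paf = stats.logistic.cdf(np.log10(conc), loc=loc, scale=scale)
      print("PAF at", conc, "ug/L:", round(float(paf), 2))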

  19. Sensitivity analysis of the GNSS derived Victoria plate motion

    NASA Astrophysics Data System (ADS)

    Apolinário, João; Fernandes, Rui; Bos, Machiel

    2014-05-01

    estimated trend (Williams 2003, Langbein 2012). Finally, our preferable angular velocity estimation is used to evaluate the consequences on the kinematics of the Victoria block, namely the magnitude and azimuth of the relative motions with respect to the Nubia and Somalia plates and their tectonic implications. References Agnew, D. C. (2013). Realistic simulations of geodetic network data: The Fakenet package, Seismol. Res. Lett., 84 , 426-432, doi:10.1785/0220120185. Blewitt, G. & Lavallee, D., (2002). Effect of annual signals on geodetic velocity, J. geophys. Res., 107(B7), doi:10.1029/2001JB000570. Bos, M.S., R.M.S. Fernandes, S. Williams, L. Bastos (2012) Fast Error Analysis of Continuous GNSS Observations with Missing Data, Journal of Geodesy, doi: 10.1007/s00190-012-0605-0. Bos, M.S., L. Bastos, R.M.S. Fernandes, (2009). The influence of seasonal signals on the estimation of the tectonic motion in short continuous GPS time-series, J. of Geodynamics, j.jog.2009.10.005. Fernandes, R.M.S., J. M. Miranda, D. Delvaux, D. S. Stamps and E. Saria (2013). Re-evaluation of the kinematics of Victoria Block using continuous GNSS data, Geophysical Journal International, doi:10.1093/gji/ggs071. Langbein, J. (2012). Estimating rate uncertainty with maximum likelihood: differences between power-law and flicker-random-walk models, Journal of Geodesy, Volume 86, Issue 9, pp 775-783, Williams, S. D. P. (2003). Offsets in Global Positioning System time series, J. Geophys. Res., 108, 2310, doi:10.1029/2002JB002156, B6.

  20. [Global Atmospheric Chemistry/Transport Modeling and Data-Analysis

    NASA Technical Reports Server (NTRS)

    Prinn, Ronald G.

    1999-01-01

    This grant supported a global atmospheric chemistry/transport modeling and data- analysis project devoted to: (a) development, testing, and refining of inverse methods for determining regional and global transient source and sink strengths for trace gases; (b) utilization of these inverse methods which use either the Model for Atmospheric Chemistry and Transport (MATCH) which is based on analyzed observed winds or back- trajectories calculated from these same winds for determining regional and global source and sink strengths for long-lived trace gases important in ozone depletion and the greenhouse effect; (c) determination of global (and perhaps regional) average hydroxyl radical concentrations using inverse methods with multiple "titrating" gases; and (d) computation of the lifetimes and spatially resolved destruction rates of trace gases using 3D models. Important ultimate goals included determination of regional source strengths of important biogenic/anthropogenic trace gases and also of halocarbons restricted by the Montreal Protocol and its follow-on agreements, and hydrohalocarbons now used as alternatives to the above restricted halocarbons.

  1. An interactive system for analysis of global cloud imagery

    NASA Technical Reports Server (NTRS)

    Woodberry, Karen; Tanaka, Ken; Hendon, Harry; Salby, Murry

    1991-01-01

    Synoptic images of the global cloud pattern composited from six contemporaneous satellites provide an unprecedented view of the global cloud field. Having horizontal resolution of about 0.5 deg and temporal resolution of 3 h, the global cloud imagery (GCI) resolves most of the variability of organized convection, including several harmonics of the diurnal cycle. Although the GCI has these attractive features, the dense and 3D nature of that data make it a formidable volume of information to treat in a practical and efficient manner. An interactive image-analysis system (IAS) has been developed to investigate the space-time variability of global cloud behavior. In the IAS, data, hardware, and software are integrated into a single system providing a variety of space-time covariance analyses in a menu-driven format. Owing to its customized architecture and certain homogeneous properties of the GCI, the IAS calculates such quantities effectively. Many covariance statistics are derived from 3D data with interactive speed, allowing the user to interrogate the archive iteratively in a single session. The 3D nature of those analyses and the speed with which they are performed distinguish the IAS from conventional image processing of 2D data.

  2. GLobal Ocean Data Analysis Project (GLODAP): Data and Analyses

    DOE Data Explorer

    Sabine, C. L.; Key, R. M.; Feely, R. A.; Bullister, J. L.; Millero, F. J.; Wanninkhof, R.; Peng, T. H.; Kozyr, A.

    The GLobal Ocean Data Analysis Project (GLODAP) is a cooperative effort to coordinate global synthesis projects funded through NOAA, DOE, and NSF as part of the Joint Global Ocean Flux Study - Synthesis and Modeling Project (JGOFS-SMP). Cruises conducted as part of the World Ocean Circulation Experiment (WOCE), JGOFS, and the NOAA Ocean-Atmosphere Exchange Study (OACES) over the decade of the 1990s have created an important oceanographic database for the scientific community investigating carbon cycling in the oceans. The unified data help to determine the global distributions of both natural and anthropogenic inorganic carbon, including radiocarbon. These estimates provide an important benchmark against which future observational studies will be compared. They also provide tools for the direct evaluation of numerical ocean carbon models. GLODAP information available through CDIAC includes gridded and bottle data, a live server, an interactive atlas that provides access to data plots, and other tools for viewing and interacting with the data. [from http://cdiac.esd.ornl.gov/oceans/glodap/Glopintrod.htm](Specialized Interface)

  3. Life cycle assessment on biogas production from straw and its sensitivity analysis.

    PubMed

    Wang, Qiao-Li; Li, Wei; Gao, Xiang; Li, Su-Jing

    2016-02-01

    This study aims to investigate the overall environmental impacts and Global Warming Potentials (GWPs) of the straw-based biogas production process via a cradle-to-gate life cycle assessment (LCA) technique. Eco-indicator 99 (H) and IPCC 2007 GWP with three time horizons are utilized. The results indicate that the biogas production process has a beneficial overall environmental effect but is harmful in terms of GWPs. Its harmful effects on GWPs strengthen with time. Usage of gas-fired power which burns the self-produced natural gas (NG) can create a more sustainable process. Moreover, the sensitivity analysis indicated that total electricity consumption and the CO2 absorbent in the purification unit are the parameters to which the environmental impacts are most sensitive. Hence, more effort should be devoted to more efficient use of electricity and wiser selection of the CO2 absorbent.

  4. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses and have a significant influence on the reliability and lightweight design of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established, which accounts for strength failure. Finally, the effects of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints are analyzed by the global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α.

  5. Mechanical Performance and Parameter Sensitivity Analysis of 3D Braided Composites Joints

    PubMed Central

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses and have a significant influence on the reliability and lightweight design of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established, which accounts for strength failure. Finally, the effects of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints are analyzed by the global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α. PMID:25121121

  6. Sensitivity-analysis techniques: self-teaching curriculum

    SciTech Connect

    Iman, R.L.; Conover, W.J.

    1982-06-01

    This self teaching curriculum on sensitivity analysis techniques consists of three parts: (1) Use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) Use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide) SAND79-1472, January 1980); and (3) Application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis; SAND80-0644, NUREG/CR-1377, June 1980: Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC funded High Level Waste Methodology Development Program.
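
    Part (1) of the curriculum, Latin hypercube sampling, is straightforward to sketch: each parameter's range is split into N equal-probability strata, one value is drawn per stratum, and the columns are shuffled independently before pairing. The implementation below is a generic illustration, not the SAND79-1473 program:

      import numpy as np

      def latin_hypercube(n_samples, n_params, seed=None):
          """Return an (n_samples, n_params) array of LHS points on [0, 1)."""
          rng = np.random.default_rng(seed)
          # One uniform draw inside each of the n_samples equal-probability strata.
          strata = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_params))) / n_samples
          # Shuffle each column independently so strata are paired at random across parameters.
          for j in range(n_params):
              strata[:, j] = rng.permutation(strata[:, j])
          return strata

      u = latin_hypercube(10, 3, seed=0)
      # Map the unit-cube sample onto physical parameter ranges (illustrative bounds).
      lo = np.array([2.0, 0.1, 1e-3])
      hi = np.array([8.0, 0.5, 1e-1])
      x = lo + u * (hi - lo)
      print(x.round(4))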

  7. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
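
    The core computation in a code of this kind, integrating a kinetics ODE together with its sensitivity equations with respect to a rate coefficient using a stiff solver, can be sketched with SciPy; the single reaction A -> B below is a toy problem with a made-up rate constant, not an LSENS model:

      import numpy as np
      from scipy.integrate import solve_ivp

      k = 50.0          # rate coefficient of the toy reaction A -> B (made-up value)
      a0 = 1.0          # initial concentration of A

      def rhs(t, z):
          # z = [a, s], where a = [A](t) and s = d[A]/dk (forward sensitivity).
          a, s = z
          dadt = -k * a
          dsdt = -k * s - a     # sensitivity equation: ds/dt = (df/da) s + df/dk
          return [dadt, dsdt]

      sol = solve_ivp(rhs, (0.0, 0.2), [a0, 0.0], method="BDF", rtol=1e-8, atol=1e-10)
      a_end, s_end = sol.y[:, -1]
      print("[A](t=0.2)       =", a_end)
      print("d[A]/dk (solver) =", s_end)
      print("d[A]/dk (exact)  =", -0.2 * a0 * np.exp(-k * 0.2))   # since [A](t) = a0 exp(-k t)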

  8. An analytic method for sensitivity analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu

    2017-03-01

    Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in the inputs and identifying which inputs are important in contributing to the prediction imprecision. Determining the uncertainty in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which can exactly evaluate the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first derived for general multivariate models with a given relationship between output and inputs in terms of a Taylor series expansion. A γ-order relative uncertainty for the output, denoted by Rvγ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering only the first-order contribution from the variance of the input variables can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. The analytic formula is applied to power grid and economic systems, where the sensitivities of the actual power output and Economic Order Quantity models are analyzed. The importance of each input variable to the model output is quantified by the analytic formula.
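
    The point being made, that the familiar first-order formula Var(y) ≈ Σ (∂y/∂x_i)² Var(x_i) is adequate only for small input variances or nearly linear models, is easy to reproduce numerically; the nonlinear function below is a placeholder, not the power grid or Economic Order Quantity model from the paper:

      import numpy as np

      rng = np.random.default_rng(11)

      def f(x1, x2):
          # Placeholder nonlinear input-output relation.
          return np.exp(0.5 * x1) * np.sin(x2) + x1 * x2

      mu = np.array([1.0, 0.8])           # input means
      for sigma in (0.05, 0.5):           # small versus large input standard deviation
          # First-order (linearized) variance estimate from numerical derivatives.
          h = 1e-6
          g = np.array([
              (f(mu[0] + h, mu[1]) - f(mu[0] - h, mu[1])) / (2 * h),
              (f(mu[0], mu[1] + h) - f(mu[0], mu[1] - h)) / (2 * h),
          ])
          var_lin = np.sum(g**2) * sigma**2

          # Reference value from brute-force Monte Carlo with independent normal inputs.
          x = mu + sigma * rng.standard_normal((200000, 2))
          var_mc = f(x[:, 0], x[:, 1]).var()
          print(f"sigma={sigma}: linearized var={var_lin:.4f}, Monte Carlo var={var_mc:.4f}")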

  9. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have become steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus providing the possibility of enabling delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure including an example analysis of a data set is described, illustrating this effect.

  10. Global existence and boundedness of radial solutions to a two dimensional fully parabolic chemotaxis system with general sensitivity

    NASA Astrophysics Data System (ADS)

    Fujie, Kentarou; Senba, Takasi

    2016-08-01

    This paper deals with positive radially symmetric solutions of the Neumann boundary value problem for the fully parabolic chemotaxis system $u_t = \Delta u - \nabla\cdot(u\nabla\chi(v))$, $\tau v_t = \Delta v - v + u$ in $\Omega\times(0,\infty)$, in a ball $\Omega \subset \mathbb{R}^2$, with a general sensitivity function $\chi(v)$ satisfying $\chi' > 0$ and the decay property $\chi'(s) \to 0$ as $s \to \infty$, parameter $\tau \in (0,1]$, and nonnegative radially symmetric initial data. It is shown that if $\tau \in (0,1]$ is sufficiently small, then the problem has a unique classical radially symmetric solution, which exists globally and remains uniformly bounded in time. In particular, this result establishes global existence of solutions in the case $\chi(v) = \chi_0 \log v$ for all $\chi_0 > 0$, which had been left as an open problem.

  11. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    SciTech Connect

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  12. Multiplexed analysis of chromosome conformation at vastly improved sensitivity

    PubMed Central

    Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.

    2015-01-01

    Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput, and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method to produce a new approach, called next-generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209

  13. Numerical Sensitivity Analysis of a Composite Impact Absorber

    NASA Astrophysics Data System (ADS)

    Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.

    2008-08-01

    This work deals with a numerical investigation of the energy-absorbing capability of structural composite components. Several difficulties are associated with the numerical simulation of a composite impact absorber, such as high geometrical non-linearities, boundary contact conditions, failure criteria and material behaviour; all of these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, one of the main objectives of any numerical investigation. The latter aspect is particularly important for designers, in order to make the application of the model to real cases robust from both a physical and a numerical point of view. First, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was performed; a sensitivity analysis with respect to the variation of the main geometrical and material parameters was then developed, using explicit finite element algorithms implemented in the LS-DYNA code.

  14. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C; Tommy Edwards, T; Bill Wilmarth, B

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model predictions over intervals of values for the influential factors was conducted; these intervals bound the levels of the factors expected during Tank 50 aggregations. The results of the sensitivity analysis were used to identify settings of the influential factors that yielded the largest predicted TPB degradation rate. These factor settings are thus considered to yield the "worst-case" scenario for the TPB degradation rate for Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.

  15. Global health initiative investments and health systems strengthening: a content analysis of global fund investments

    PubMed Central

    2013-01-01

    Background Millions of dollars are invested annually under the umbrella of national health systems strengthening. Global health initiatives provide funding for low- and middle-income countries through disease-oriented programmes while maintaining that the interventions simultaneously strengthen systems. However, it is as yet unclear which system-level interventions, and to what extent, are being funded by these initiatives, nor is it clear how much funding they allocate to disease-specific activities through the conventional ‘vertical-programming’ approach. Such funding can be channelled to one or more of the health system building blocks while targeting disease(s), or explicitly to system-wide activities. Methods We operationalized the World Health Organization health system framework of the six building blocks to conduct a detailed assessment of Global Fund health system investments. Our application of this framework provides a comprehensive quantification of system-level interventions. We applied it systematically to a random subset of 52 of the 139 grants funded in Round 8 of the Global Fund to Fight AIDS, Tuberculosis and Malaria (totalling approximately US$1 billion). Results According to the analysis, 37% (US$ 362 million) of the Global Fund Round 8 funding was allocated to health systems strengthening. Of that, 38% (US$ 139 million) was for generic system-level interventions, rather than disease-specific system support. Around 82% of health systems strengthening funding (US$ 296 million) was allocated to service delivery, human resources, and medicines & technology, and within each of these to two to three interventions. The governance, financing, and information building blocks received relatively little funding. Conclusions This study shows that a substantial portion of the Global Fund’s Round 8 funds was devoted to health systems strengthening. Dramatic skewing among the health system building blocks suggests opportunities for more balanced

  16. Global trend analysis of the MODIS drought severity index

    NASA Astrophysics Data System (ADS)

    Orvos, P. I.; Homonnai, V.; Várai, A.; Bozóki, Z.; Jánosi, I. M.

    2015-10-01

    Recently, Mu et al. (2013) compiled an open access database of a remotely sensed global drought severity index (DSI) based on MODIS (Moderate Resolution Imaging Spectroradiometer) satellite measurements covering a continuous period of 12 years. The highest spatial resolution is 0.05° × 0.05° in the geographic band between 60° S and 80° N latitudes (more than 4.9 million locations over land). Here we present a global trend analysis of these satellite-based DSI time series in order to identify geographic locations where either positive or negative trends are statistically significant. Our main result is that 17.34 % of land areas exhibit significant trends of drying or wetting, and these sites constitute geographically connected regions. Since a DSI value conveys local characterization at a given site, we argue that usual field significance tests cannot provide more information about the observations than the presented analysis. The relatively short period of 12 years hinders linking the trends to global climate change; however, we think that the observations might be related to slow (decadal) modes of natural climate variability or anthropogenic impacts.
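
    The paper's exact test statistic is not given here; as a hedged sketch of a per-site trend significance screen, the following uses an ordinary least-squares slope t-test on synthetic DSI-like series:

      # Sketch: flag sites whose 12-year DSI series show a statistically significant
      # linear trend (two-sided t-test on the OLS slope, alpha = 0.05).
      # Synthetic data; the paper's actual test may differ.
      import numpy as np
      from scipy.stats import linregress

      rng = np.random.default_rng(42)
      n_sites, n_months = 1000, 144            # 12 years of monthly DSI values
      t = np.arange(n_months)
      slopes = rng.normal(0.0, 2e-3, n_sites)  # a few sites drift, most do not
      dsi = slopes[:, None] * t + rng.normal(0.0, 0.5, (n_sites, n_months))

      significant = 0
      for series in dsi:
          res = linregress(t, series)
          if res.pvalue < 0.05:
              significant += 1
      print(f"{100 * significant / n_sites:.1f}% of sites show a significant trend")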

  17. Model Evaluation and Sensitivity Studies for Determining Aircraft Effects on the Global Atmosphere

    NASA Technical Reports Server (NTRS)

    Wuebbles, Donald J.

    1997-01-01

    This project, which started in July 1995 and ended in July 1996, addressed: evaluation of the possible importance of soot and sulfur dioxide emissions from subsonic and supersonic aircraft; research contributions and special responsibilities for NASA AEAP assessments of subsonic aircraft and High Speed Civil Transport aircraft; and science team responsibilities supporting the development of the three-dimensional atmospheric chemistry model of the Global Modeling Initiative.

  18. Vegetation sensitivity to global anthropogenic carbon dioxide emissions in a topographically complex region

    USGS Publications Warehouse

    Diffenbaugh, N.S.; Sloan, L.C.; Snyder, M.A.; Bell, J.L.; Kaplan, J.; Shafer, S.L.; Bartlein, P.J.

    2003-01-01

    Anthropogenic increases in atmospheric carbon dioxide (CO2) concentrations may affect vegetation distribution both directly through changes in photosynthesis and water-use efficiency, and indirectly through CO2-induced climate change. Using an equilibrium vegetation model (BIOME4) driven by a regional climate model (RegCM2.5), we tested the sensitivity of vegetation in the western United States, a topographically complex region, to the direct, indirect, and combined effects of doubled preindustrial atmospheric CO2 concentrations. Those sensitivities were quantified using the kappa statistic. Simulated vegetation in the western United States was sensitive to changes in atmospheric CO2 concentrations, with woody biome types replacing less woody types throughout the domain. The simulated vegetation was also sensitive to climatic effects, particularly at high elevations, due to both warming throughout the domain and decreased precipitation in key mountain regions such as the Sierra Nevada of California and the Cascade and Blue Mountains of Oregon. Significantly, when the direct effects of CO2 on vegetation were tested in combination with the indirect effects of CO2-induced climate change, new vegetation patterns were created that were not seen in either of the individual cases. This result indicates that climatic and nonclimatic effects must be considered in tandem when assessing the potential impacts of elevated CO2 levels.

  19. Sensitive landscape features for detecting biotic effects of global change. Final report

    SciTech Connect

    Ferson, S.; Kurtz, C.; Slice, D.

    1995-10-01

    Although several global climate models have forecast dramatic changes in future climatological conditions, very little can be predicted with any confidence about the effects on the earth's vegetation from such environmental changes. Therefore some means is needed by which to monitor the biotic effects of global change, especially at its early stages. Ecotones, the transitional zones between larger, more compositionally well-defined biological communities, may be useful structures for monitoring the effects of climatic and other environmental impacts due to global as well as local perturbations. However, theoretical consideration of the ecological processes that determine the location and form of these structures suggests that ecotones that are sharp and therefore obvious to observers may be relatively insensitive to the types of environmental changes they might be asked to detect. It is necessary, therefore, to develop methods to identify ecotones according to the processes that generate them so that their usefulness in a particular environmental monitoring program can be assessed. This report summarizes the development of analytical methods for the detection, localization and characterization of these potentially important landscape features.

  20. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    PubMed Central

    Vesselinova, Neda; Wall, Michael E.

    2016-01-01

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability. PMID:27824914

  1. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    SciTech Connect

    M. Wasiolek

    2004-10-15

    This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. The analysis concerns the output of the model, the biosphere dose conversion factors (BDCFs), for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes for the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  2. Sensitivity analysis of Repast computational ecology models with R/Repast.

    PubMed

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating some observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on the simulation output, and it should be incorporated into every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.

  3. Analysis of the temporal dynamics of model performance and parameter sensitivity for hydrological models

    NASA Astrophysics Data System (ADS)

    Reusser, D.; Zehe, E.

    2009-04-01

    The temporal dynamics of hydrological model performance give insight into errors that cannot be obtained from global performance measures, which assign a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at high temporal resolution implies analyzing and interpreting a high-dimensional data set. We present a method for such a hydrological model performance assessment with high temporal resolution. Information about possibly relevant processes during times with distinct model performance is obtained from parameter sensitivity analysis, also with high temporal resolution. We illustrate the combined approach of temporally resolved model performance and parameter sensitivity for a rainfall-runoff modeling case study. The headwater catchment of the Wilde Weisseritz in the eastern Ore Mountains is simulated with the conceptual model WaSiM-ETH. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key to the developed approach is a data-reduction method, based on self-organizing maps (SOMs) and cluster analysis, to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The temporally resolved sensitivity analysis is based on the FAST algorithm. The final outcome of the proposed method is a time series of the occurrence of dominant error types as well as a time series of the relative parameter sensitivity. For the two case studies analyzed here, six error types have been identified. They show clear temporal patterns that can lead to the identification of model structural errors. The parameter sensitivity helps to identify the relevant model parts.
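
    A minimal sketch of the moving-window idea, computing a time-resolved Nash-Sutcliffe efficiency on synthetic discharge series (the study's full SOM, cluster and FAST machinery is not reproduced):

      # Sketch: time-resolved model performance via a moving-window Nash-Sutcliffe
      # efficiency (NSE).  Synthetic discharge series stand in for WaSiM-ETH output.
      import numpy as np

      def nse(obs, sim):
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      rng = np.random.default_rng(1)
      t = np.arange(2000)
      obs = 5 + 3 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 0.5, t.size)
      sim = obs + rng.normal(0, 0.3, t.size)
      sim[800:900] += 2.0                       # deliberate error period

      window = 60
      nse_series = np.array([nse(obs[i:i + window], sim[i:i + window])
                             for i in range(t.size - window)])
      worst = nse_series.argmin()
      print(f"worst window starts at t = {worst} (NSE = {nse_series[worst]:.2f})")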

  4. Sensitivity Analysis of OECD Benchmark Tests in BISON

    SciTech Connect

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
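
    A sketch of the correlation-based part of such a study, with synthetic inputs and a hypothetical response standing in for the BISON/Dakota samples (the real study used 17 inputs and 24 responses):

      # Sketch: correlation-based sensitivity ranking, as in a Dakota-style study.
      # Synthetic inputs and a hypothetical response; not the BISON benchmark data.
      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(0)
      n = 300
      inputs = {
          "gap_conductance": rng.uniform(0.8, 1.2, n),
          "fuel_conductivity": rng.uniform(0.9, 1.1, n),
          "linear_power": rng.uniform(0.95, 1.05, n),
      }
      # Hypothetical response: centerline temperature rises with power, falls with
      # conductivity, and depends weakly (nonlinearly) on gap conductance.
      temp = (1000 * inputs["linear_power"]
              - 200 * inputs["fuel_conductivity"]
              + 50 / inputs["gap_conductance"]
              + rng.normal(0, 5, n))

      for name, x in inputs.items():
          r, _ = pearsonr(x, temp)
          rho, _ = spearmanr(x, temp)
          print(f"{name:18s}  Pearson={r:+.2f}  Spearman={rho:+.2f}")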

  5. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed on the backward-facing step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs that control the blood flow to a significant extent. The paper is part of a broader work on the identification of parameters controlling the clotting process.
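
    A simplified sketch of Morris elementary-effects screening (randomized one-at-a-time steps rather than full trajectories), with an assumed stand-in for the CFD output and illustrative parameter ranges:

      # Sketch: Morris elementary-effects screening (simplified randomized OAT design,
      # not the full trajectory scheme).  Model and parameter ranges are illustrative.
      import numpy as np

      def model(x):
          # Stand-in for the CFD output: yield stress and plastic viscosity dominate,
          # the third parameter is nearly inert.
          tau_y, mu_p, k = x
          return 2.0 * tau_y + 4.0e3 * mu_p + 0.05 * k

      bounds = np.array([[0.0, 20.0],    # yield stress tau_y (Pa)
                         [2e-3, 8e-3],   # plastic viscosity mu_p (Pa*s)
                         [0.0, 1.0]])    # nuisance parameter k
      names = ["tau_y", "mu_p", "k"]
      r, delta = 50, 0.25                # repetitions and step size (unit cube)
      rng = np.random.default_rng(3)

      effects = np.zeros((r, len(names)))
      for j in range(r):
          base = rng.uniform(0.0, 1.0 - delta, len(names))
          f0 = model(bounds[:, 0] + base * (bounds[:, 1] - bounds[:, 0]))
          for i in range(len(names)):
              xp = base.copy()
              xp[i] += delta
              fi = model(bounds[:, 0] + xp * (bounds[:, 1] - bounds[:, 0]))
              effects[j, i] = (fi - f0) / delta

      mu_star = np.abs(effects).mean(axis=0)   # mean absolute elementary effect
      sigma = effects.std(axis=0)              # spread (interaction/nonlinearity)
      for name, m, s in zip(names, mu_star, sigma):
          print(f"{name:6s}  mu* = {m:7.3f}   sigma = {s:7.3f}")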

  6. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  7. Path-sensitive analysis for reducing rollback overheads

    DOEpatents

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets, and stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.

  8. A global low order spectral model designed for climate sensitivity studies

    NASA Technical Reports Server (NTRS)

    Hanna, A. F.; Stevens, D. E.

    1984-01-01

    A two-level, global, spectral model using pressure as the vertical coordinate is developed. The system of equations describing the model is nonlinear and quasi-geostrophic. A moisture budget is calculated in the lower layer only, with moist convective adjustment between the two layers. The mechanical forcing of topography is introduced as a lower-boundary vertical velocity. Solar forcing is specified assuming a daily mean zenith angle. On land and sea-ice surfaces a steady-state thermal energy equation is solved to calculate the surface temperature. Over the oceans the sea surface temperatures are prescribed from the climatological average for January. The model is integrated to simulate the January climate.

  9. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    SciTech Connect

    HOBBS,MICHAEL L.; ROBINSON,DAVID G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
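
    A sketch of the general procedure, central-difference derivatives combined into a first-order standard deviation, with a cheap analytic function standing in for the finite-element burn model and assumed input uncertainties:

      # Sketch of the general procedure: central-difference derivatives of a response
      # with respect to each input, combined into a first-order standard deviation.
      # A cheap analytic function stands in for the finite-element burn model.
      import numpy as np

      def burn_velocity(p):
          # Illustrative stand-in: emissivity (p[0]) dominates, the others are weaker.
          return 0.8 * p[0] ** 2 + 0.1 * p[1] + 0.02 * np.log(p[2])

      nominal = np.array([0.9, 1.0, 2.0])          # emissivity, conductivity, density
      std_dev = np.array([0.05, 0.10, 0.20])       # assumed input standard deviations
      names = ["emissivity", "conductivity", "density"]

      derivs = np.zeros_like(nominal)
      for i in range(nominal.size):
          h = 1e-4 * max(abs(nominal[i]), 1.0)     # step small, but above noise floor
          lo, hi = nominal.copy(), nominal.copy()
          lo[i] -= h
          hi[i] += h
          derivs[i] = (burn_velocity(hi) - burn_velocity(lo)) / (2 * h)

      contrib = (derivs * std_dev) ** 2
      sigma_v = np.sqrt(contrib.sum())
      print(f"first-order std of burn velocity: {sigma_v:.4f}")
      for n, c in zip(names, contrib):
          print(f"  {n:12s} contributes {100 * c / contrib.sum():5.1f}% of the variance")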

  10. Drivers of Wetland Conversion: a Global Meta-Analysis

    PubMed Central

    van Asselen, Sanneke; Verburg, Peter H.; Vermaat, Jan E.; Janse, Jan H.

    2013-01-01

    Meta-analysis of case studies has become an important tool for synthesizing case study findings in land change. Meta-analyses of deforestation, urbanization, desertification and change in shifting cultivation systems have been published. This present study adds to this literature, with an analysis of the proximate causes and underlying forces of wetland conversion at a global scale using two complementary approaches of systematic review. Firstly, a meta-analysis of 105 case-study papers describing wetland conversion was performed, showing that different combinations of multiple-factor proximate causes, and underlying forces, drive wetland conversion. Agricultural development has been the main proximate cause of wetland conversion, and economic growth and population density are the most frequently identified underlying forces. Secondly, to add a more quantitative component to the study, a logistic meta-regression analysis was performed to estimate the likelihood of wetland conversion worldwide, using globally-consistent biophysical and socioeconomic location factor maps. Significant factors explaining wetland conversion, in order of importance, are market influence, total wetland area (lower conversion probability), mean annual temperature and cropland or built-up area. The regression analyses results support the outcomes of the meta-analysis of the processes of conversion mentioned in the individual case studies. In other meta-analyses of land change, similar factors (e.g., agricultural development, population growth, market/economic factors) are also identified as important causes of various types of land change (e.g., deforestation, desertification). Meta-analysis helps to identify commonalities across the various local case studies and identify which variables may lead to individual cases to behave differently. The meta-regression provides maps indicating the likelihood of wetland conversion worldwide based on the location factors that have determined historic

  11. Drivers of wetland conversion: a global meta-analysis.

    PubMed

    van Asselen, Sanneke; Verburg, Peter H; Vermaat, Jan E; Janse, Jan H

    2013-01-01

    Meta-analysis of case studies has become an important tool for synthesizing case study findings in land change. Meta-analyses of deforestation, urbanization, desertification and change in shifting cultivation systems have been published. This present study adds to this literature, with an analysis of the proximate causes and underlying forces of wetland conversion at a global scale using two complementary approaches of systematic review. Firstly, a meta-analysis of 105 case-study papers describing wetland conversion was performed, showing that different combinations of multiple-factor proximate causes, and underlying forces, drive wetland conversion. Agricultural development has been the main proximate cause of wetland conversion, and economic growth and population density are the most frequently identified underlying forces. Secondly, to add a more quantitative component to the study, a logistic meta-regression analysis was performed to estimate the likelihood of wetland conversion worldwide, using globally-consistent biophysical and socioeconomic location factor maps. Significant factors explaining wetland conversion, in order of importance, are market influence, total wetland area (lower conversion probability), mean annual temperature and cropland or built-up area. The regression analyses results support the outcomes of the meta-analysis of the processes of conversion mentioned in the individual case studies. In other meta-analyses of land change, similar factors (e.g., agricultural development, population growth, market/economic factors) are also identified as important causes of various types of land change (e.g., deforestation, desertification). Meta-analysis helps to identify commonalities across the various local case studies and identify which variables may lead to individual cases to behave differently. The meta-regression provides maps indicating the likelihood of wetland conversion worldwide based on the location factors that have determined historic

  12. Tree cover in Central Africa: determinants and sensitivity under contrasted scenarios of global change

    NASA Astrophysics Data System (ADS)

    Aleman, Julie C.; Blarquez, Olivier; Gourlet-Fleury, Sylvie; Bremond, Laurent; Favier, Charly

    2017-01-01

    Tree cover is a key variable for ecosystem functioning, and is widely used to study tropical ecosystems. But its determinants and their relative importance are still a matter of debate, especially because most regional and global analyses have not considered the influence of agricultural practices. More information is urgently needed regarding how human practices influence vegetation structure. Here we focused on Central Africa, a region still subjected to traditional agricultural practices with a clear vegetation gradient. Using remote sensing data and global databases, we calibrated a Random Forest model to correlatively link tree cover with climatic, edaphic, fire and agricultural practices data. We showed that annual rainfall and accumulated water deficit were the main drivers of the distribution of tree cover and vegetation classes (defined by the modes of tree cover density), but agricultural practices, especially pastoralism, were also important in determining tree cover. We simulated future tree cover with our model using different scenarios of climate and land-use (agriculture and population) changes. Our simulations suggest that tree cover may respond differently depending on the type of scenario, but land-use change was an important driver of vegetation change, even able to counterbalance the effect of climate change in Central Africa.
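
    A sketch of the calibration step under assumed synthetic data: a Random Forest links tree cover to climate and land-use predictors, and impurity-based importances give a rough ranking of drivers (the paper's actual predictors and datasets are not used here):

      # Sketch: a Random Forest linking tree cover to climate and land-use predictors,
      # with impurity-based importances.  Synthetic data, not the paper's datasets.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(7)
      n = 5000
      X = np.column_stack([
          rng.uniform(400, 2000, n),   # annual rainfall (mm)
          rng.uniform(0, 800, n),      # accumulated water deficit (mm)
          rng.uniform(0, 1, n),        # pastoralism intensity (index)
          rng.uniform(0, 1, n),        # fire frequency (index)
      ])
      # Hypothetical response: tree cover increases with rainfall, decreases with
      # water deficit and pastoral pressure.
      cover = np.clip(0.05 * X[:, 0] - 0.08 * X[:, 1] - 30 * X[:, 2]
                      + rng.normal(0, 5, n), 0, 100)

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, cover)
      for name, imp in zip(["rainfall", "water_deficit", "pastoralism", "fire"],
                           rf.feature_importances_):
          print(f"{name:14s} importance = {imp:.2f}")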

  13. Tree cover in Central Africa: determinants and sensitivity under contrasted scenarios of global change.

    PubMed

    Aleman, Julie C; Blarquez, Olivier; Gourlet-Fleury, Sylvie; Bremond, Laurent; Favier, Charly

    2017-01-30

    Tree cover is a key variable for ecosystem functioning, and is widely used to study tropical ecosystems. But its determinants and their relative importance are still a matter of debate, especially because most regional and global analyses have not considered the influence of agricultural practices. More information is urgently needed regarding how human practices influence vegetation structure. Here we focused on Central Africa, a region still subjected to traditional agricultural practices with a clear vegetation gradient. Using remote sensing data and global databases, we calibrated a Random Forest model to correlatively link tree cover with climatic, edaphic, fire and agricultural practices data. We showed that annual rainfall and accumulated water deficit were the main drivers of the distribution of tree cover and vegetation classes (defined by the modes of tree cover density), but agricultural practices, especially pastoralism, were also important in determining tree cover. We simulated future tree cover with our model using different scenarios of climate and land-use (agriculture and population) changes. Our simulations suggest that tree cover may respond differently depending on the type of scenario, but land-use change was an important driver of vegetation change, even able to counterbalance the effect of climate change in Central Africa.

  14. Tree cover in Central Africa: determinants and sensitivity under contrasted scenarios of global change

    PubMed Central

    Aleman, Julie C.; Blarquez, Olivier; Gourlet-Fleury, Sylvie; Bremond, Laurent; Favier, Charly

    2017-01-01

    Tree cover is a key variable for ecosystem functioning, and is widely used to study tropical ecosystems. But its determinants and their relative importance are still a matter of debate, especially because most regional and global analyses have not considered the influence of agricultural practices. More information is urgently needed regarding how human practices influence vegetation structure. Here we focused on Central Africa, a region still subjected to traditional agricultural practices with a clear vegetation gradient. Using remote sensing data and global databases, we calibrated a Random Forest model to correlatively link tree cover with climatic, edaphic, fire and agricultural practices data. We showed that annual rainfall and accumulated water deficit were the main drivers of the distribution of tree cover and vegetation classes (defined by the modes of tree cover density), but agricultural practices, especially pastoralism, were also important in determining tree cover. We simulated future tree cover with our model using different scenarios of climate and land-use (agriculture and population) changes. Our simulations suggest that tree cover may respond differently depending on the type of scenario, but land-use change was an important driver of vegetation change, even able to counterbalance the effect of climate change in Central Africa. PMID:28134259

  15. Dynamic sensitivity analysis of long running landslide models through basis set expansion and meta-modelling

    NASA Astrophysics Data System (ADS)

    Rohmer, Jeremy

    2016-04-01

    Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar, but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them being interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining "basis set expansion - meta-model - Sobol' indices" is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each of the main modes of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high displacements.
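
    A sketch of the basis-set-expansion step under assumed toy data: an ensemble of time-dependent outputs is reduced by PCA, and a simple binning estimator gives a first-order sensitivity index per component (the study itself used a projection pursuit meta-model and Sobol' indices):

      # Sketch: reduce an ensemble of time-dependent outputs with PCA, then estimate
      # a first-order sensitivity index for each retained component by a binning
      # (correlation-ratio) estimator.  Toy displacement model, illustrative only.
      import numpy as np

      rng = np.random.default_rng(11)
      n, nt = 2000, 200
      t = np.linspace(0, 1, nt)
      friction = rng.uniform(0.2, 0.6, n)       # slip-surface friction (illustrative)
      stiffness = rng.uniform(0.5, 1.5, n)      # illustrative second parameter

      # Toy time-dependent displacement: amplitude set by friction, shape by stiffness.
      Y = (1.0 - friction)[:, None] * (t[None, :] ** stiffness[:, None])

      # PCA via SVD of the centered ensemble; keep the first two components.
      Yc = Y - Y.mean(axis=0)
      U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
      scores = U[:, :2] * s[:2]                 # component scores per simulation

      def first_order_index(x, y, bins=20):
          # Var(E[y|x]) / Var(y), estimated by binning x on its quantiles.
          idx = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
          cond_means = np.array([y[idx == b].mean() for b in range(bins)])
          weights = np.array([(idx == b).mean() for b in range(bins)])
          return np.sum(weights * (cond_means - y.mean()) ** 2) / y.var()

      for c in range(2):
          s_f = first_order_index(friction, scores[:, c])
          s_k = first_order_index(stiffness, scores[:, c])
          print(f"component {c + 1}: S_friction = {s_f:.2f}, S_stiffness = {s_k:.2f}")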

  16. Sensitivity of CFC-11 uptake to physical initial conditions and interannually varying surface forcing in a global ocean model

    NASA Astrophysics Data System (ADS)

    Danabasoglu, Gokhan; Peacock, Synte; Lindsay, Keith; Tsumune, Daisuke

    Sensitivity of the oceanic chlorofluorocarbon CFC-11 uptake to physical initial conditions and surface dynamical forcing (heat and salt fluxes and wind stress) is investigated in a global ocean model used in climate studies. Two different initial conditions are used: a solution following a short integration starting with observed temperature and salinity and zero velocities, and the quasi-equilibrium solution of an independent integration. For surface dynamical forcing, recently developed normal-year and interannually varying (1958-2000) data sets are used. The model CFC-11 global and basin inventories, particularly in the normal-year forcing case, are below the observed mean estimates, but they remain within the observational error bars. Column inventory spatial distributions indicate nontrivial differences due to both initial condition and forcing changes, particularly in the northern North Atlantic and Southern Ocean. These differences are larger between forcing sensitivity experiments than between the initial condition cases. The comparisons along the A16N and SR3 WOCE sections also show differences between cases. However, comparisons with observations do not clearly favor a particular case, and model-observation differences remain much larger than model-model differences for all simulations. The choice of initial condition does not significantly change the CFC-11 distributions. Both because of locally large differences between normal-year and interannually varying simulations and because the dynamical and CFC-11 forcing calendars are synchronized, we favor using the more realistic interannually varying forcing in future simulations, given the availability of the forcing data sets.

  17. A proteomic strategy for global analysis of plant protein complexes.

    PubMed

    Aryal, Uma K; Xiong, Yi; McBride, Zachary; Kihara, Daisuke; Xie, Jun; Hall, Mark C; Szymanski, Daniel B

    2014-10-01

    Global analyses of protein complex assembly, composition, and location are needed to fully understand how cells coordinate diverse metabolic, mechanical, and developmental activities. The most common methods for proteome-wide analysis of protein complexes rely on affinity purification-mass spectrometry or yeast two-hybrid approaches. These methods are time consuming and are not suitable for many plant species that are refractory to transformation or genome-wide cloning of open reading frames. Here, we describe the proof of concept for a method allowing simultaneous global analysis of endogenous protein complexes that begins with intact leaves and combines chromatographic separation of extracts from subcellular fractions with quantitative label-free protein abundance profiling by liquid chromatography-coupled mass spectrometry. Applying this approach to the crude cytosolic fraction of Arabidopsis thaliana leaves using size exclusion chromatography, we identified hundreds of cytosolic proteins that appeared to exist as components of stable protein complexes. The reliability of the method was validated by protein immunoblot analysis and comparisons with published size exclusion chromatography data and the masses of known complexes. The method can be implemented with appropriate instrumentation, is applicable to any biological system, and has the potential to be further developed to characterize the composition of protein complexes and measure the dynamics of protein complex localization and assembly under different conditions.

  18. Isotopic ratio outlier analysis global metabolomics of Caenorhabditis elegans.

    PubMed

    Stupp, Gregory S; Clendinen, Chaevien S; Ajredini, Ramadan; Szewc, Mark A; Garrett, Timothy; Menger, Robert F; Yost, Richard A; Beecher, Chris; Edison, Arthur S

    2013-12-17

    We demonstrate the global metabolic analysis of Caenorhabditis elegans stress responses using a mass-spectrometry-based technique called isotopic ratio outlier analysis (IROA). In an IROA protocol, control and experimental samples are isotopically labeled with 95 and 5% (13)C, and the two sample populations are mixed together for uniform extraction, sample preparation, and LC-MS analysis. This labeling strategy provides several advantages over conventional approaches: (1) compounds arising from biosynthesis are easily distinguished from artifacts, (2) errors from sample extraction and preparation are minimized because the control and experiment are combined into a single sample, (3) measurement of both the molecular weight and the exact number of carbon atoms in each molecule provides extremely accurate molecular formulas, and (4) relative concentrations of all metabolites are easily determined. A heat-shock perturbation was conducted on C. elegans to demonstrate this approach. We identified many compounds that significantly changed upon heat shock, including several from the purine metabolism pathway. The metabolomic response information by IROA may be interpreted in the context of a wealth of genetic and proteomic information available for C. elegans. Furthermore, the IROA protocol can be applied to any organism that can be isotopically labeled, making it a powerful new tool in a global metabolomics pipeline.

  19. Isotopic Ratio Outlier Analysis Global Metabolomics of Caenorhabditis elegans

    PubMed Central

    Szewc, Mark A.; Garrett, Timothy; Menger, Robert F.; Yost, Richard A.; Beecher, Chris; Edison, Arthur S.

    2014-01-01

    We demonstrate the global metabolic analysis of Caenorhabditis elegans stress responses using a mass spectrometry-based technique called Isotopic Ratio Outlier Analysis (IROA). In an IROA protocol, control and experimental samples are isotopically labeled with 95% and 5% 13C, and the two sample populations are mixed together for uniform extraction, sample preparation, and LC-MS analysis. This labeling strategy provides several advantages over conventional approaches: 1) compounds arising from biosynthesis are easily distinguished from artifacts, 2) errors from sample extraction and preparation are minimized because the control and experiment are combined into a single sample, 3) measurement of both the molecular weight and the exact number of carbon atoms in each molecule provides extremely accurate molecular formulae, and 4) relative concentrations of all metabolites are easily determined. A heat shock perturbation was conducted on C. elegans to demonstrate this approach. We identified many compounds that significantly changed upon heat shock, including several from the purine metabolism pathway, which we use to demonstrate the approach. The metabolomic response information by IROA may be interpreted in the context of a wealth of genetic and proteomic information available for C. elegans. Furthermore, the IROA protocol can be applied to any organism that can be isotopically labeled, making it a powerful new tool in a global metabolomics pipeline. PMID:24274725

  20. Global analysis of frequency modulation experiments in a vortex oscillator

    NASA Astrophysics Data System (ADS)

    Martin, S. Y.; Thirion, C.; Hoarau, C.; Baraduc, C.; Diény, B.

    2016-02-01

    Frequency modulation is performed on a vortex oscillator at various modulation frequencies and powers. A global analysis of the whole set of data is proposed, so that all experimental curves are described with the same four parameters. Three of these parameters describe the dependence of the instantaneous frequency on the modulating current. This dependence appears significantly different from the frequency-current dependence observed in a quasi-static experiment. The discrepancy is ascribed to the different time scales involved, compared to the relaxation time of the vortex oscillator.

  1. Indian plant germplasm on the global platter: an analysis.

    PubMed

    Jacob, Sherry R; Tyagi, Vandana; Agrawal, Anuradha; Chakrabarty, Shyamal K; Tyagi, Rishi K

    2015-01-01

    , about 50% of the Indian-origin accessions deposited in SGSV are traditional varieties or landraces with defined traits which form the backbone of any crop gene pool. This paper is also attempting to correlate the global data on Indian-origin germplasm with the national germplasm export profile. The analysis from this paper is discussed with the perspective of possible implications in the access and benefit sharing regime of both the International Treaty on Plant Genetic Resources for Food and Agriculture and the newly enforced Nagoya Protocol under the Convention on Biological Diversity.

  2. Indian Plant Germplasm on the Global Platter: An Analysis

    PubMed Central

    Jacob, Sherry R.; Tyagi, Vandana; Agrawal, Anuradha; Chakrabarty, Shyamal K.; Tyagi, Rishi K.

    2015-01-01

    , about 50% of the Indian-origin accessions deposited in SGSV are traditional varieties or landraces with defined traits which form the backbone of any crop gene pool. This paper is also attempting to correlate the global data on Indian-origin germplasm with the national germplasm export profile. The analysis from this paper is discussed with the perspective of possible implications in the access and benefit sharing regime of both the International Treaty on Plant Genetic Resources for Food and Agriculture and the newly enforced Nagoya Protocol under the Convention on Biological Diversity. PMID:25974270

  3. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominant dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  4. Parametric sensitivity analysis for temperature control in outdoor photobioreactors.

    PubMed

    Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando

    2013-09-01

    In this study, a critical analysis of the input parameters of a model describing the broth temperature in flat-plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using a design-of-experiments approach, variation of selected parameters was introduced, and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those of the reactor wall and the shading factor, both related to the direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between the plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.

  5. Soil organic matter pools and quality are sensitive to global climate change in tropical forests from India

    NASA Astrophysics Data System (ADS)

    Mani, Shanmugam; Merino, Agustín; García-Oliva, Felipe; Riotte, Jean; Sukumar, Raman

    2016-04-01

    Soil organic carbon (SOC) storage and quality are among the most important factors determining ecological processes in tropical forests, which are especially sensitive to global climate change (GCC). In India, GCC scenarios project longer drought periods and more wildfire, which may affect SOC and therefore the capacity of forests for C sequestration. The aim of the study was to evaluate the amount and quality of soil C in the mineral soil across a precipitation gradient, together with other factors (vegetation, pH, soil texture and bedrock composition), in order to generate SOC predictions under GCC. Six soil samples (top 10 cm depth) were collected from 19 1-ha permanent plots in the Mudumalai Wildlife Sanctuary of southern India, which are characterised by four types of forest vegetation (i.e. dry thorn, dry deciduous, moist deciduous and semi-evergreen forest) distributed along a rainfall gradient. The driest sites are dominated by sandy soils, while the soil clay proportion increases at the wet sites. Total organic C was measured with a Leco CN analyser, and SOM quality was assessed by differential scanning calorimetry (DSC) and solid-state 13C CP-MAS NMR analyses. Soil organic C was positively correlated with precipitation (R2 = 0.502, p<0.01) and with soil clay content (R2 = 0.15, p<0.05), and negatively with soil sand content (R2 = 0.308, p<0.001) and with pH (R2 = 0.529, p<0.01), while the C/N ratio was positively correlated only with clay (R2 = 0.350, p<0.01). The driest sites (dry thorn forest) have a lower proportion of thermally recalcitrant organic matter (Q2, combustion at 375-475 °C) than the other sites (p<0.05), and this SOC fraction correlated positively with rainfall (R2 = 0.27, p = 0.01). The Q2 model with the best fit included rainfall, pH, sand, clay, C and C/N (R2 = 0.52, p = 0.01). Principal component analysis explains 77% of the total variance, with the sites distributed along the rainfall gradient on the first component. These results suggest that the 50% of variance was explained

  6. A highly sensitive and multiplexed method for focused transcript analysis.

    PubMed

    Kataja, Kari; Satokari, Reetta M; Arvas, Mikko; Takkinen, Kristiina; Söderlund, Hans

    2006-10-01

    We describe a novel, multiplexed method for focused transcript analysis of tens to hundreds of genes. In this method, TRAC (transcript analysis with the aid of affinity capture), mRNA targets, a set of amplifiable detection probes of distinct sizes, and a biotinylated oligo(dT) capture probe are hybridized in solution. The resulting sandwich hybrids are collected on magnetic streptavidin-coated microparticles and washed. The hybridized probes are eluted, optionally amplified by PCR using a universal primer pair, and detected with laser-induced fluorescence and capillary electrophoresis. The probes were designed using a computer program developed for the purpose. The TRAC method was adapted to 96-well format by utilizing an automated magnetic particle processor. Here we demonstrate simultaneous analysis of 18 Saccharomyces cerevisiae transcripts from two experimental conditions and show a comparison with a qPCR system. The sensitivity of the method is significantly increased by PCR amplification of the hybridized and eluted probes. Our data demonstrate a bias-free use of at least 16 cycles of PCR amplification to increase the probe signal, allowing transcript analysis from 2.5 ng of total mRNA. The method is fast and simple and avoids cDNA conversion. These qualities make it a potential new means for routine analysis and a complementary method to microarrays and high-density chips.

  7. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of the Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (the "Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
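
    A hedged sketch of one simple measure such a tool might report, ranking dispersed inputs by how strongly they separate passing from failing Monte Carlo runs; the inputs, requirement limit and model below are assumptions for illustration, not the CFT's actual algorithm:

      # Sketch: rank dispersed Monte Carlo inputs by how strongly they separate
      # passing from failing runs (point-biserial correlation).  Illustrative only.
      import numpy as np
      from scipy.stats import pointbiserialr

      rng = np.random.default_rng(2024)
      n = 5000
      inputs = {
          "mass_offset": rng.normal(0.0, 1.0, n),
          "thrust_disp": rng.normal(0.0, 1.0, n),
          "wind_speed":  rng.normal(0.0, 1.0, n),
      }
      # Hypothetical requirement: touchdown miss distance must stay under a limit.
      miss = (1.0 + 0.8 * inputs["thrust_disp"] + 0.3 * inputs["wind_speed"]
              + rng.normal(0.0, 0.5, n))
      passed = (miss < 2.0).astype(float)

      for name, x in inputs.items():
          r, p = pointbiserialr(passed, x)
          print(f"{name:12s}  point-biserial r = {r:+.2f}  (p = {p:.1e})")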

  8. Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Burgreen, Gregory W.

    1995-01-01

    An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate-gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, implementing all of them simultaneously results in a factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.
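
    A small sketch of a Bezier-Bernstein parameterization in two dimensions, with assumed control points; in a design procedure of this kind the control-point coordinates would serve as the design variables:

      # Sketch: Bezier-Bernstein parameterization of a 2-D section.  The shape is
      # described by a handful of control points; control points are illustrative.
      import numpy as np
      from math import comb

      def bezier(control_pts, n_samples=101):
          """Evaluate a Bezier curve from (n+1) control points via the Bernstein basis."""
          pts = np.asarray(control_pts, dtype=float)
          n = len(pts) - 1
          u = np.linspace(0.0, 1.0, n_samples)
          basis = np.array([comb(n, k) * u**k * (1 - u)**(n - k) for k in range(n + 1)])
          return basis.T @ pts               # (n_samples, 2) curve coordinates

      # Illustrative upper-surface control polygon for an airfoil-like section.
      upper = [(0.0, 0.0), (0.0, 0.08), (0.3, 0.12), (0.7, 0.06), (1.0, 0.0)]
      curve = bezier(upper)
      print("maximum upper-surface ordinate:", curve[:, 1].max())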

  9. Parametric sensitivity analysis of avian pancreatic polypeptide (APP).

    PubMed

    Zhang, H; Wong, C F; Thacher, T; Rabitz, H

    1995-10-01

    Computer simulations utilizing a classical force field have been widely used to study biomolecular properties. It is important to identify the key force field parameters or structural groups controlling the molecular properties. In the present paper the sensitivity analysis method is applied to study how various partial charges and solvation parameters affect the equilibrium structure and free energy of avian pancreatic polypeptide (APP). The general shape of APP is characterized by its three principal moments of inertia. A molecular dynamics simulation of APP was carried out with the OPLS/Amber force field and a continuum model of solvation energy. The analysis pinpoints the parameters which have the largest (or smallest) impact on the protein equilibrium structure (i.e., the moments of inertia) or free energy. A display of the protein with its atoms colored according to their sensitivities illustrates the patterns of the interactions responsible for the protein stability. The results suggest that the electrostatic interactions play a more dominant role in protein stability than the part of the solvation effect modeled by the atomic solvation parameters.

  10. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.

  11. A Meta-Analysis of Global Urban Land Expansion

    PubMed Central

    Seto, Karen C.; Fragkias, Michail; Güneralp, Burak; Reilly, Michael K.

    2011-01-01

    The conversion of Earth's land surface to urban uses is one of the most irreversible human impacts on the global biosphere. It drives the loss of farmland, affects local climate, fragments habitats, and threatens biodiversity. Here we present a meta-analysis of 326 studies that have used remotely sensed images to map urban land conversion. We report a worldwide observed increase in urban land area of 58,000 km2 from 1970 to 2000. India, China, and Africa have experienced the highest rates of urban land expansion, and the largest change in total urban extent has occurred in North America. Across all regions and for all three decades, urban land expansion rates are higher than or equal to urban population growth rates, suggesting that urban growth is becoming more expansive than compact. Annual growth in GDP per capita drives approximately half of the observed urban land expansion in China but only moderately affects urban expansion in India and Africa, where urban land expansion is driven more by urban population growth. In high income countries, rates of urban land expansion are slower and increasingly related to GDP growth. However, in North America, population growth contributes more to urban expansion than it does in Europe. Much of the observed variation in urban expansion was not captured by either population, GDP, or other variables in the model. This suggests that contemporary urban expansion is related to a variety of factors difficult to observe comprehensively at the global level, including international capital flows, the informal economy, land use policy, and generalized transport costs. Using the results from the global model, we develop forecasts for new urban land cover using SRES Scenarios. Our results show that by 2030, global urban land cover will increase between 430,000 km2 and 12,568,000 km2, with an estimate of 1,527,000 km2 more likely. PMID:21876770

  12. Sensitivity of global climate model simulations to increased stomatal resistance and CO2 increases

    SciTech Connect

    Henderson-Sellers, A.; McGuffie, K.; Gross, C.

    1995-07-01

    Increasing levels of atmospheric CO2 will not only modify climate, they will also likely increase the water-use efficiency of plants by decreasing stomatal openings. The effect of the imposition of "doubled stomatal resistance" on climate is investigated in off-line simulations with the Biosphere-Atmosphere Transfer Scheme (BATS) and in two sets of global climate model simulations: for present-day and doubled atmospheric CO2 concentrations. The anticipated evapotranspiration decrease is seen most clearly in the boreal forests in the summer although, for the present-day climate (but not at 2 x CO2), there are also noticeable responses in the tropical forests in South America. In the latitude zone 44°N to 58°N, evapotranspiration decreases by about 15 W m-2, temperatures increase by about 2 K, and the sensible heat flux increases by about 15 W m-2. Soil moisture is often, but less extensively, increased, which can cause increases in runoff. The responses at 2 x CO2 are larger in the 44°N to 58°N zone than elsewhere. Globally, the impact of imposing a doubled stomatal resistance in the present-day climate is an increase in the annually averaged surface air temperature of 0.13 K and a reduction in total precipitation of 0.82%. If both the atmospheric CO2 content and the stomatal resistance are doubled, the global responses in surface air temperature and precipitation are +2.72 K and +5.01%, compared with +2.67 K and +7.73% if CO2 is doubled but stomatal resistance remains unchanged as in the usual "greenhouse" experiment. Doubling stomatal resistance as well as atmospheric CO2 results in increased soil moisture in northern midlatitudes in summer. 40 refs., 17 figs., 5 tabs.

  13. Parameters sensitivity analysis for a crop growth model applied to winter wheat in the Huanghuaihai Plain in China

    NASA Astrophysics Data System (ADS)

    Liu, M.; He, B.; Lü, A.; Zhou, L.; Wu, J.

    2014-06-01

    Parameter sensitivity analysis is a crucial step in effective model calibration. It quantitatively apportions the variation of model output to different sources of variation, and identifies how "sensitive" a model is to changes in the values of model parameters. By calibrating the parameters to which model outputs are sensitive, parameter estimation becomes more efficient. Due to uncertainties associated with yield estimates in a regional assessment, field-based models that perform well at the field scale are not accurate enough for modeling at the regional scale. Conducting parameter sensitivity analysis at the regional scale and analyzing the differences in parameter sensitivity between stations would make model calibration and validation in different sub-regions more efficient, and would benefit application of the model at the regional scale. Through simulating 2000 × 22 samples for 10 stations in the Huanghuaihai Plain, this study found that TB (Optimal temperature), HI (Normal harvest index), WA (Potential radiation use efficiency), BN2 (Normal fraction of N in crop biomass at mid-season) and RWPC1 (Fraction of root weight at emergence) are more sensitive than other parameters. Parameters that determine nutrient supply and LAI development have higher global sensitivity indices than first-order indices, indicating interactions. For spatial application, soil diversity is crucial because soil accounts for the differences in crop parameter sensitivity indices between sites.

  14. Analysis of DNA Cytosine Methylation Patterns Using Methylation-Sensitive Amplification Polymorphism (MSAP).

    PubMed

    Guevara, María Ángeles; de María, Nuria; Sáez-Laguna, Enrique; Vélez, María Dolores; Cervera, María Teresa; Cabezas, José Antonio

    2017-01-01

    Different molecular techniques have been developed to study either the global level of methylated cytosines or methylation at specific gene sequences. One of them is the methylation-sensitive amplified polymorphism technique (MSAP), which is a modification of amplified fragment length polymorphism (AFLP). It has been used to study methylation of anonymous CCGG sequences in different fungi, plants, and animal species. The main variation of this technique resides in the use of isoschizomers with different methylation sensitivity (such as HpaII and MspI) as a frequent-cutter restriction enzyme. For each sample, MSAP analysis is performed using both EcoRI/HpaII- and EcoRI/MspI-digested samples. A comparative analysis between EcoRI/HpaII and EcoRI/MspI fragment patterns allows the identification of two types of polymorphisms: (1) methylation-insensitive polymorphisms that show common EcoRI/HpaII and EcoRI/MspI patterns but are detected as polymorphic amplified fragments among samples, and (2) methylation-sensitive polymorphisms, which are associated with the amplified fragments that differ in their presence or absence or in their intensity between EcoRI/HpaII and EcoRI/MspI patterns. This chapter describes a detailed protocol of this technique and discusses the modifications that can be applied to adjust the technology to different species of interest.

  15. A Variance Decomposition Approach to Uncertainty Quantification and Sensitivity Analysis of the J&E Model

    PubMed Central

    Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.

    2015-01-01

    The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
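
    To make the variance-decomposition idea concrete, the sketch below estimates first-order and total-effect Sobol indices for a three-input toy function using the standard pick-and-freeze (Saltelli/Jansen) estimators. It is a generic illustration only; the J&E model, its parameter ranges, and the study's sampling setup are not reproduced here.

      import numpy as np

      rng = np.random.default_rng(1)

      def model(x):
          """Toy stand-in for a model with three uncertain inputs."""
          return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.2 * x[:, 0] * x[:, 2]

      k, N = 3, 20000
      A = rng.uniform(0, 1, (N, k))
      B = rng.uniform(0, 1, (N, k))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(k):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                           # A with column i taken from B
          fABi = model(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var          # first-order index
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-effect index
          print(f"x{i}: S1 = {S1:.2f}, ST = {ST:.2f}")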

  16. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    SciTech Connect

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.

  17. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

    Method (Satisfying Method), Disjunctive Method, Standard Level, Elimination by Aspects, Lexicographic Semiorder, Lexicographic Method, Ordinal Weighted Sum...framework for sensitivity analysis of hierarchical additive value models and standardizes the sensitivity analysis notation and terminology. Finally

  18. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    NASA Astrophysics Data System (ADS)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost, and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation of, and the interaction between, error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (root mean square) shape error is a random quantity with an exponential probability distribution and features great dispersion; with increasing F/D and D, both the mean value and the standard deviation of the shape error increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation has the greatest effect, with a much higher weight than the others; pressure variation ranks second; and errors in membrane thickness and elastic modulus rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
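
    The sketch below mimics the sampling-plus-regression workflow described above on a toy problem: Latin hypercube samples of four error sources are propagated through a stand-in response, and standardized regression coefficients are used to rank their importance. SciPy's qmc module, the variable names, ranges, and the linear toy response are all assumptions for illustration; the actual manufacture simulations are not reproduced.

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(2)

      # Four hypothetical error sources, sampled on normalized ranges.
      names = ["thickness", "modulus", "boundary_dev", "pressure_var"]
      sampler = qmc.LatinHypercube(d=4, seed=2)
      X = qmc.scale(sampler.random(n=500), l_bounds=[-1] * 4, u_bounds=[1] * 4)

      # Toy response standing in for the RMS shape error from a manufacture simulation.
      rms = 0.2 * X[:, 0] + 0.25 * X[:, 1] + 1.0 * X[:, 2] + 0.6 * X[:, 3] + rng.normal(0, 0.1, 500)

      # Standardized regression coefficients as a simple global importance measure.
      Xs = (X - X.mean(axis=0)) / X.std(axis=0)
      ys = (rms - rms.mean()) / rms.std()
      beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
      for name, b in sorted(zip(names, beta), key=lambda p: -abs(p[1])):
          print(f"{name:14s} SRC = {b:+.2f}")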

  19. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases for two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most outputs and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peak time matching was elaborated in order to complement
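
    The Morris screening method named above can be illustrated in a few lines: elementary effects are computed by one-at-a-time perturbations along random trajectories and summarized by the mean absolute effect (mu*) and its standard deviation (sigma). The sketch below is a minimal, generic version on a toy three-input function; it does not reproduce the Polyphemus/ldX setup or the study's input definitions.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(x):
          """Toy output standing in for an aggregated dispersion result."""
          return x[0] ** 2 + 0.1 * x[1] + 2.0 * x[0] * x[2]

      k, r, delta = 3, 30, 0.1           # number of inputs, trajectories, step size
      effects = np.zeros((r, k))
      for t in range(r):
          x = rng.uniform(0, 1, k)
          for i in rng.permutation(k):   # one-at-a-time moves in random order
              x_new = x.copy()
              x_new[i] = x[i] + delta if x[i] + delta <= 1 else x[i] - delta
              effects[t, i] = (model(x_new) - model(x)) / (x_new[i] - x[i])
              x = x_new

      mu_star = np.abs(effects).mean(axis=0)   # overall influence
      sigma = effects.std(axis=0)              # nonlinearity / interaction effects
      for i in range(k):
          print(f"x{i}: mu* = {mu_star[i]:.2f}, sigma = {sigma[i]:.2f}")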

  20. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis

  1. GPU-based Integration with Application in Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modelling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is Monte Carlo based, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) Grid implementation of the DEM, and (ii) Grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but also for parallel implementations. Good scrambling is
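
    A CPU-side sketch of the scrambled quasirandom sampling discussed above is shown below, using SciPy's qmc module (an assumption about tooling; the MCSAES GPU generator itself is not shown). An Owen-scrambled Sobol sequence is drawn and used to estimate a toy four-dimensional integral whose exact value is 1.

      import numpy as np
      from scipy.stats import qmc

      # Owen-scrambled Sobol points in 4 dimensions; 2**12 points keep the
      # base-2 balance properties of the sequence.
      sobol = qmc.Sobol(d=4, scramble=True, seed=42)
      points = sobol.random_base2(m=12)

      # Toy integrand standing in for one term of a sensitivity computation.
      f = np.prod(1.0 + 0.5 * (points - 0.5), axis=1)
      print("Scrambled QMC estimate:", f.mean())

      # Plain Monte Carlo with the same number of points, for comparison.
      u = np.random.default_rng(0).uniform(size=(4096, 4))
      print("Plain MC estimate:", np.prod(1.0 + 0.5 * (u - 0.5), axis=1).mean())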

  2. Development, sensitivity analysis, and uncertainty quantification of high-fidelity arctic sea ice models.

    SciTech Connect

    Peterson, Kara J.; Bochev, Pavel Blagoveston; Paskaleva, Biliana S.

    2010-09-01

    Arctic sea ice is an important component of the global climate system and due to feedback effects the Arctic ice cover is changing rapidly. Predictive mathematical models are of paramount importance for accurate estimates of the future ice trajectory. However, the sea ice components of Global Climate Models (GCMs) vary significantly in their prediction of the future state of Arctic sea ice and have generally underestimated the rate of decline in minimum sea ice extent seen over the past thirty years. One of the contributing factors to this variability is the sensitivity of the sea ice to model physical parameters. A new sea ice model that has the potential to improve sea ice predictions incorporates an anisotropic elastic-decohesive rheology and dynamics solved using the material-point method (MPM), which combines Lagrangian particles for advection with a background grid for gradient computations. We evaluate the variability of the Los Alamos National Laboratory CICE code and the MPM sea ice code for a single year simulation of the Arctic basin using consistent ocean and atmospheric forcing. Sensitivities of ice volume, ice area, ice extent, root mean square (RMS) ice speed, central Arctic ice thickness, and central Arctic ice speed with respect to ten different dynamic and thermodynamic parameters are evaluated both individually and in combination using the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA). We find similar responses for the two codes and some interesting seasonal variability in the strength of the parameters on the solution.

  3. Global pattern analysis and classification of dermoscopic images using textons

    NASA Astrophysics Data System (ADS)

    Sadeghi, Maryam; Lee, Tim K.; McLean, David; Lui, Harvey; Atkins, M. Stella

    2012-02-01

    Detecting and classifying global dermoscopic patterns are crucial steps for distinguishing melanocytic lesions from non-melanocytic ones. An important stage of melanoma diagnosis uses pattern analysis methods such as the 7-point checklist and the Menzies method. In this paper, we present a novel approach to texture analysis and classification of 5 classes of global lesion patterns (reticular, globular, cobblestone, homogeneous, and parallel pattern) in dermoscopic images. Our statistical approach models the texture by the joint probability distribution of filter responses using a comprehensive set of state-of-the-art filter banks. This distribution is represented by the frequency histogram of filter response cluster centers, called textons. We have also examined two other methods, the Joint Distribution of Intensities (JDI) and the Convolutional Restricted Boltzmann Machine (CRBM), to learn the pattern-specific features to be used for textons. The classification performance is compared over the Leung and Malik (LM) filters, Root Filter Set (RFS), Maximum Response Filters (MR8), Schmid, Laws and our proposed filter set, as well as CRBM and JDI. We analyzed 375 images of the 5 classes of patterns. Our experiments show that the joint distribution of color (JDC) in the L*a*b* color space outperforms the other color spaces, with a correct classification rate of 86.8%.

  4. Global mode and frequency response analysis of low-density jets

    NASA Astrophysics Data System (ADS)

    Coenen, W.; Lesshafft, L.; Garnaud, X.; Sevilla, A.

    2014-11-01

    We present a global stability analysis of a low-density jet, where the wavepacket structures are temporal eigenmodes of the linearized equations of motion in a 2D domain. As a base state we employ a numerical solution of the low-Mach number Navier-Stokes equations. The jet is characterized through the jet-to-ambient density ratio, the Reynolds number, and the momentum thickness of the velocity profile at the jet exit plane. The linear global mode analysis shows that for certain combinations of the control parameters, an isolated eigenmode dominates the eigenvalue spectrum. Its associated growth rate can be used to construct a neutral curve in the parameter space that agrees well with the experimentally observed onset of self-sustained oscillations (Hallberg & Strykowski, JFM, 2006). However, for high values of the Reynolds number, the construction of a neutral curve based on the spectrum loses validity, since for these cases the spectrum is dominated by a continuous branch of eigenvalues, sensitive to changes in domain length and grid refinement. Finally, the flow response to external forcing in a globally stable setting is investigated through the computation of the pseudospectrum, and is found to be dominated by a resonance of the stable eigenmode. Supported by Spanish MINECO under Project DPI 2011-28356-C03-02.

  5. Global biogenic volatile organic compound emissions in the ORCHIDEE and MEGAN models and sensitivity to key parameters

    NASA Astrophysics Data System (ADS)

    Messina, Palmira; Lathière, Juliette; Sindelarova, Katerina; Vuichard, Nicolas; Granier, Claire; Ghattas, Josefine; Cozic, Anne; Hauglustaine, Didier A.

    2016-11-01

    A new version of the biogenic volatile organic compounds (BVOCs) emission scheme has been developed in the global vegetation model ORCHIDEE (Organizing Carbon and Hydrology in Dynamic EcosystEm), which includes an extended list of biogenic emitted compounds, updated emission factors (EFs), a dependency on light for almost all compounds and a multi-layer radiation scheme. Over the 2000-2009 period, using this model, we estimate mean global emissions of 465 Tg C yr-1 for isoprene, 107.5 Tg C yr-1 for monoterpenes, 38 Tg C yr-1 for methanol, 25 Tg C yr-1 for acetone and 24 Tg C yr-1 for sesquiterpenes. The model results are compared to state-of-the-art emission budgets, showing that the ORCHIDEE emissions are within the range of published estimates. ORCHIDEE BVOC emissions are compared to the estimates of the Model of Emissions of Gases and Aerosols from Nature (MEGAN), which is largely used throughout the biogenic emissions and atmospheric chemistry community. Our results show that global emission budgets of the two models are, in general, in good agreement. ORCHIDEE emissions are 8 % higher for isoprene, 8 % lower for methanol, 17 % higher for acetone, 18 % higher for monoterpenes and 39 % higher for sesquiterpenes, compared to the MEGAN estimates. At the regional scale, the largest differences between ORCHIDEE and MEGAN are highlighted for isoprene in northern temperate regions, where ORCHIDEE emissions are higher by 21 Tg C yr-1, and for monoterpenes, where they are higher by 4.4 and 10.2 Tg C yr-1 in northern and southern tropical regions compared to MEGAN. The geographical differences between the two models are mainly associated with different EF and plant functional type (PFT) distributions, while differences in the seasonal cycle are mostly driven by differences in the leaf area index (LAI). Sensitivity tests are carried out for both models to explore the response to key variables or parameters such as LAI and light-dependent fraction (LDF). The ORCHIDEE and

  6. Numerical daemons in hydrological modeling: Effects on uncertainty assessment, sensitivity analysis and model predictions

    NASA Astrophysics Data System (ADS)

    Kavetski, D.; Clark, M. P.; Fenicia, F.

    2011-12-01

    instances? This talk ties together a range of empirical observations of the impact of model numerics on the analysis methodologies used for sensitivity analysis, parameter calibration and predictive application of hydrological models. We discuss the impact of model implementation, including time-stepping schemes and the handling of nonsmooth constitutive relationships, on local and global sensitivity analysis, on parameter optimization using gradient-based and evolutionary algorithms, and on uncertainty assessment using semi-analytical and Monte Carlo methods. We also report on the predictive reliability of models over a range of structural complexity, calibrated to the same data but implemented using different numerical methods. Results are drawn from a range of case studies, including the 12 MOPEX catchments in the USA, as well as experimental catchments in New Zealand and Europe. We conclude with a range of recommendations that aim to avoid unnecessary numerical artifacts in hydrological and other environmental models, and discuss how model numerics can be exploited to simplify different aspects of model analysis and application.

  7. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
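
    The factorization-reuse idea described above (the nominal LU factors are reused for the sensitivity solves, so no additional matrices need to be factored) can be sketched in a few lines of linear algebra. The example below uses a small random matrix and SciPy's lu_factor/lu_solve; it is a generic illustration, not the cascade flow solver.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      rng = np.random.default_rng(4)

      # Nominal system A(p) x = b(p) at the design point (toy stand-in).
      n = 6
      A = rng.normal(size=(n, n)) + n * np.eye(n)
      b = rng.normal(size=n)
      lu, piv = lu_factor(A)              # factored once for the nominal solve
      x = lu_solve((lu, piv), b)

      # Differentiating A x = b with respect to a design parameter p gives
      #     A dx/dp = db/dp - (dA/dp) x,
      # so the same LU factors are reused and no new factorization is needed.
      dA_dp = 0.01 * rng.normal(size=(n, n))
      db_dp = 0.01 * rng.normal(size=n)
      dx_dp = lu_solve((lu, piv), db_dp - dA_dp @ x)
      print(dx_dp)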

  8. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    SciTech Connect

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
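
    As a minimal illustration of the forward method mentioned above, the sketch below augments a one-parameter ODE with its sensitivity equation and integrates both with SciPy's solve_ivp (an assumption about tooling; the authors' DAE/PDE software is not shown). For y' = -k*y, differentiating with respect to k gives s' = -y - k*s for s = dy/dk, which can be checked against the analytical sensitivity.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, z, k):
          """Augmented system: y' = -k*y and s' = -y - k*s, with s = dy/dk."""
          y, s = z
          return [-k * y, -y - k * s]

      k, y0 = 0.7, 2.0
      sol = solve_ivp(rhs, (0.0, 5.0), [y0, 0.0], args=(k,), rtol=1e-8)
      y_end, s_end = sol.y[:, -1]

      # Analytical check: y = y0*exp(-k*t), so dy/dk = -t*y0*exp(-k*t).
      t_end = sol.t[-1]
      print("forward sensitivity :", s_end)
      print("analytical dy/dk    :", -t_end * y0 * np.exp(-k * t_end))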

  9. A test of sensitivity to convective transport in a global atmospheric CO2 simulation

    NASA Astrophysics Data System (ADS)

    Bian, H.; Kawa, S. R.; Chin, M.; Pawson, S.; Zhu, Z.; Rasch, P.; Wu, S.

    2006-11-01

    Two approximations to convective transport have been implemented in an offline chemistry transport model (CTM) to explore the impact on calculated atmospheric CO2 distributions. Global CO2 in the year 2000 is simulated using the CTM driven by assimilated meteorological fields from NASA's Goddard Earth Observation System Data Assimilation System, Version 4 (GEOS-4). The model simulates atmospheric CO2 by adopting the same CO2 emission inventory and dynamical modules as described in Kawa et al. (convective transport scheme denoted as Conv1). Conv1 approximates the convective transport by using the bulk convective mass fluxes to redistribute trace gases. The alternate approximation, Conv2, partitions fluxes into updraft and downdraft, as well as into entrainment and detrainment, and has potential to yield a more realistic simulation of vertical redistribution through deep convection. Replacing Conv1 by Conv2 results in an overestimate of CO2 over biospheric sink regions. The largest discrepancies result in a CO2 difference of about 7.8 ppm in the July NH boreal forest, which is about 30% of the CO2 seasonality for that area. These differences are compared to those produced by emission scenario variations constrained by the framework of Intergovernmental Panel on Climate Change (IPCC) to account for possible land use change and residual terrestrial CO2 sink. It is shown that the overestimated CO2 driven by Conv2 can be offset by introducing these supplemental emissions.

  10. Sensitivity of leaf size and shape to climate: Global patterns and paleoclimatic applications

    USGS Publications Warehouse

    Peppe, D.J.; Royer, D.L.; Cariglino, B.; Oliver, S.Y.; Newman, S.; Leight, E.; Enikolopov, G.; Fernandez-Burgos, M.; Herrera, F.; Adams, J.M.; Correa, E.; Currano, E.D.; Erickson, J.M.; Hinojosa, L.F.; Hoganson, J.W.; Iglesias, A.; Jaramillo, C.A.; Johnson, K.R.; Jordan, G.J.; Kraft, N.J.B.; Lovelock, E.C.; Lusk, C.H.; Niinemets, U.; Penuelas, J.; Rapson, G.; Wing, S.L.; Wright, I.J.

    2011-01-01

    Paleobotanists have long used models based on leaf size and shape to reconstruct paleoclimate. However, most models incorporate a single variable or use traits that are not physiologically or functionally linked to climate, limiting their predictive power. Further, they often underestimate paleotemperature relative to other proxies. Here we quantify leaf-climate correlations from 92 globally distributed, climatically diverse sites, and explore potential confounding factors. Multiple linear regression models for mean annual temperature (MAT) and mean annual precipitation (MAP) are developed and applied to nine well-studied fossil floras. We find that leaves in cold climates typically have larger, more numerous teeth, and are more highly dissected. Leaf habit (deciduous vs evergreen), local water availability, and phylogenetic history all affect these relationships. Leaves in wet climates are larger and have fewer, smaller teeth. Our multivariate MAT and MAP models offer moderate improvements in precision over univariate approaches (±4.0 vs 4.8°C for MAT) and strong improvements in accuracy. For example, our provisional MAT estimates for most North American fossil floras are considerably warmer and in better agreement with independent paleoclimate evidence. Our study demonstrates that the inclusion of additional leaf traits that are functionally linked to climate improves paleoclimate reconstructions. This work also illustrates the need for better understanding of the impact of phylogeny and leaf habit on leaf-climate relationships. © 2011 The Authors. New Phytologist © 2011 New Phytologist Trust.

  11. Sensitivity of leaf size and shape to climate: global patterns and paleoclimatic applications.

    PubMed

    Peppe, Daniel J; Royer, Dana L; Cariglino, Bárbara; Oliver, Sofia Y; Newman, Sharon; Leight, Elias; Enikolopov, Grisha; Fernandez-Burgos, Margo; Herrera, Fabiany; Adams, Jonathan M; Correa, Edwin; Currano, Ellen D; Erickson, J Mark; Hinojosa, Luis Felipe; Hoganson, John W; Iglesias, Ari; Jaramillo, Carlos A; Johnson, Kirk R; Jordan, Gregory J; Kraft, Nathan J B; Lovelock, Elizabeth C; Lusk, Christopher H; Niinemets, Ulo; Peñuelas, Josep; Rapson, Gillian; Wing, Scott L; Wright, Ian J

    2011-05-01

    • Paleobotanists have long used models based on leaf size and shape to reconstruct paleoclimate. However, most models incorporate a single variable or use traits that are not physiologically or functionally linked to climate, limiting their predictive power. Further, they often underestimate paleotemperature relative to other proxies. • Here we quantify leaf-climate correlations from 92 globally distributed, climatically diverse sites, and explore potential confounding factors. Multiple linear regression models for mean annual temperature (MAT) and mean annual precipitation (MAP) are developed and applied to nine well-studied fossil floras. • We find that leaves in cold climates typically have larger, more numerous teeth, and are more highly dissected. Leaf habit (deciduous vs evergreen), local water availability, and phylogenetic history all affect these relationships. Leaves in wet climates are larger and have fewer, smaller teeth. Our multivariate MAT and MAP models offer moderate improvements in precision over univariate approaches (± 4.0 vs 4.8°C for MAT) and strong improvements in accuracy. For example, our provisional MAT estimates for most North American fossil floras are considerably warmer and in better agreement with independent paleoclimate evidence. • Our study demonstrates that the inclusion of additional leaf traits that are functionally linked to climate improves paleoclimate reconstructions. This work also illustrates the need for better understanding of the impact of phylogeny and leaf habit on leaf-climate relationships.

  12. A Test of Sensitivity to Convective Transport in a Global Atmospheric CO2 Simulation

    NASA Technical Reports Server (NTRS)

    Bian, H.; Kawa, S. R.; Chin, M.; Pawson, S.; Zhu, Z.; Rasch, P.; Wu, S.

    2006-01-01

    Two approximations to convective transport have been implemented in an offline chemistry transport model (CTM) to explore the impact on calculated atmospheric CO2 distributions. Global CO2 in the year 2000 is simulated using the CTM driven by assimilated meteorological fields from NASA's Goddard Earth Observation System Data Assimilation System, Version 4 (GEOS-4). The model simulates atmospheric CO2 by adopting the same CO2 emission inventory and dynamical modules as described in Kawa et al. (convective transport scheme denoted as Conv1). Conv1 approximates the convective transport by using the bulk convective mass fluxes to redistribute trace gases. The alternate approximation, Conv2, partitions fluxes into updraft and downdraft, as well as into entrainment and detrainment, and has potential to yield a more realistic simulation of vertical redistribution through deep convection. Replacing Conv1 by Conv2 results in an overestimate of CO2 over biospheric sink regions. The largest discrepancies result in a CO2 difference of about 7.8 ppm in the July NH boreal forest, which is about 30% of the CO2 seasonality for that area. These differences are compared to those produced by emission scenario variations constrained by the framework of Intergovernmental Panel on Climate Change (IPCC) to account for possible land use change and residual terrestrial CO2 sink. It is shown that the overestimated CO2 driven by Conv2 can be offset by introducing these supplemental emissions.

  13. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
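
    For reference, one common way to compute the NRMSE used above is shown below; the report does not state its exact normalization convention, so the capacity-normalized form and the numbers here are assumptions for illustration only.

      import numpy as np

      def nrmse(forecast, observed, capacity):
          """Root mean squared error normalized by plant capacity."""
          err = np.asarray(forecast) - np.asarray(observed)
          return np.sqrt(np.mean(err ** 2)) / capacity

      # Hypothetical day-ahead power forecasts vs. actuals for a 51-kW plant.
      obs = [40.0, 35.5, 28.0, 12.0]
      fc = [42.5, 33.0, 30.5, 10.0]
      print(f"NRMSE = {100 * nrmse(fc, obs, capacity=51.0):.1f}%")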

  14. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.

  15. Space Shuttle Orbiter entry guidance and control system sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Powell, R. W.

    1976-01-01

    An approach has been developed to determine the guidance and control system sensitivity to off-nominal aerodynamics for the Space Shuttle Orbiter during entry. This approach, which uses a nonlinear six-degree-of-freedom interactive, digital simulation, has been applied to both the longitudinal and lateral-directional axes for a portion of the orbiter entry. Boundary values for each of the aerodynamic parameters have been identified, the key parameters have been determined, and system modifications that will increase system tolerance to off-nominal aerodynamics have been recommended. The simulations were judged by specified criteria and the performance was evaluated by use of key dependent variables. The analysis is now being expanded to include the latest shuttle guidance and control systems throughout the entry speed range.

  16. Neutron activation analysis; A sensitive test for trace elements

    SciTech Connect

    Hossain, T.Z. . Ward Lab.)

    1992-01-01

    This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10^12 neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10^-8) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.

  17. Sensitivity analysis and optimization of thin-film thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Harsha Choday, Sri; Roy, Kaushik

    2013-06-01

    The cooling performance of a thermoelectric (TE) material is dependent on the figure-of-merit (ZT = S²σT/κ), where S is the Seebeck coefficient, σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to power factor (S²σ) and thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter on the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in best cooling is lower than that of the highest ZT. We also establish the level of contact parasitics that are needed such that their impact on TE cooling is negligible.
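
    The relative importance of S, sigma, and kappa in ZT can be expressed through normalized (logarithmic) sensitivities, which the sketch below estimates by finite differences; analytically they are +2 for S, +1 for sigma and -1 for kappa. The parameter values are merely representative of a Bi2Te3-like material, and the sketch does not reproduce the paper's full cooling-performance analysis, which also accounts for contact parasitics.

      import numpy as np

      def zt(S, sigma, kappa, T=300.0):
          """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa."""
          return S ** 2 * sigma * T / kappa

      # Representative values (illustrative only): S in V/K, sigma in S/m, kappa in W/m/K.
      params = {"S": 200e-6, "sigma": 1.0e5, "kappa": 1.5}

      # Normalized sensitivity d(ln ZT) / d(ln p) via central finite differences.
      for name in params:
          hi, lo = dict(params), dict(params)
          hi[name] *= 1.01
          lo[name] *= 0.99
          sens = (np.log(zt(**hi)) - np.log(zt(**lo))) / (np.log(1.01) - np.log(0.99))
          print(f"{name:6s} d lnZT / d ln{name} = {sens:+.2f}")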

  18. Sensitivity and uncertainty analysis of the recharge boundary condition

    NASA Astrophysics Data System (ADS)

    Jyrkama, M. I.; Sykes, J. F.

    2006-01-01

    The reliability analysis method is integrated with MODFLOW to study the impact of recharge on the groundwater flow system at a study area in New Jersey. The performance function is formulated in terms of head or flow rate at a pumping well, while the recharge sensitivity vector is computed efficiently by implementing the adjoint method in MODFLOW. The developed methodology not only quantifies the reliability of head at the well in terms of uncertainties in the recharge boundary condition, but it also delineates areas of recharge that have the highest impact on the head and flow rate at the well. The results clearly identify the most important land use areas that should be protected in order to maintain the head and hence production at the pumping well. These areas extend far beyond the steady state well capture zone used for land use planning and management within traditional wellhead protection programs.

  19. Sensitivity analysis for causal inference using inverse probability weighting.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C

    2011-09-01

    Evaluation of impact of potential uncontrolled confounding is an important component for causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score that can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
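
    The sketch below gives a simulated, simplified flavor of the approach: an inverse-probability-weighted estimate of the average treatment effect is computed with the (known) propensity score, and the propensity is then perturbed by a bounded multiplicative error to see how far unmeasured confounding of that size could move the estimate. The data-generating model and the exp(+/-gamma) perturbation are assumptions for illustration, not the authors' full parametric framework.

      import numpy as np

      rng = np.random.default_rng(5)

      # Simulated data with a known propensity model and a true effect of 1.0.
      n = 20000
      x = rng.normal(size=n)
      e = 1.0 / (1.0 + np.exp(-0.5 * x))        # true propensity score
      a = rng.binomial(1, e)                    # treatment indicator
      y = 1.0 * a + x + rng.normal(size=n)      # outcome

      def ipw_ate(y, a, e):
          """Inverse-probability-weighted average treatment effect."""
          return np.mean(a * y / e) - np.mean((1 - a) * y / (1 - e))

      print("IPW estimate:", ipw_ate(y, a, e))

      # Sensitivity analysis: multiply the propensity by a bounded error exp(+/-gamma).
      for gamma in (0.1, 0.3):
          lo = ipw_ate(y, a, np.clip(e * np.exp(-gamma), 1e-3, 1 - 1e-3))
          hi = ipw_ate(y, a, np.clip(e * np.exp(+gamma), 1e-3, 1 - 1e-3))
          print(f"gamma = {gamma}: estimate in roughly [{min(lo, hi):.2f}, {max(lo, hi):.2f}]")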

  20. Control sensitivity indices for stability analysis of HVdc systems

    SciTech Connect

    Nayak, O.B.; Gole, A.M.; Chapman, D.G.; Davies, J.B.

    1995-10-01

    This paper presents a new concept called the "Control Sensitivity Index" or CSI, for the stability analysis of HVdc converters connected to weak ac systems. The CSI for a particular control mode can be defined as the ratio of incremental changes in the two system variables that are most relevant to that control mode. The index provides valuable information on the stability of the system and, unlike other approaches, aids in the design of the controller. It also plays an important role in defining non-linear gains for the controller. This paper offers a generalized formulation of the CSI and demonstrates its application through an analysis of the CSI for three modes of HVdc control. The conclusions drawn from the analysis are confirmed by a detailed electromagnetic transients simulation of the ac/dc system. The paper concludes that the CSI can be used to improve the controller design and, for an inverter in a weak ac system, the conventional voltage control mode is more stable than the conventional γ control mode.

  1. Mitochondrial Complex I Is a Global Regulator of Secondary Metabolism, Virulence and Azole Sensitivity in Fungi

    PubMed Central

    Bromley, Mike; Johns, Anna; Davies, Emma; Fraczek, Marcin; Mabey Gilsenan, Jane; Kurbatova, Natalya; Keays, Maria; Kapushesky, Misha; Gut, Marta; Gut, Ivo; Denning, David W.; Bowyer, Paul

    2016-01-01

    Recent estimates of the global burden of fungal disease suggest that their incidence has been drastically underestimated and that mortality may rival that of malaria or tuberculosis. Azoles are the principal class of antifungal drug and the only available oral treatment for fungal disease. The recent occurrence and increase in azole resistance is a major concern worldwide. Known azole resistance mechanisms include over-expression of efflux pumps and mutation of the gene encoding the target protein cyp51a; however, for one of the most important fungal pathogens of humans, Aspergillus fumigatus, much of the observed azole resistance does not appear to involve such mechanisms. Here we present evidence that azole resistance in A. fumigatus can arise through mutation of components of mitochondrial complex I. Deletion mutants of the 29.9 KD subunit of this complex are azole resistant, less virulent, and exhibit dysregulation of secondary metabolite gene clusters in a manner analogous to deletion mutants of the secondary metabolism regulator, LaeA. Additionally, we observe that a mutation leading to an E180D amino acid change in the 29.9 KD subunit is strongly associated with clinical azole-resistant A. fumigatus isolates. Evidence presented in this paper suggests that complex I may play a role in the hypoxic response and that one possible mechanism for cell death during azole treatment is a dysfunctional hypoxic response that may be restored by dysregulation of complex I. Both deletion of the 29.9 KD subunit of complex I and azole treatment alone profoundly change expression of gene clusters involved in secondary metabolism and immunotoxin production, raising potential concerns about long-term azole therapy. PMID:27438017

  2. Mitochondrial Complex I Is a Global Regulator of Secondary Metabolism, Virulence and Azole Sensitivity in Fungi.

    PubMed

    Bromley, Mike; Johns, Anna; Davies, Emma; Fraczek, Marcin; Mabey Gilsenan, Jane; Kurbatova, Natalya; Keays, Maria; Kapushesky, Misha; Gut, Marta; Gut, Ivo; Denning, David W; Bowyer, Paul

    2016-01-01

    Recent estimates of the global burden of fungal disease suggest that their incidence has been drastically underestimated and that mortality may rival that of malaria or tuberculosis. Azoles are the principal class of antifungal drug and the only available oral treatment for fungal disease. The recent occurrence and increase in azole resistance is a major concern worldwide. Known azole resistance mechanisms include over-expression of efflux pumps and mutation of the gene encoding the target protein cyp51a; however, for one of the most important fungal pathogens of humans, Aspergillus fumigatus, much of the observed azole resistance does not appear to involve such mechanisms. Here we present evidence that azole resistance in A. fumigatus can arise through mutation of components of mitochondrial complex I. Deletion mutants of the 29.9 KD subunit of this complex are azole resistant, less virulent, and exhibit dysregulation of secondary metabolite gene clusters in a manner analogous to deletion mutants of the secondary metabolism regulator, LaeA. Additionally, we observe that a mutation leading to an E180D amino acid change in the 29.9 KD subunit is strongly associated with clinical azole-resistant A. fumigatus isolates. Evidence presented in this paper suggests that complex I may play a role in the hypoxic response and that one possible mechanism for cell death during azole treatment is a dysfunctional hypoxic response that may be restored by dysregulation of complex I. Both deletion of the 29.9 KD subunit of complex I and azole treatment alone profoundly change expression of gene clusters involved in secondary metabolism and immunotoxin production, raising potential concerns about long-term azole therapy.

  3. Investigation of measurement sensitivities in cross-correlation Doppler global velocimetry

    NASA Astrophysics Data System (ADS)

    Cadel, Daniel R.; Lowe, K. Todd

    2016-11-01

    Cross-correlation Doppler global velocimetry (CC-DGV) is a flow measurement technique based on the estimation of Doppler frequency shift of scattered light by means of cross-correlating two filtered intensity signals. The signal characteristics of CC-DGV result in fundamental limits for estimation variance as well as the possibility for estimator bias. The current study assesses these aspects theoretically and via Monte Carlo signal simulations. A signal model is developed using canonical numerical functions for the iodine absorption cell and incorporating Poisson and Gaussian signal noise models. Along with consideration of the analytical form of the Cramér-Rao lower bound, best practices for system settings are discussed. The CC-DGV signal processing routine is then assessed by a series of Monte Carlo simulations studying the effect of temperature mismatch between flow signal and reference detector cells, velocity magnitude, and discretization error in the frequency modulation. A measurement bias was observed; the magnitude of the bias is a weak function of the cell temperature mismatch, but it is independent of the flow velocity magnitude. The measurement variance was found to approach the Cramér-Rao lower bound for optimized conditions. A cyclical bias error resulting from the discrete nature of the laser frequency sweep is also observed with maximum errors of ± 1.0 % of the laser frequency scan step size, corresponding to peak errors of ± 0.61 m s-1 for typical settings. Overall, the signal estimator is found to perform best for matched cell temperatures, small frequency step size, and high velocity regimes, where the relative bias errors are collectively minimized.

  4. Measuring Global Surface Pressures on a Circulation Control Concept Using Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Watkins, Anthony N.; Lipford, William E.; Leighty, Bradley D.; Goodman, Kyle Z.; Goad, William K.

    2012-01-01

    This report presents the results obtained from the Pressure Sensitive Paint (PSP) technique on a circulation control concept model. This test was conducted at the National Transonic Facility (NTF) at the NASA Langley Research Center. PSP data were collected on the upper wing surface while the facility was operating in cryogenic mode at 227 K (-50 °F). The test envelope for the PSP portion included Mach numbers from 0.7 to 0.8 with angle of attack varying between 0 and 8 degrees and a total pressure of approximately 168 kPa (24.4 psi), resulting in a chord Reynolds number of approximately 15 million. While the PSP results did exhibit high levels of noise in certain conditions (where the oxygen content of the flow was very small), some conditions provided good correlation between the PSP and pressure taps, demonstrating the capability of the PSP technique. This work also served as a risk reduction opportunity for future testing in cryogenic conditions at the NTF.
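
    For readers unfamiliar with how PSP intensity images become surface pressures, the sketch below shows the standard Stern-Volmer form of the data reduction, with coefficients fitted in situ against pressure taps much as the tap comparison in the abstract implies. The coefficients, tap values, and image ratios are hypothetical placeholders, not the NTF calibration.

```python
# Hedged sketch of the Stern-Volmer relation underlying PSP data reduction:
# the wind-off/wind-on intensity ratio maps to pressure, with coefficients
# typically fitted in situ against pressure taps. All numbers are made up
# for illustration; they are not the NTF calibration.
import numpy as np

def pressure_from_intensity(i_ratio, a, b, p_ref):
    """Stern-Volmer: I_ref / I = A + B * (P / P_ref), solved for P."""
    return (i_ratio - a) / b * p_ref

# Hypothetical in-situ calibration against a few pressure taps:
tap_pressure = np.array([95.0, 120.0, 150.0, 168.0])   # kPa
tap_i_ratio  = np.array([0.92, 1.06, 1.23, 1.33])      # I_ref / I at tap locations
p_ref = 168.0                                           # tunnel total pressure, kPa

# Least-squares fit of A and B: i_ratio ~ A + B * (P / P_ref)
B, A = np.polyfit(tap_pressure / p_ref, tap_i_ratio, 1)

surface_i_ratio = np.array([0.95, 1.10, 1.28])          # example image intensity ratios
print(pressure_from_intensity(surface_i_ratio, A, B, p_ref))
```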

  5. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have intensified. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean river basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water provisioning module is investigated in this article. The main novelty of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied to each of the 154 sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each sub-watershed under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and within the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two coefficients need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
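
    A minimal sketch of the Morris-style one-at-a-time elementary-effects screening named in the abstract is given below, using a simplified radial OAT design. The factor names Z, prec and eto come from the abstract; the factor ranges and the toy water-yield function are hypothetical stand-ins, not the InVEST water provisioning module.

```python
# Minimal sketch of Morris-style one-at-a-time (OAT) elementary-effects
# screening for three factors. The toy water-yield function and the factor
# ranges are hypothetical stand-ins for the InVEST water provisioning module.
import numpy as np

rng = np.random.default_rng(42)

names  = ["Z", "prec", "eto"]                 # seasonality, precipitation, evapotranspiration
bounds = np.array([[1.0, 10.0],               # assumed range for Z (dimensionless)
                   [400.0, 900.0],            # assumed range for prec, mm/yr
                   [600.0, 1100.0]])          # assumed range for eto, mm/yr

def toy_water_yield(x):
    """Stand-in model: yield grows with precipitation, falls with evaporative demand."""
    z, prec, eto = x
    return prec * (1.0 - 1.0 / (1.0 + (prec / (z * eto + 1e-9)) ** 2))

def morris_screening(model, bounds, r=50, delta=0.1):
    """Radial OAT design: r random base points, each factor perturbed once by
    delta in normalized [0, 1] factor space. Returns mu* and sigma per factor."""
    k = bounds.shape[0]
    effects = np.empty((r, k))
    for j in range(r):
        u = rng.uniform(0.0, 1.0 - delta, size=k)          # normalized base point
        y0 = model(bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0]))
        for i in range(k):
            u_i = u.copy()
            u_i[i] += delta                                  # perturb one factor at a time
            y1 = model(bounds[:, 0] + u_i * (bounds[:, 1] - bounds[:, 0]))
            effects[j, i] = (y1 - y0) / delta                # elementary effect
    return np.abs(effects).mean(axis=0), effects.std(axis=0, ddof=1)

mu_star, sigma = morris_screening(toy_water_yield, bounds)
for n, m, s in zip(names, mu_star, sigma):
    print(f"{n:>5s}: mu* = {m:10.2f}  sigma = {s:10.2f}")
```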

  6. A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity

    NASA Astrophysics Data System (ADS)

    Tierney, G.; Posselt, D. J.; Booth, J. F.

    2015-12-01

    The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies the atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean-state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies and allow for better-informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in the control parameter space is performed via an ensemble of WRF runs coupled with
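
    One plausible way to build such a perturbed-parameter ensemble is a Latin hypercube design over the control variables named in the abstract; a hedged sketch follows. The parameter ranges and ensemble size are assumptions for illustration only, and the study's actual experimental design may differ; each sampled row would define the environment for one idealized WRF run.

```python
# Illustrative sketch of a perturbed-parameter ensemble design over the
# control variables named in the abstract. The ranges below are hypothetical;
# each sampled row would define the initial environment for one idealized WRF
# simulation (the runs themselves happen outside this script).
from scipy.stats import qmc

names    = ["baroclinicity_K_per_1000km", "bulk_temperature_K", "moisture_scaling"]
l_bounds = [2.0, 270.0, 0.5]                 # assumed lower bounds
u_bounds = [8.0, 290.0, 1.5]                 # assumed upper bounds

sampler = qmc.LatinHypercube(d=len(names), seed=7)
unit_sample = sampler.random(n=50)           # 50 ensemble members in [0, 1]^3
ensemble = qmc.scale(unit_sample, l_bounds, u_bounds)

for member, row in enumerate(ensemble[:3]):  # preview the first few members
    params = ", ".join(f"{n}={v:.2f}" for n, v in zip(names, row))
    print(f"member {member:02d}: {params}")
```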

  7. Retrieval and molecule sensitivity studies for the global ozone monitoring experiment and the scanning imaging absorption spectrometer for atmospheric chartography

    NASA Technical Reports Server (NTRS)

    Chance, Kelly V.; Burrows, John P.; Schneider, Wolfgang

    1991-01-01

    The Global Ozone Monitoring Experiment (GOME) and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) are diode-array-based spectrometers that will make atmospheric constituent and aerosol measurements from European satellite platforms beginning in the mid-1990s. GOME measures the atmosphere in the UV and visible in nadir scanning, while SCIAMACHY performs a combination of nadir, limb, and occultation measurements in the UV, visible, and infrared. A summary is presented of the sensitivity studies that were performed for SCIAMACHY measurements. As the GOME measurement capability is a subset of the SCIAMACHY measurement capability, the nadir, UV, and visible portion of the studies is shown to apply to GOME as well.

  8. Global Security Rule Sets: An Analysis of the Current Global Security Environment and Rule Sets Governing Nuclear Weapons Release

    SciTech Connect

    Mollahan, K; Nattrass, L

    2004-09-30

    America is in a unique position in its history. In maintaining its position as the world's only superpower, the US consistently finds itself taking on the role of a global cop, chief exporter of hard and soft power, and primary impetus for globalization. A view of the current global situation shows an America that can benefit greatly from the effects of globalization and soft power. Conversely, America's power can be reduced significantly if globalization and its soft power are not handled properly. At the same time, America has slowly come to realize that its next major adversary is not a near-peer competitor but terrorism and disconnected nations that seek nuclear capabilities. In dealing with this new threat, America needs to come to terms with its own nuclear arsenal and build a security rule set that will establish for the world explicitly what actions will cause the US to consider nuclear weapons release. This rule set, however, needs to be established with sensitivity to the US's international interests in globalization and soft power. The US must find a way to establish its doctrine governing nuclear weapons release without threatening other peaceful nations in the process.

  9. Global and regional ocean carbon uptake and climate change: sensitivity to a substantial mitigation scenario

    NASA Astrophysics Data System (ADS)

    Vichi, Marcello; Manzini, Elisa; Fogli, Pier Giuseppe; Alessandri, Andrea; Patara, Lavinia; Scoccimarro, Enrico; Masina, Simona; Navarra, Antonio

    2011-11-01

    Under future scenarios of business-as-usual emissions, the ocean storage of anthropogenic carbon is anticipated to decrease because of ocean chemistry constraints and positive feedbacks in the carbon-climate dynamics, whereas it is still unknown how the oceanic carbon cycle will respond to more substantial mitigation scenarios. To evaluate the natural system response to prescribed atmospheric "target" concentrations and assess the response of the ocean carbon pool to these values, two centennial projection simulations have been performed with an Earth System Model that includes a fully coupled carbon cycle, forced in one case with a mitigation scenario and in the other with the SRES A1B scenario. End-of-century ocean uptake with the mitigation scenario is projected to return to the same magnitude of carbon fluxes as simulated in 1960 in the Pacific Ocean and to lower values in the Atlantic. With A1B, the major ocean basins are instead projected to decrease their capacity for carbon uptake globally, as found with simpler carbon cycle models, while at the regional level the response is contrasting. The model indicates that the equatorial Pacific may increase carbon uptake rates in both scenarios, owing to an enhancement of the biological carbon pump evidenced by an increase in Net Community Production (NCP) following changes in the subsurface equatorial circulation and enhanced iron availability from extratropical regions. NCP is a proxy of the bulk organic carbon made available to the higher trophic levels and potentially exportable from the surface layers. The model results indicate that, besides the localized increase in the equatorial Pacific, the NCP of lower trophic levels in the northern Pacific and Atlantic oceans is projected to be halved with respect to the current climate under a substantial mitigation scenario at the end of the twenty-first century. It is thus suggested that changes due to cumulative carbon emissions up to present and the projected concentration

  10. Global Analysis of Several Bands of the CF_4 Molecule

    NASA Astrophysics Data System (ADS)

    Carlos, Mickaël; Gruson, Océane; Boudon, Vincent; Georges, Robert; Pirali, Olivier; Asselin, Pierre

    2016-06-01

    Carbon tetrafluoride is a powerful greenhouse gas, mainly of anthropogenic origin. Its absorption spectrum is, however, still poorly modeled, especially for hot bands in the strongly absorbing ν_3 region. To overcome this problem, we have undertaken a systematic study of all the lower rovibrational transitions of this molecule. In particul