Sample records for identifying sensitive parameters

  1. Parameter screening: the use of a dummy parameter to identify non-influential parameters in a global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Khorashadi Zadeh, Farkhondeh; Nossent, Jiri; van Griensven, Ann; Bauwens, Willy

    2017-04-01

    Parameter estimation is a major concern in hydrological modeling, which may limit the use of complex simulators with a large number of parameters. To support the selection of parameters to include in or exclude from the calibration process, Global Sensitivity Analysis (GSA) is widely applied in modeling practice. Based on the results of GSA, the influential and the non-influential parameters are identified (i.e., parameter screening). Nevertheless, the choice of the screening threshold below which parameters are considered non-influential is a critical issue, which has recently received more attention in the GSA literature. In theory, the sensitivity index of a non-influential parameter has a value of zero. However, since numerical approximations, rather than analytical solutions, are utilized in GSA methods to calculate the sensitivity indices, small but non-zero values may be obtained for the indices of non-influential parameters. In order to assess the threshold that identifies non-influential parameters in GSA methods, we propose to calculate the sensitivity index of a "dummy parameter". This dummy parameter has no influence on the model output, but will have a non-zero sensitivity index, representing the error due to the numerical approximation. Hence, the parameters whose indices are above the sensitivity index of the dummy parameter can be classified as influential, whereas the parameters whose indices are below this index are within the range of the numerical error and should be considered non-influential. To demonstrate the effectiveness of the proposed "dummy parameter approach", 26 parameters of a Soil and Water Assessment Tool (SWAT) model are selected to be analyzed and screened, using the variance-based Sobol' and moment-independent PAWN methods. The sensitivity index of the dummy parameter is calculated from sampled data, without changing the model equations. Moreover, the calculation does not even require additional model evaluations for the Sobol
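    As a concrete illustration of the screening rule described above, the sketch below (a minimal NumPy example, not the authors' SWAT setup; the test function, sample size and uniform priors are assumptions) appends a dummy column that the model never reads, estimates its first-order index from the same samples at no extra model cost, and treats parameters whose indices fall below this numerical-error level as non-influential.

```python
# Minimal sketch of "dummy parameter" screening with the classical Sobol'
# first-order estimator. The model and its inputs are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Stand-in for an expensive simulator: uses columns 0-2 only.
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.01 * x[:, 2]

k_real, N = 3, 100_000
k = k_real + 1                        # last column is the dummy parameter
A = rng.uniform(size=(N, k))
B = rng.uniform(size=(N, k))
yA, yB = model(A), model(B)
f0 = 0.5 * (yA.mean() + yB.mean())
V = np.concatenate([yA, yB]).var()

def first_order(i):
    # Classical estimator: B with column i taken from A.
    BAi = B.copy()
    BAi[:, i] = A[:, i]
    return (np.mean(yA * model(BAi)) - f0 ** 2) / V

S = np.array([first_order(i) for i in range(k_real)])
# The model ignores the dummy column, so model(BA_dummy) == model(B):
# its index needs no additional model runs and measures numerical error only.
S_dummy = (np.mean(yA * yB) - f0 ** 2) / V

print("first-order indices:", S.round(4))
print("dummy (numerical-error) threshold:", round(abs(S_dummy), 4))
print("screened as influential:", S > abs(S_dummy))
```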

  2. Identifying sensitive ranges in global warming precipitation change dependence on convective parameters

    DOE PAGES

    Bernstein, Diana N.; Neelin, J. David

    2016-04-28

    A branch-run perturbed-physics ensemble in the Community Earth System Model estimates impacts of parameters in the deep convection scheme on current hydroclimate and on end-of-century precipitation change projections under global warming. Regional precipitation change patterns prove highly sensitive to these parameters, especially in the tropics with local changes exceeding 3 mm/d, comparable to the magnitude of the predicted change and to differences in global warming predictions among the Coupled Model Intercomparison Project phase 5 models. This sensitivity is distributed nonlinearly across the feasible parameter range, notably in the low-entrainment range of the parameter for turbulent entrainment in the deep convection scheme. This suggests that a useful target for parameter sensitivity studies is to identify such disproportionately sensitive dangerous ranges. Here, the low-entrainment range is used to illustrate the reduction in global warming regional precipitation sensitivity that could occur if this dangerous range can be excluded based on evidence from current climate.

  4. Information sensitivity functions to assess parameter information gain and identifiability of dynamical systems.

    PubMed

    Pant, Sanjay

    2018-05-01

    A new class of functions, called 'information sensitivity functions' (ISFs), is presented that quantifies the information gain about the parameters through the measurements/observables of a dynamical system. These functions can be easily computed through classical sensitivity functions alone and are based on Bayesian and information-theoretic approaches. While marginal information gain is quantified by the decrease in differential entropy, correlations between arbitrary sets of parameters are assessed through mutual information. For individual parameters, these information gains are also presented as marginal posterior variances, and, to assess the effect of correlations, as conditional variances when other parameters are given. The easy-to-interpret ISFs can be used to (a) identify time intervals or regions in dynamical system behaviour where information about the parameters is concentrated; (b) assess the effect of measurement noise on the information gain for the parameters; (c) assess whether sufficient information in an experimental protocol (input, measurements and their frequency) is available to identify the parameters; (d) assess correlation in the posterior distribution of the parameters to identify the sets of parameters that are likely to be indistinguishable; and (e) assess identifiability problems for particular sets of parameters. © 2018 The Authors.
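    For intuition, the sketch below uses a simplified linear-Gaussian setting (an assumption for illustration, not the paper's ISF derivation): given classical sensitivity functions stacked into a matrix, the posterior parameter covariance and the per-parameter information gain (the decrease in marginal differential entropy) follow in closed form. All numerical values are placeholders.

```python
# Linear-Gaussian sketch: information gain about parameters from classical
# sensitivity functions S = d(output)/d(theta) sampled at measurement times.
import numpy as np

S = np.array([[1.0, 0.2],     # hypothetical sensitivities: rows = times,
              [0.8, 0.3],     # columns = parameters
              [0.5, 0.4]])
sigma2 = 0.1 ** 2             # assumed measurement noise variance
P0 = np.diag([1.0, 1.0])      # assumed prior parameter covariance

# Posterior covariance of the linearised Gaussian model.
P = np.linalg.inv(np.linalg.inv(P0) + S.T @ S / sigma2)

# Marginal information gain per parameter: 0.5 * ln(prior var / posterior var).
gain = 0.5 * np.log(np.diag(P0) / np.diag(P))
print("posterior variances:", np.diag(P))
print("marginal information gain (nats):", gain)
```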

  5. Application of identified sensitive physical parameters in reducing the uncertainty of numerical simulation

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2016-04-01

    An important source of uncertainty, which then causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. There are many physical parameters in numerical models in the atmospheric and oceanic sciences, and it would cost a great deal to reduce uncertainties in all physical parameters. Therefore, finding the subset of these parameters that are relatively more sensitive and important, and reducing the errors in the physical parameters in this subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach. The results imply that nonlinear interactions among parameters play a key role in the uncertainty of numerical simulations in arid and semi-arid regions of China compared to those in northern, northeastern and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate not only that our approach offers a new route to identify relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.

  6. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    EPA Science Inventory

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  7. Normalized sensitivities and parameter identifiability of in situ diffusion experiments on Callovo Oxfordian clay at Bure site

    NASA Astrophysics Data System (ADS)

    Samper, J.; Dewonck, S.; Zheng, L.; Yang, Q.; Naves, A.

    Diffusion of inert and reactive tracers (DIR) is an experimental program performed by ANDRA at Bure underground research laboratory in Meuse/Haute Marne (France) to characterize diffusion and retention of radionuclides in Callovo-Oxfordian (C-Ox) argillite. In situ diffusion experiments were performed in vertical boreholes to determine diffusion and retention parameters of selected radionuclides. C-Ox clay exhibits a mild diffusion anisotropy due to stratification. Interpretation of in situ diffusion experiments is complicated by several non-ideal effects caused by the presence of a sintered filter, a gap between the filter and the borehole wall, and an excavation disturbed zone (EdZ). The relevance of such non-ideal effects and their impact on estimated clay parameters have been evaluated with numerical sensitivity analyses and synthetic experiments having similar parameters and geometric characteristics to those of real DIR experiments. Normalized dimensionless sensitivities of tracer concentrations at the test interval have been computed numerically. Tracer concentrations are found to be sensitive to all key parameters. Sensitivities are tracer dependent and vary with time. These sensitivities are useful for identifying which parameters can be estimated with less uncertainty and for finding the times at which tracer concentrations begin to be sensitive to each parameter. Synthetic experiments generated with prescribed known parameters have been interpreted automatically with INVERSE-CORE 2D and used to evaluate the relevance of non-ideal effects and ascertain parameter identifiability in the presence of random measurement errors. Identifiability analysis of synthetic experiments reveals that data noise makes the estimation of clay parameters difficult. Parameters of the clay and the EdZ cannot be estimated simultaneously from noisy data. Models without an EdZ fail to reproduce synthetic data. Proper interpretation of in situ diffusion experiments requires accounting for filter, gap

  8. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.

  9. Classification of hydrological parameter sensitivity and evaluation of parameter transferability across 431 US MOPEX basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi

    The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM) based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other

  10. Two statistics for evaluating parameter identifiability and error reduction

    USGS Publications Warehouse

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
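    A minimal sketch of the first statistic follows (the weighted sensitivity matrix below is random placeholder data, and the solution-space dimension is chosen arbitrarily): after singular value decomposition, the identifiability of parameter i is the length of the projection of its unit axis onto the space spanned by the leading right singular vectors, i.e. a direction cosine between zero and one.

```python
# Sketch of an SVD-based parameter identifiability statistic.
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(20, 5))          # hypothetical weighted sensitivity matrix
U, s, Vt = np.linalg.svd(J, full_matrices=False)

n_solution = 3                        # assumed dimension of the solution space
V_sol = Vt[:n_solution, :].T          # columns span the solution space

# Projection length of each parameter axis e_i onto the solution space,
# which equals the direction cosine and lies in [0, 1].
identifiability = np.sqrt(np.sum(V_sol ** 2, axis=1))
print(identifiability.round(3))
```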

  11. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.

  12. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  13. Global sensitivity analysis for identifying important parameters of nitrogen nitrification and denitrification under model uncertainty and scenario uncertainty

    NASA Astrophysics Data System (ADS)

    Chen, Zhuowei; Shi, Liangsheng; Ye, Ming; Zhu, Yan; Yang, Jinzhong

    2018-06-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. By using a new variance-based global sensitivity analysis method, this paper identifies important parameters for nitrogen reactive transport with simultaneous consideration of these three uncertainties. A combination of three scenarios of soil temperature and two scenarios of soil moisture creates a total of six scenarios. Four alternative models describing the effect of soil temperature and moisture content are used to evaluate the reduction functions used for calculating actual reaction rates. The results show that for the nitrogen reactive transport problem, parameter importance varies substantially among different models and scenarios. The denitrification and nitrification processes are sensitive to soil moisture status rather than to the moisture function parameter. The nitrification process becomes more important at low moisture content and low temperature. However, the changing importance of nitrification activity with respect to temperature change relies strongly on the selected model. Model averaging is suggested to assess the nitrification (or denitrification) contribution by reducing the possible model error. Whether or not biochemical heterogeneity is introduced, a fairly consistent parameter importance ranking is obtained in this study: the optimal denitrification rate (Kden) is the most important parameter; the reference temperature (Tr) is more important than the temperature coefficient (Q10); and the empirical constant in the moisture response function (m) is the least important. The vertical distribution of soil moisture, but not temperature, plays a predominant role in controlling nitrogen reactions. This study provides insight into nitrogen reactive transport modeling and demonstrates an effective strategy for selecting the important parameters when future temperature and soil moisture carry uncertainties or when modelers face multiple ways of establishing nitrogen

  14. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  15. A new approach to identify the sensitivity and importance of physical parameters combination within numerical models using the Lund-Potsdam-Jena (LPJ) model as an example

    NASA Astrophysics Data System (ADS)

    Sun, Guodong; Mu, Mu

    2017-05-01

    An important source of uncertainty, which causes further uncertainty in numerical simulations, is that residing in the parameters describing physical processes in numerical models. Therefore, finding the subset of relatively more sensitive and important parameters among the numerous physical parameters in atmospheric and oceanic models, and reducing the errors in this subset, would be a far more efficient way to reduce the uncertainties involved in simulations. In this context, we present a new approach based on the conditional nonlinear optimal perturbation related to parameter (CNOP-P) method. The approach provides a framework to ascertain the subset of those relatively more sensitive and important parameters among the physical parameters. The Lund-Potsdam-Jena (LPJ) dynamical global vegetation model was utilized to test the validity of the new approach in China. The results imply that nonlinear interactions among parameters play a key role in the identification of sensitive parameters in arid and semi-arid regions of China compared to those in northern, northeastern, and southern China. The uncertainties in the numerical simulations were reduced considerably by reducing the errors of the subset of relatively more sensitive and important parameters. The results demonstrate not only that our approach offers a new route to identify relatively more sensitive and important physical parameters, but also that it is viable to then apply "target observations" to reduce the uncertainties in model parameters.

  16. Unscented Kalman filter with parameter identifiability analysis for the estimation of multiple parameters in kinetic models

    PubMed Central

    2011-01-01

    In systems biology, experimentally measured parameters are not always available, necessitating the use of computationally based parameter estimation. In order to rely on estimated parameters, it is critical to first determine which parameters can be estimated for a given model and measurement set. This is done with parameter identifiability analysis. A kinetic model of the sucrose accumulation in the sugar cane culm tissue developed by Rohwer et al. was taken as a test case model. What differentiates this approach is the integration of an orthogonal-based local identifiability method into the unscented Kalman filter (UKF), rather than using the more common observability-based method which has inherent limitations. It also introduces a variable step size based on the system uncertainty of the UKF during the sensitivity calculation. This method identified 10 out of 12 parameters as identifiable. These ten parameters were estimated using the UKF, which was run 97 times. Throughout the repetitions the UKF proved to be more consistent than the estimation algorithms used for comparison. PMID:21989173

  17. Selecting Sensitive Parameter Subsets in Dynamical Models With Application to Biomechanical System Identification.

    PubMed

    Ramadan, Ahmed; Boss, Connor; Choi, Jongeun; Peter Reeves, N; Cholewicki, Jacek; Popovich, John M; Radcliffe, Clark J

    2018-07-01

    Estimating many parameters of biomechanical systems with limited data may achieve good fit but may also increase 95% confidence intervals in parameter estimates. This results in poor identifiability in the estimation problem. Therefore, we propose a novel method to select sensitive biomechanical model parameters that should be estimated, while fixing the remaining parameters to values obtained from preliminary estimation. Our method relies on identifying the parameters to which the measurement output is most sensitive. The proposed method is based on the Fisher information matrix (FIM). It was compared against the nonlinear least absolute shrinkage and selection operator (LASSO) method to guide modelers on the pros and cons of our FIM method. We present an application identifying a biomechanical parametric model of a head position-tracking task for ten human subjects. Using measured data, our method (1) reduced model complexity by only requiring five out of twelve parameters to be estimated, (2) significantly reduced parameter 95% confidence intervals by up to 89% of the original confidence interval, (3) maintained goodness of fit measured by variance accounted for (VAF) at 82%, (4) reduced computation time, where our FIM method was 164 times faster than the LASSO method, and (5) selected similar sensitive parameters to the LASSO method, where three out of five selected sensitive parameters were shared by FIM and LASSO methods.
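    The sketch below illustrates the general Fisher-information idea under simplifying assumptions (additive Gaussian noise and a precomputed output-sensitivity matrix with placeholder values); it is not the authors' head-tracking pipeline. Parameters with the smallest Cramér-Rao variance bounds are kept for estimation and the rest are fixed.

```python
# FIM-based selection of a sensitive parameter subset (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
S = rng.normal(size=(200, 6))        # hypothetical d(output)/d(theta) matrix
sigma2 = 0.05 ** 2                   # assumed measurement noise variance

FIM = S.T @ S / sigma2               # Fisher information matrix
crlb = np.diag(np.linalg.inv(FIM))   # Cramer-Rao lower bounds on variances

# Smaller bound -> tighter confidence interval -> keep for estimation.
order = np.argsort(crlb)
keep, fix = order[:3], order[3:]     # e.g. estimate 3 of 6 parameters
print("estimate:", keep, "fix:", fix)
```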

  18. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.

  19. Monte Carlo sensitivity analysis of land surface parameters using the Variable Infiltration Capacity model

    NASA Astrophysics Data System (ADS)

    Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten

    2007-06-01

    Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.

  20. Identifying Crucial Parameter Correlations Maintaining Bursting Activity

    PubMed Central

    Doloc-Mihu, Anca; Calabrese, Ronald L.

    2014-01-01

    Recent experimental and computational studies suggest that linearly correlated sets of parameters (intrinsic and synaptic properties of neurons) allow central pattern-generating networks to produce and maintain their rhythmic activity regardless of changing internal and external conditions. To determine the role of correlated conductances in the robust maintenance of functional bursting activity, we used our existing database of half-center oscillator (HCO) model instances of the leech heartbeat CPG. From the database, we identified functional activity groups of burster (isolated neuron) and half-center oscillator model instances and realistic subgroups of each that showed burst characteristics (principally period and spike frequency) similar to the animal. To find linear correlations among the conductance parameters maintaining functional leech bursting activity, we applied Principal Component Analysis (PCA) to each of these four groups. PCA identified a set of three maximal conductances (leak current, Leak; a persistent K current, K2; and a persistent Na+ current, P) that correlate linearly for the two groups of burster instances but not for the HCO groups. Visualizations of HCO instances in a reduced space suggested that there might be non-linear relationships between these parameters for these instances. Experimental studies have shown that period is a key attribute influenced by modulatory inputs and temperature variations in heart interneurons. Thus, we explored the sensitivity of period to changes in maximal conductances of Leak, K2, and P, and we found that for our realistic bursters the effect of these parameters on period could not be assessed because, when they were varied individually, bursting activity was not maintained. PMID:24945358
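    As a rough illustration of the correlation analysis (synthetic conductance values, not the HCO model database), PCA on the maximal-conductance vectors of "functional" instances exposes linearly correlated conductance sets through the leading principal component.

```python
# PCA sketch for detecting linearly correlated conductance parameters.
import numpy as np

rng = np.random.default_rng(3)
g_leak = rng.uniform(5.0, 10.0, size=500)
g_K2 = 1.5 * g_leak + rng.normal(0.0, 0.3, size=500)   # correlated with leak
g_P = rng.uniform(1.0, 4.0, size=500)                  # independent
G = np.column_stack([g_leak, g_K2, g_P])

cov = np.cov(G, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)                 # ascending eigenvalues
explained = eigvals[::-1] / eigvals.sum()
print("variance explained:", explained.round(3))
print("leading component:", eigvecs[:, -1].round(3))   # weights of correlated set
```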

  1. An approach to measure parameter sensitivity in watershed ...

    EPA Pesticide Factsheets

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with the flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. The high and medium flows are impacted by most of the parameters, whereas the low-flow regime is highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land use changes on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
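    A small sketch of the regime-wise measure follows (the flow series, the perturbation and the exact NRMSE normalization are assumptions, not the study's watershed models): NRMSE between a baseline and a perturbed-parameter simulation is computed separately for high, medium and low flow regimes defined from the flow duration curve.

```python
# NRMSE-by-flow-regime sketch for parameter sensitivity screening.
import numpy as np

def nrmse(base, pert):
    # Assumed normalization: RMSE divided by the range of the baseline flows.
    return np.sqrt(np.mean((pert - base) ** 2)) / (base.max() - base.min())

rng = np.random.default_rng(4)
q_base = rng.lognormal(mean=1.0, sigma=0.8, size=3650)        # hypothetical daily flow
q_pert = q_base * rng.normal(1.05, 0.05, size=q_base.size)    # perturbed-parameter run

# Flow regimes from the baseline flow duration curve (exceedance percentiles).
high = q_base >= np.percentile(q_base, 90)
low = q_base <= np.percentile(q_base, 10)
mid = ~(high | low)

for name, mask in [("high", high), ("medium", mid), ("low", low)]:
    print(f"{name} flow NRMSE: {nrmse(q_base[mask], q_pert[mask]):.4f}")
```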

  2. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    PubMed

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. The objective was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature and to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost

  3. Identification of Bouc-Wen hysteretic parameters based on enhanced response sensitivity approach

    NASA Astrophysics Data System (ADS)

    Wang, Li; Lu, Zhong-Rong

    2017-05-01

    This paper aims to identify parameters of Bouc-Wen hysteretic model using time-domain measured data. It follows a general inverse identification procedure, that is, identifying model parameters is treated as an optimization problem with the nonlinear least squares objective function. Then, the enhanced response sensitivity approach, which has been shown convergent and proper for such kind of problems, is adopted to solve the optimization problem. Numerical tests are undertaken to verify the proposed identification approach.

  4. Local identifiability and sensitivity analysis of neuromuscular blockade and depth of hypnosis models.

    PubMed

    Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T

    2014-01-01

    This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  5. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  6. Efficient computation of parameter sensitivities of discrete stochastic chemical reaction networks.

    PubMed

    Rathinam, Muruhan; Sheppard, Patrick W; Khammash, Mustafa

    2010-01-21

    Parametric sensitivity of biochemical networks is an indispensable tool for studying system robustness properties, estimating network parameters, and identifying targets for drug therapy. For discrete stochastic representations of biochemical networks where Monte Carlo methods are commonly used, sensitivity analysis can be particularly challenging, as accurate finite difference computations of sensitivity require a large number of simulations for both nominal and perturbed values of the parameters. In this paper we introduce the common random number (CRN) method in conjunction with Gillespie's stochastic simulation algorithm, which exploits positive correlations obtained by using CRNs for nominal and perturbed parameters. We also propose a new method called the common reaction path (CRP) method, which uses CRNs together with the random time change representation of discrete state Markov processes due to Kurtz to estimate the sensitivity via a finite difference approximation applied to coupled reaction paths that emerge naturally in this representation. While both methods reduce the variance of the estimator significantly compared to independent random number finite difference implementations, numerical evidence suggests that the CRP method achieves a greater variance reduction. We also provide some theoretical basis for the superior performance of CRP. The improved accuracy of these methods allows for much more efficient sensitivity estimation. In two example systems reported in this work, speedup factors greater than 300 and 10,000 are demonstrated.
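    To make the common random number idea concrete, here is a minimal sketch on a hypothetical birth-death process (not one of the paper's example systems): the nominal and perturbed simulations share the same random seed, so their finite difference has far lower variance than with independent streams.

```python
# CRN finite-difference sensitivity with Gillespie's direct method
# on a toy birth-death process (production rate k_prod, degradation k_deg).
import numpy as np

def ssa_final_count(k_prod, k_deg, t_end, seed):
    rng = np.random.default_rng(seed)
    t, x = 0.0, 0
    while True:
        a = np.array([k_prod, k_deg * x])       # reaction propensities
        a0 = a.sum()
        t += rng.exponential(1.0 / a0)          # time to next reaction
        if t > t_end:
            return x
        x += 1 if rng.random() < a[0] / a0 else -1

k, dk, t_end, n_runs = 10.0, 0.5, 5.0, 200
diffs = []
for seed in range(n_runs):
    y_nom = ssa_final_count(k, 1.0, t_end, seed)
    y_pert = ssa_final_count(k + dk, 1.0, t_end, seed)   # same seed -> CRN
    diffs.append((y_pert - y_nom) / dk)
print("CRN estimate of d<X(t_end)>/dk_prod:", np.mean(diffs))
```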

  7. Identifying parameter regions for multistationarity

    PubMed Central

    Conradi, Carsten; Mincheva, Maya; Wiuf, Carsten

    2017-01-01

    Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique equilibrium or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity. PMID:28972969

  8. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  9. Identifying mechanical property parameters of planetary soil using in-situ data obtained from exploration rovers

    NASA Astrophysics Data System (ADS)

    Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun

    2015-12-01

    Identifying the mechanical property parameters of planetary soil based on terramechanics models using in-situ data obtained from autonomous planetary exploration rovers is both an important scientific goal and essential for control strategy optimization and high-fidelity simulations of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupling nature of the involved functions. Three parameter identification methods are presented in this paper to serve different purposes based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on a planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models have a less than 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.

  10. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated and as the combined effects of global warming and local land use changes make urban inhabitants more vulnerable to, e.g., heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the "improved Sobol's global variance decomposition method". The analysis showed that parameters related to road, roof and soil moisture have significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model

  11. Optimization for minimum sensitivity to uncertain parameters

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw

    1994-01-01

    A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was perceived to be successful from comparisons of the optimization results with parametric studies.

  12. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  13. Analysis of sensitivity of simulated recharge to selected parameters for seven watersheds modeled using the precipitation-runoff modeling system

    USGS Publications Warehouse

    Ely, D. Matthew

    2006-01-01

    Recharge is a vital component of the ground-water budget, and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls on ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify the model parameters that have the greatest effect on simulated ground-water recharge and allow the hydrologic system responses to those parameters to be compared and contrasted. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of recharge to any parameters. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow

  14. A simple method for identifying parameter correlations in partially observed linear dynamic models.

    PubMed

    Li, Pu; Vu, Quoc Dong

    2015-12-14

    Parameter estimation represents one of the most significant challenges in systems biology. This is because biological models commonly contain a large number of parameters among which there may be functional interrelationships, thus leading to the problem of non-identifiability. Although identifiability analysis has been extensively studied by analytical as well as numerical approaches, systematic methods for remedying practically non-identifiable models have rarely been investigated. We propose a simple method for identifying pairwise correlations and higher order interrelationships of parameters in partially observed linear dynamic models. This is done by deriving the output sensitivity matrix and analyzing the linear dependencies of its columns. Consequently, analytical relations between the identifiability of the model parameters and the initial conditions as well as the input functions can be achieved. In the case of structural non-identifiability, identifiable combinations can be obtained by solving the resulting homogeneous linear equations. In the case of practical non-identifiability, experimental conditions (i.e., initial conditions and constant control signals) can be provided that are necessary for remedying the non-identifiability and enabling unique parameter estimation. It is noted that the approach does not consider noisy data. In this way, the practical non-identifiability issue, which is common for linear biological models, can be remedied. Several linear compartment models including an insulin receptor dynamics model are taken to illustrate the application of the proposed approach. Both structural and practical identifiability of partially observed linear dynamic models can be clarified by the proposed method. The result of this method provides important information for experimental design to remedy the practical non-identifiability if applicable. The derivation of the method is straightforward and thus the algorithm can be easily implemented into a
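    A minimal sketch of the core idea follows (synthetic sensitivity columns, not the insulin receptor model): linear dependencies among the columns of the output sensitivity matrix show up as near-zero singular values, and the associated right singular vectors give the coefficients of the correlated, non-identifiable parameter combinations.

```python
# Detecting parameter correlations from column dependencies of the
# output sensitivity matrix (illustrative synthetic data).
import numpy as np

rng = np.random.default_rng(5)
s1 = rng.normal(size=50)
s2 = rng.normal(size=50)
# Column 3 is an exact linear combination of columns 1 and 2.
S = np.column_stack([s1, s2, 2.0 * s1 - s2])

U, sv, Vt = np.linalg.svd(S, full_matrices=False)
tol = sv.max() * 1e-10
null_vectors = Vt[sv < tol]       # coefficients of dependent parameter combinations
print("singular values:", sv.round(6))
print("non-identifiable combination(s):", null_vectors.round(3))
```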

  15. Efficient Screening of Climate Model Sensitivity to a Large Number of Perturbed Input Parameters [plus supporting information]

    DOE PAGES

    Covey, Curt; Lucas, Donald D.; Tannahill, John; ...

    2013-07-01

    Modern climate models contain numerous input parameters, each with a range of possible values. Since the volume of parameter space increases exponentially with the number of parameters N, it is generally impossible to directly evaluate a model throughout this space even if just 2-3 values are chosen for each parameter. Sensitivity screening algorithms, however, can identify input parameters having relatively little effect on a variety of output fields, either individually or in nonlinear combination. This can aid both model development and the uncertainty quantification (UQ) process. Here we report results from a parameter sensitivity screening algorithm hitherto untested in climate modeling, the Morris one-at-a-time (MOAT) method. This algorithm drastically reduces the computational cost of estimating sensitivities in a high dimensional parameter space because the sample size grows linearly rather than exponentially with N. It nevertheless samples over much of the N-dimensional volume and allows assessment of parameter interactions, unlike traditional elementary one-at-a-time (EOAT) parameter variation. We applied both EOAT and MOAT to the Community Atmosphere Model (CAM), assessing CAM’s behavior as a function of 27 uncertain input parameters related to the boundary layer, clouds, and other subgrid scale processes. For radiation balance at the top of the atmosphere, EOAT and MOAT rank most input parameters similarly, but MOAT identifies a sensitivity that EOAT underplays for two convection parameters that operate nonlinearly in the model. MOAT’s ranking of input parameters is robust to modest algorithmic variations, and it is qualitatively consistent with model development experience. Supporting information is also provided at the end of the full text of the article.
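
    The Morris one-at-a-time idea itself is compact enough to sketch: parameters are perturbed one at a time along random trajectories through a gridded unit hypercube, and the resulting elementary effects are summarized per parameter by their mean absolute value (mu*) and standard deviation. The generic implementation below uses a hypothetical four-input toy function rather than CAM, and standard Morris settings are assumed; it illustrates the screening principle, not the study's configuration.

        import numpy as np

        def morris_screening(f, n_params, n_traj=20, levels=4, seed=0):
            """Morris one-at-a-time (MOAT) screening on the unit hypercube [0,1]^n_params.
            Returns mu_star (mean |elementary effect|) and sigma (their std) per parameter."""
            rng = np.random.default_rng(seed)
            delta = levels / (2.0 * (levels - 1))          # standard Morris step size
            effects = [[] for _ in range(n_params)]
            for _ in range(n_traj):
                # random starting grid point low enough that +delta stays inside [0, 1]
                x = rng.integers(0, levels // 2, n_params) / (levels - 1)
                y = f(x)
                for j in rng.permutation(n_params):        # move one coordinate at a time
                    x_new = x.copy()
                    x_new[j] += delta
                    y_new = f(x_new)
                    effects[j].append((y_new - y) / delta)
                    x, y = x_new, y_new
            mu_star = np.array([np.mean(np.abs(e)) for e in effects])
            sigma = np.array([np.std(e) for e in effects])
            return mu_star, sigma

        # Toy model: strong linear effect of x0, nonlinear interaction of x1 and x2, x3 inert.
        f = lambda x: 4 * x[0] + 3 * x[1] * x[2] + 0 * x[3]
        mu_star, sigma = morris_screening(f, n_params=4)
        print(mu_star.round(2), sigma.round(2))   # x3 shows mu_star ~ 0 (non-influential)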

  16. Quantifying Parameter Sensitivity, Interaction and Transferability in Hydrologically Enhanced Versions of Noah-LSM over Transition Zones

    NASA Technical Reports Server (NTRS)

    Rosero, Enrique; Yang, Zong-Liang; Wagener, Thorsten; Gulden, Lindsey E.; Yatheendradas, Soni; Niu, Guo-Yue

    2009-01-01

    We use sensitivity analysis to identify the parameters that are most responsible for shaping land surface model (LSM) simulations and to understand the complex interactions in three versions of the Noah LSM: the standard version (STD), a version enhanced with a simple groundwater module (GW), and a version augmented with a dynamic phenology module (DV). We use warm season, high-frequency, near-surface states and turbulent fluxes collected over nine sites in the US Southern Great Plains. We quantify changes in the pattern of sensitive parameters, the amount and nature of the interaction between parameters, and the covariance structure of the distribution of behavioral parameter sets. Using Sobol's total and first-order sensitivity indexes, we show that very few parameters directly control the variance of the model output. Significant parameter interaction occurs, so that not only do the optimal parameter values differ between models, but the relationships between parameters also change. GW decreases parameter interaction and appears to improve model realism, especially at wetter sites. DV increases parameter interaction and decreases identifiability, implying it is overparameterized and/or underconstrained. A case study at a wet site shows GW has two functional modes: one that mimics STD and a second in which GW improves model function by decoupling direct evaporation and baseflow. Unsupervised classification of the posterior distributions of behavioral parameter sets cannot group similar sites based solely on soil or vegetation type, helping to explain why transferability between sites and models is not straightforward. This evidence suggests a priori assignment of parameters should also consider climatic differences.
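
    For readers unfamiliar with the Sobol' indices used here, the first-order and total-order indices can be estimated with the standard Saltelli/Jansen Monte Carlo estimators. The sketch below applies them to a hypothetical toy function on the unit hypercube (not the Noah LSM); a gap between the total and first-order index of an input is the usual signature of the parameter interaction discussed in the abstract.

        import numpy as np

        def sobol_indices(f, n_params, n_samples=4096, seed=0):
            """First-order and total Sobol' indices via the Saltelli/Jansen Monte Carlo
            estimators, sampling the unit hypercube. f maps an (n, d) array to n outputs."""
            rng = np.random.default_rng(seed)
            A = rng.random((n_samples, n_params))
            B = rng.random((n_samples, n_params))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]))
            S1, ST = np.empty(n_params), np.empty(n_params)
            for i in range(n_params):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                  # A with column i taken from B
                fABi = f(ABi)
                S1[i] = np.mean(fB * (fABi - fA)) / var          # first-order index
                ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # total-order index
            return S1, ST

        # Toy model with one additive input, one interacting pair, and one inert input.
        f = lambda X: X[:, 0] + X[:, 1] * X[:, 2]
        S1, ST = sobol_indices(f, n_params=4)
        print(S1.round(2), ST.round(2))   # ST - S1 > 0 for x1, x2 flags their interaction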

  17. Simulation of the right-angle car collision based on identified parameters

    NASA Astrophysics Data System (ADS)

    Kostek, R.; Aleksandrowicz, P.

    2017-10-01

    This article presents the influence of contact parameters on the collision pattern of vehicles. In this case, a crash of two Fiat Cinquecentos with perpendicular median planes was simulated. The first vehicle was driven at a speed of 50 km/h and crashed into the other one, which was standing still. This is a typical collision at junctions. For the first simulation, the default parameters of the V-SIM simulation program were assumed; then, the parameters identified from the crash test of a Fiat Cinquecento published by ADAC (Allgemeiner Deutscher Automobil-Club) were used. Different post-impact movements were observed for the two simulations, which demonstrates the sensitivity of the simulation results to the assumed parameters. Applying the default parameters offered by the program can lead to an inadequate evaluation of the collision, due to its only approximate reconstruction, which in consequence influences the court decision. It was demonstrated how complex it is to reconstruct the pattern of the vehicles’ crash and what problems are faced by expert witnesses who tend to use default parameters.

  18. Parameter identifiability of linear dynamical systems

    NASA Technical Reports Server (NTRS)

    Glover, K.; Willems, J. C.

    1974-01-01

    It is assumed that the system matrices of a stationary linear dynamical system are parametrized by a set of unknown parameters. The question considered here is: when can such a set of unknown parameters be identified from the observed data? Conditions for the local identifiability of a parametrization are derived in three situations: (1) when input/output observations are made, (2) when there exists an unknown feedback matrix in the system, and (3) when the system is assumed to be driven by white noise and only output observations are made. A sufficient condition for global identifiability is also derived.

  19. Field-sensitivity To Rheological Parameters

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid intrinsic time-scale λ, limit-viscosities η0 and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
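
    The economy of the adjoint approach mentioned above can be illustrated on a small discretized analogue: for a quantity of interest Q = c^T u with the state u solving a parameter-dependent linear system A(theta) u = b, a single adjoint solve gives the sensitivity of Q to every parameter at once. The Python sketch below uses a hypothetical 3x3 system and checks one component against a finite difference; it is only an analogue of the field-sensitivity calculation, not the authors' adjoint flow solver.

        import numpy as np

        # Adjoint sensitivity sketch: Q = c^T u with A(theta) u = b. One adjoint solve
        # A^T lam = c gives dQ/dtheta_k = -lam^T (dA/dtheta_k) u for every parameter k.
        def solve_with_adjoint_sensitivity(A, dA_dtheta, b, c):
            u = np.linalg.solve(A, b)            # forward (flow) solve
            lam = np.linalg.solve(A.T, c)        # adjoint solve
            dQ = np.array([-lam @ (dAk @ u) for dAk in dA_dtheta])
            return c @ u, dQ

        # Hypothetical 3x3 system: A(theta) = A0 + theta_0*K0 + theta_1*K1.
        A0 = np.array([[4., 1., 0.], [1., 3., 1.], [0., 1., 2.]])
        K0 = np.diag([1., 0., 0.])
        K1 = np.diag([0., 0., 1.])
        theta = np.array([0.5, 0.2])
        b = np.array([1., 0., 1.])
        c = np.array([0., 0., 1.])
        Q, dQ = solve_with_adjoint_sensitivity(A0 + theta[0]*K0 + theta[1]*K1, [K0, K1], b, c)

        # Finite-difference check on theta_0 (should agree closely with dQ[0]).
        eps = 1e-6
        Q_eps = c @ np.linalg.solve(A0 + (theta[0] + eps)*K0 + theta[1]*K1, b)
        print(dQ, (Q_eps - Q) / eps)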

  20. Assessing the sensitivity of a land-surface scheme to the parameter values using a single column model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pitman, A.J.

    The sensitivity of a land-surface scheme (the Biosphere Atmosphere Transfer Scheme, BATS) to its parameter values was investigated using a single column model. Identifying which parameters were important in controlling the turbulent energy fluxes, temperature, soil moisture, and runoff was dependent upon many factors. In the simulation of a nonmoisture-stressed tropical forest, results were dependent on a combination of reservoir terms (soil depth, root distribution), flux efficiency terms (roughness length, stomatal resistance), and available energy (albedo). If moisture became limited, the reservoir terms increased in importance because the total fluxes predicted depended on moisture availability and not on the rate of transfer between the surface and the atmosphere. The sensitivity shown by BATS depended on which vegetation type was being simulated, which variable was used to determine sensitivity, the magnitude and sign of the parameter change, the climate regime (precipitation amount and frequency), and soil moisture levels and proximity to wilting. The interactions between these factors made it difficult to identify the most important parameters in BATS. Therefore, this paper does not argue that a particular set of parameters is important in BATS; rather, it shows that no general ranking of parameters is possible. It is also emphasized that using 'stand-alone' forcing to examine the sensitivity of a land-surface scheme to perturbations, in either parameters or the atmosphere, is unreliable due to the lack of surface-atmospheric feedbacks.

  1. A modified Leslie-Gower predator-prey interaction model and parameter identifiability

    NASA Astrophysics Data System (ADS)

    Tripathi, Jai Prakash; Meghwani, Suraj S.; Thakur, Manoj; Abbas, Syed

    2018-01-01

    In this work, bifurcation and a systematic approach for the estimation of identifiable parameters of a modified Leslie-Gower predator-prey system with Crowley-Martin functional response and prey refuge are discussed. Global asymptotic stability is discussed by applying the fluctuation lemma. The system undergoes Hopf bifurcation with respect to the intrinsic growth rate of predators (s) and the prey reserve (m). The stability of the Hopf bifurcation is also discussed by calculating the Lyapunov number. A sensitivity analysis of the considered model system with respect to all variables is performed, which also supports our theoretical study. To estimate the unknown parameters from the data, an optimization procedure (pseudo-random search algorithm) is adopted. System responses and phase plots for the estimated parameters are also compared with true, noise-free data. It is found that the system dynamics with the true set of parameter values is similar to that with the estimated parameter values. Numerical simulations are presented to substantiate the analytical findings.

  2. Bayesian inference to identify parameters in viscoelasticity

    NASA Astrophysics Data System (ADS)

    Rappel, Hussein; Beex, Lars A. A.; Bordas, Stéphane P. A.

    2017-08-01

    This contribution discusses Bayesian inference (BI) as an approach to identify parameters in viscoelasticity. The aims are: (i) to show that the prior has a substantial influence for viscoelasticity, (ii) to show that this influence decreases for an increasing number of measurements and (iii) to show how different types of experiments influence the identified parameters and their uncertainties. The standard linear solid model is the material description of interest and a relaxation test, a constant strain-rate test and a creep test are the tensile experiments focused on. The experimental data are artificially created, allowing us to make a one-to-one comparison between the input parameters and the identified parameter values. Besides dealing with the aforementioned issues, we believe that this contribution forms a comprehensible start for those interested in applying BI in viscoelasticity.
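
    As a concrete, purely illustrative companion to the abstract, the sketch below runs a random-walk Metropolis sampler on synthetic relaxation-test data for a standard linear solid with relaxation modulus E(t) = E_inf + (E0 - E_inf) exp(-t/tau). The priors, noise level, true parameter values and proposal width are hypothetical choices, not those of the paper.

        import numpy as np

        # Minimal Metropolis sketch of Bayesian inference for a standard-linear-solid
        # relaxation test. The "measurements" are synthetic; all settings are hypothetical.
        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 10.0, 40)
        model = lambda p: p[1] + (p[0] - p[1]) * np.exp(-t / p[2])      # p = [E0, E_inf, tau]
        p_true, sigma = np.array([5.0, 2.0, 1.5]), 0.05
        data = model(p_true) + sigma * rng.normal(size=t.size)

        def log_post(p):
            if np.any(p <= 0.0) or np.any(p > 20.0):                    # uniform prior box
                return -np.inf
            return -0.5 * np.sum((data - model(p)) ** 2) / sigma ** 2   # Gaussian likelihood

        p = np.array([3.0, 3.0, 3.0])
        lp = log_post(p)
        samples = []
        for _ in range(20000):                                           # random-walk Metropolis
            prop = p + 0.05 * rng.normal(size=3)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < lp_prop - lp:
                p, lp = prop, lp_prop
            samples.append(p)
        samples = np.array(samples[5000:])                               # drop burn-in
        print(samples.mean(axis=0).round(2), samples.std(axis=0).round(3))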

  3. Modelling of intermittent microwave convective drying: parameter sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei

    2017-06-01

    The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside of food. The model is simulated by COMSOL software. Parameter sensitivity is analysed by changing the parameter values by ±20%, with the exception of several parameters. The sensitivity analysis at the given microwave power level shows that the ambient temperature, the effective gas diffusivity, and the evaporation rate constant each have significant effects on the process. However, the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% value change, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve. However, the water saturation at the medium surface and in the centre shows different results. Vapour transfer is the major mass transfer phenomenon that affects the drying process.
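
    The ±20% one-at-a-time scheme described here is simple to reproduce generically. The sketch below perturbs each parameter of a hypothetical drying-time surrogate up and down by 20% and reports the relative change in the output; the surrogate model, parameter names and values are invented for illustration and are not the COMSOL model of the study.

        import numpy as np

        def oat_sensitivity(model, params, rel_change=0.20):
            """One-at-a-time sensitivity: relative change in the model output when each
            parameter is individually increased and decreased by rel_change (e.g. +/-20%)."""
            y0 = model(params)
            table = {}
            for name, value in params.items():
                for sign in (+1, -1):
                    perturbed = dict(params, **{name: value * (1 + sign * rel_change)})
                    table[(name, sign * rel_change)] = (model(perturbed) - y0) / y0
            return table

        # Hypothetical drying-time surrogate in which the gas diffusivity D_g enters
        # strongly and the evaporation-rate constant k_evap only weakly (illustrative only).
        model = lambda p: 1.0 / (p["D_g"] ** 0.8 * (1.0 + 0.05 * p["k_evap"]) * p["h_conv"] ** 0.1)
        params = {"D_g": 2.6e-5, "k_evap": 1000.0, "h_conv": 20.0}
        for key, change in sorted(oat_sensitivity(model, params).items()):
            print(key, round(change, 3))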

  4. Universally Sloppy Parameter Sensitivities in Systems Biology Models

    PubMed Central

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-01-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a “sloppy” spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters. PMID:17922568

  5. Universally sloppy parameter sensitivities in systems biology models.

    PubMed

    Gutenkunst, Ryan N; Waterfall, Joshua J; Casey, Fergal P; Brown, Kevin S; Myers, Christopher R; Sethna, James P

    2007-10-01

    Quantitative computational models play an increasingly important role in modern biology. Such models typically involve many free parameters, and assigning their values is often a substantial obstacle to model development. Directly measuring in vivo biochemical parameters is difficult, and collectively fitting them to other experimental data often yields large parameter uncertainties. Nevertheless, in earlier work we showed in a growth-factor-signaling model that collective fitting could yield well-constrained predictions, even when it left individual parameters very poorly constrained. We also showed that the model had a "sloppy" spectrum of parameter sensitivities, with eigenvalues roughly evenly distributed over many decades. Here we use a collection of models from the literature to test whether such sloppy spectra are common in systems biology. Strikingly, we find that every model we examine has a sloppy spectrum of sensitivities. We also test several consequences of this sloppiness for building predictive models. In particular, sloppiness suggests that collective fits to even large amounts of ideal time-series data will often leave many parameters poorly constrained. Tests over our model collection are consistent with this suggestion. This difficulty with collective fits may seem to argue for direct parameter measurements, but sloppiness also implies that such measurements must be formidably precise and complete to usefully constrain many model predictions. We confirm this implication in our growth-factor-signaling model. Our results suggest that sloppy sensitivity spectra are universal in systems biology models. The prevalence of sloppiness highlights the power of collective fits and suggests that modelers should focus on predictions rather than on parameters.
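
    The "sloppy spectrum" referred to in both records can be made concrete with a few lines of code: the eigenvalues of the least-squares Hessian (J^T J, proportional to the Fisher information for Gaussian noise) of a fitting problem spread roughly evenly over many decades. The sketch below uses the classic sum-of-exponentials example with hypothetical rate constants and time points; it illustrates the phenomenon rather than reproducing any model from the collection studied.

        import numpy as np

        # "Sloppiness": eigenvalues of J^T J for a sum-of-exponentials fit span many decades.
        def jacobian(theta, t, eps=1e-7):
            model = lambda th: np.exp(-np.outer(t, th)).sum(axis=1)   # y(t) = sum_k exp(-theta_k t)
            y0 = model(theta)
            J = np.empty((t.size, theta.size))
            for k in range(theta.size):
                th = theta.copy()
                th[k] += eps
                J[:, k] = (model(th) - y0) / eps
            return J

        t = np.linspace(0.1, 5.0, 30)
        theta = np.array([0.3, 0.5, 0.9, 1.6, 2.8])        # hypothetical rate constants
        J = jacobian(theta, t)
        eigvals = np.linalg.eigvalsh(J.T @ J)[::-1]
        print(np.log10(eigvals / eigvals[0]).round(1))     # decades below the stiffest eigenvalue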

  6. Comment on “Two statistics for evaluating parameter identifiability and error reduction” by John Doherty and Randall J. Hunt

    USGS Publications Warehouse

    Hill, Mary C.

    2010-01-01

    Doherty and Hunt (2009) present important ideas for first-order second-moment sensitivity analysis, but five issues are discussed in this comment. First, considering the composite scaled sensitivity (CSS) jointly with parameter correlation coefficients (PCC) in a CSS/PCC analysis addresses the difficulties with CSS mentioned in the introduction. Second, their new parameter identifiability statistic is actually likely to do a poor job of evaluating parameter identifiability in common situations. The statistic instead performs the very useful role of showing how model parameters are included in the estimated singular value decomposition (SVD) parameters. Its close relation to CSS is shown. Third, the idea from p. 125 that a suitable truncation point for SVD parameters can be identified using the prediction variance is challenged using results from Moore and Doherty (2005). Fourth, the relative error reduction statistic of Doherty and Hunt is shown to belong to an emerging set of statistics here named perturbed calculated variance statistics. Finally, the perturbed calculated variance statistics OPR and PPR mentioned on p. 121 are shown to explicitly include the parameter null-space component of uncertainty. Indeed, OPR and PPR results that account for null-space uncertainty have appeared in the literature since 2000.
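
    For readers unfamiliar with the statistics being contrasted, a CSS/PCC analysis can be computed directly from a weighted Jacobian of simulated values with respect to parameters. The sketch below uses one common definition of the dimensionless scaled sensitivities and the regression-based parameter correlation coefficients, applied to a small hypothetical Jacobian; it is illustrative only and simplifies details (for example, the error variance is omitted from the covariance).

        import numpy as np

        def css_and_pcc(J, params, weights):
            """Composite scaled sensitivities and parameter correlation coefficients from a
            Jacobian J (n_obs x n_par), parameter values and observation weights, using one
            common regression-based definition of the dimensionless scaled sensitivities."""
            w = np.sqrt(weights)[:, None]
            dss = w * J * params[None, :]                  # dimensionless scaled sensitivities
            css = np.sqrt(np.mean(dss ** 2, axis=0))       # composite scaled sensitivity
            cov = np.linalg.inv((w * J).T @ (w * J))       # parameter covariance (up to s^2)
            d = np.sqrt(np.diag(cov))
            pcc = cov / np.outer(d, d)                     # parameter correlation coefficients
            return css, pcc

        # Hypothetical 4-observation, 3-parameter Jacobian; the last two columns are nearly
        # proportional, so PCC flags that pair as highly correlated even though both CSS
        # values look respectable on their own.
        J = np.array([[1.0, 0.8, 0.82], [0.5, 1.1, 1.08], [0.2, 0.9, 0.93], [0.9, 0.4, 0.41]])
        css, pcc = css_and_pcc(J, params=np.array([2.0, 5.0, 5.0]), weights=np.ones(4))
        print(css.round(2))
        print(pcc.round(2))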

  7. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1988-01-01

    Parameter sensitivity is defined as the estimation of changes in the modeling functions and the design variables due to small changes in the fixed parameters of the formulation. Current methods for estimating parameter sensitivities either require difficult-to-obtain second-order information or do not return reliable estimates of the derivatives. Additionally, all the methods assume that the set of active constraints does not change in a neighborhood of the estimation point. If the active set does in fact change, then any extrapolations based on these derivatives may be in error. The objective here is to investigate more efficient new methods for estimating parameter sensitivities when the active set changes. The new method is based on the recursive quadratic programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. It is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity. To handle changes in the active set, a deflection algorithm is proposed for those cases where the new set of active constraints remains linearly independent. For those cases where dependencies occur, a directional derivative is proposed. A few simple examples are included for the algorithm, but extensive testing has not yet been performed.

  8. Preliminary Investigation of Ice Shape Sensitivity to Parameter Variations

    NASA Technical Reports Server (NTRS)

    Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.

    2005-01-01

    A parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this preliminary work was to investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD icing conditions. It was desired to identify the minimum change (threshold) in a parameter value, which yielded an observable change in the ice shape. Liquid Water Content (LWC), drop size distribution (MVD), and tunnel static temperature were varied about a nominal value, and the effects of these parameter changes on the resulting ice shapes were documented. The resulting differences in ice shapes were compared on the basis of qualitative and quantitative criteria (e.g., mass, ice horn thickness, ice horn angle, icing limits, and iced area). This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results, followed by a discussion of recommendations for future research.

  9. Reduction of low frequency vibration of truck driver and seating system through system parameter identification, sensitivity analysis and active control

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Bi, Fengrong; Du, Haiping

    2018-05-01

    This paper aims to develop a 5-degree-of-freedom driver and seating system model for optimal vibration control. A new method for identification of the driver seating system parameters from experimental vibration measurements has been developed. The parameter sensitivity analysis has been conducted considering the random excitation frequency and system parameter uncertainty. The most and least sensitive system parameters for the transmissibility ratio have been identified. Optimised PID controllers have been developed to reduce the driver's body vibration.

  10. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.

  11. Parameter sensitivity analysis of a lumped-parameter model of a chain of lymphangions in series.

    PubMed

    Jamalian, Samira; Bertram, Christopher D; Richardson, William J; Moore, James E

    2013-12-01

    Any disruption of the lymphatic system due to trauma or injury can lead to edema. There is no effective cure for lymphedema, partly because predictive knowledge of lymphatic system reactions to interventions is lacking. A well-developed model of the system could greatly improve our understanding of its function. Lymphangions, defined as the vessel segment between two valves, are the individual pumping units. Based on our previous lumped-parameter model of a chain of lymphangions, this study aimed to identify the parameters that affect the system output the most using a sensitivity analysis. The system was highly sensitive to minimum valve resistance, such that variations in this parameter caused an order-of-magnitude change in time-average flow rate for certain values of imposed pressure difference. Average flow rate doubled when contraction frequency was increased within its physiological range. Optimum lymphangion length was found to be some 13-14.5 diameters. A peak of time-average flow rate occurred when transmural pressure was such that the pressure-diameter loop for active contractions was centered near maximum passive vessel compliance. Increasing the number of lymphangions in the chain improved the pumping in the presence of larger adverse pressure differences. For a given pressure difference, the optimal number of lymphangions increased with the total vessel length. These results indicate that further experiments to estimate valve resistance more accurately are necessary. The existence of an optimal value of transmural pressure may provide additional guidelines for increasing pumping in areas affected by edema.

  12. Understanding the DayCent model: Calibration, sensitivity, and identifiability through inverse modeling

    USGS Publications Warehouse

    Necpálová, Magdalena; Anex, Robert P.; Fienen, Michael N.; Del Grosso, Stephen J.; Castellano, Michael J.; Sawyer, John E.; Iqbal, Javed; Pantoja, Jose L.; Barker, Daniel W.

    2015-01-01

    The ability of biogeochemical ecosystem models to represent agro-ecosystems depends on their correct integration with field observations. We report simultaneous calibration of 67 DayCent model parameters using multiple observation types through inverse modeling using the PEST parameter estimation software. Parameter estimation reduced the total sum of weighted squared residuals by 56% and improved model fit to crop productivity, soil carbon, volumetric soil water content, soil temperature, N2O, and soil NO3− compared to the default simulation. Inverse modeling substantially reduced predictive model error relative to the default model for all model predictions, except for soil NO3− and NH4+. Post-processing analyses provided insights into parameter–observation relationships based on parameter correlations, sensitivity and identifiability. Inverse modeling tools are shown to be a powerful way to systematize and accelerate the process of biogeochemical model interrogation, improving our understanding of model function and the underlying ecosystem biogeochemical processes that they represent.

  13. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products.

  14. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacturing of winding products. PMID:29385048

  15. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the parameters having the most influence facilitates establishing the best values for model parameters, providing useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons of model performance were made by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index changes markedly through upward or downward parameter value alterations, the effect on the species distribution depends on the selection of suitability categories and modelling regions. Two parameters were shown to have the greatest sensitivity, depending on the suitability categories of each species in the study. The results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables to which the model is most sensitive.

  16. Accuracy and sensitivity analysis on seismic anisotropy parameter estimation

    NASA Astrophysics Data System (ADS)

    Yan, Fuyong; Han, De-Hua

    2018-04-01

    There is significant uncertainty in measuring Thomsen’s parameter δ in the laboratory, even though the dimensions and orientations of the rock samples are known. It is expected that more challenges will be encountered in estimating the seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic non-hyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivities to the source-receiver offset, vertical interval velocity error and time picking error. The testing results show that the methodology works perfectly for noise-free synthetic data with short spread length. However, this method is extremely sensitive to the time picking error caused by mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for the deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible that an isotropic formation can be misinterpreted as a strongly anisotropic formation. The sensitivity analysis should provide useful guidance on how to group the reflection events and build a suitable geological model for anisotropy parameter inversion.
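
    For reference, one widely used form of the quartic non-hyperbolic reflection moveout equation for a layered VTI medium (the Alkhalifah-Tsvankin form, quoted here from the general literature rather than from this abstract) is

        t^2(x) = t_0^2 + \frac{x^2}{V_{\mathrm{nmo}}^2}
                 - \frac{2\eta\,x^4}{V_{\mathrm{nmo}}^2\left[t_0^2 V_{\mathrm{nmo}}^2 + (1+2\eta)\,x^2\right]},
        \qquad \eta = \frac{\varepsilon - \delta}{1 + 2\delta},

    where t_0 is the zero-offset two-way traveltime, x the source-receiver offset, V_nmo the normal-moveout velocity, and ε and δ the Thomsen parameters. The anellipticity parameter η is constrained mainly by the long-offset quartic term, which is why spread lengths comparable to or exceeding the reflector depth are needed.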

  17. Influence of parameter values on the oscillation sensitivities of two p53-Mdm2 models.

    PubMed

    Cuba, Christian E; Valle, Alexander R; Ayala-Charca, Giancarlo; Villota, Elizabeth R; Coronado, Alberto M

    2015-09-01

    Biomolecular networks that present oscillatory behavior are ubiquitous in nature. While some design principles for robust oscillations have been identified, it is not well understood how these oscillations are affected when the kinetic parameters are constantly changing or are not precisely known, as often occurs in cellular environments. Many models of diverse complexity level, for systems such as circadian rhythms, cell cycle or the p53 network, have been proposed. Here we assess the influence of hundreds of different parameter sets on the sensitivities of two configurations of a well-known oscillatory system, the p53 core network. We show that, for both models and all parameter sets, the parameter related to the p53 positive feedback, i.e. self-promotion, is the only one that presents sizeable sensitivities on extrema, periods and delay. Moreover, varying the parameter set values to change the dynamical characteristics of the response is more restricted in the simple model, whereas the complex model shows greater tunability. These results highlight the importance of the presence of specific network patterns, in addition to the role of parameter values, when we want to characterize oscillatory biochemical systems.

  18. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    NASA Astrophysics Data System (ADS)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.

  19. An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.

    2015-10-01

    Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.

  20. Identifying tectonic parameters that affect tsunamigenesis

    NASA Astrophysics Data System (ADS)

    van Zelst, I.; Brizzi, S.; Heuret, A.; Funiciello, F.; van Dinther, Y.

    2016-12-01

    The role of tectonics in tsunami generation is at present poorly understood. However, the fact that some regions produce more tsunamis than others indicates that tectonics could influence tsunamigenesis. Here, we complement a global earthquake database that contains geometrical, mechanical, and seismicity parameters of subduction zones with tsunami data. We statistically analyse the database to identify the tectonic parameters that affect tsunamigenesis. The Pearson's product-moment correlation coefficients reveal high positive correlations of 0.65 between, amongst others, the maximum water height of tsunamis and the seismic coupling in a subduction zone. However, these correlations are mainly caused by outliers. The Spearman's rank correlation coefficient results in statistically significant correlations of 0.60 between the number of tsunamis in a subduction zone and subduction velocity (positive correlation) and the sediment thickness at the trench (negative correlation). Interestingly, there is a positive correlation between the latter and tsunami magnitude. These bivariate statistical methods are extended to a binary decision tree (BDT) and multivariate analysis. Using the BDT, the tectonic parameters that distinguish between subduction zones with tsunamigenic and non-tsunamigenic earthquakes are identified. To assess physical causality of the tectonic parameters with regard to tsunamigenesis, we complement our analysis by a numerical study of the most promising parameters using a geodynamic seismic cycle model. We show that the inclusion of sediments on the subducting plate results in an increase in splay fault activity, which could lead to larger vertical seafloor displacements due to their steeper dips and hence a larger tsunamigenic potential. We also show that the splay fault is the preferred rupture path for a strongly velocity strengthening friction regime in the shallow part of the subduction zone, which again increases the tsunamigenic potential.
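
    The contrast the abstract draws between the product-moment and rank correlation coefficients is easy to demonstrate: a single extreme subduction zone can dominate the Pearson coefficient while barely moving the Spearman coefficient. The few lines below use synthetic data purely for illustration.

        import numpy as np
        from scipy import stats

        # One outlier dominates the Pearson coefficient but barely moves the rank-based
        # Spearman coefficient. Data are synthetic and purely illustrative.
        rng = np.random.default_rng(0)
        x = rng.random(40)
        y = 0.2 * x + rng.normal(scale=0.3, size=40)     # weak underlying relationship
        x[0], y[0] = 10.0, 10.0                          # one extreme "outlier" record
        print("Pearson :", round(stats.pearsonr(x, y)[0], 2))
        print("Spearman:", round(stats.spearmanr(x, y)[0], 2))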

  1. Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki, E-mail: pinakc@rediffmail.com

    We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base-pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.

  2. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    A model based on vegetation ecophysiological processes contains many parameters, and reasonable parameter values will greatly improve its simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating the net primary productivity (NPP) of a Larix olgensis forest in Wangqing, Jilin Province. First, with a contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of the L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of the L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interactions between parameters in the BIOME-BGC model. The influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than the interaction effects of the other parameters.

  3. Sensitivity study and parameter optimization of OCD tool for 14nm finFET process

    NASA Astrophysics Data System (ADS)

    Zhang, Zhensheng; Chen, Huiping; Cheng, Shiqiu; Zhan, Yunkun; Huang, Kun; Shi, Yaoming; Xu, Yiping

    2016-03-01

    Optical critical dimension (OCD) measurement has been widely demonstrated as an essential metrology method for monitoring advanced IC processes at the 90 nm technology node and beyond. However, the rapidly shrinking critical dimensions of semiconductor devices and the increasing complexity of the manufacturing process bring more challenges to OCD. The measurement precision of OCD technology relies heavily on the optical hardware configuration, the spectral types, and the inherent interactions between the incident light and various materials with various topological structures; therefore, sensitivity analysis and parameter optimization are critical in OCD applications. This paper presents a method for seeking the measurement configuration with optimum sensitivity, to enhance the metrology precision and reduce the noise impact to the greatest extent. In this work, the sensitivity of different types of spectra was investigated for a series of hardware configurations of incidence angles and azimuth angles. The optimum hardware measurement configuration and spectrum parameters can thus be identified. FinFET structures at the 14 nm technology node were constructed to validate the algorithm. This method provides guidance for estimating the measurement precision before measuring actual device features and will be beneficial for OCD hardware configuration.

  4. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
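
    A simplified Monte Carlo sketch of the idea is given below: each of two processes ("recharge" and "geology") is represented by two competing models, each model with its own random parameter, and a first-order process sensitivity index is formed from the variance of the output mean conditioned on the recharge process (model choice plus parameter) relative to the total variance. The toy models, priors, equal model probabilities and output function are all hypothetical, and this is not the authors' exact estimator.

        import numpy as np

        # Simplified Monte Carlo sketch of a process sensitivity index: variance caused by a
        # process pools both its model choice and its parameters. Everything is hypothetical.
        rng = np.random.default_rng(0)
        N = 2000

        def sample_recharge(n):           # model choice + its parameter, jointly sampled
            use_m2 = rng.random(n) < 0.5
            p = np.where(use_m2, rng.normal(0.3, 0.05, n), rng.uniform(0.1, 0.4, n))
            return np.where(use_m2, 1.2 * p, 0.9 * p + 0.02)          # recharge rate

        def sample_geology(n):
            use_m2 = rng.random(n) < 0.5
            return np.where(use_m2, rng.lognormal(-1.0, 0.3, n), rng.uniform(0.2, 0.6, n))

        head = lambda r, k: r / k         # toy model output (e.g. a simulated head difference)

        # First-order process index for recharge: variance of the conditional mean output,
        # conditioning on the recharge process and averaging over the geology process.
        r_outer = sample_recharge(N)
        cond_mean = np.array([head(r, sample_geology(N)).mean() for r in r_outer])
        total_var = head(sample_recharge(N * 10), sample_geology(N * 10)).var()
        print("PS_recharge ~", round(cond_mean.var() / total_var, 2))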

  5. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  6. An investigation of new methods for estimating parameter sensitivities

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    The method proposed for estimating sensitivity derivatives is based on the Recursive Quadratic Programming (RQP) method, used in conjunction with a differencing formula to produce estimates of the sensitivities. This method is compared to existing methods and is shown to be very competitive in terms of the number of function evaluations required. In terms of accuracy, the method is shown to be equivalent to a modified version of the Kuhn-Tucker method, where the Hessian of the Lagrangian is estimated using the BFS method employed by the RQP algorithm. Initial testing on a test set with known sensitivities demonstrates that the method can accurately calculate the parameter sensitivity.

  7. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
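
    Of the four approaches compared, the standardized regression coefficient (SRC) measure is the simplest to sketch: fit a linear model to the sampled inputs and outputs and scale each coefficient by the ratio of input to output standard deviations, so that the squared SRCs approximate variance shares when the response is nearly linear. The example below uses a hypothetical three-input ensemble, not the Community Land Model data.

        import numpy as np

        def standardized_regression_coefficients(X, y):
            """Standardized regression coefficients (SRC): fit a linear model to sampled
            inputs X (n x d) and outputs y, then scale each coefficient by std(x_j)/std(y).
            SRC_j^2 approximates the variance share of input j for a near-linear response."""
            Xc = np.column_stack([np.ones(len(y)), X])
            beta = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]
            return beta * X.std(axis=0) / y.std()

        # Hypothetical sampled ensemble: the second input is twice as influential as the
        # first, the third is inert, plus some unexplained noise.
        rng = np.random.default_rng(3)
        X = rng.random((500, 3))
        y = 1.0 * X[:, 0] + 2.0 * X[:, 1] + 0.0 * X[:, 2] + rng.normal(scale=0.1, size=500)
        print(standardized_regression_coefficients(X, y).round(2))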

  8. Sensitivity of Dynamical Systems to Banach Space Parameters

    DTIC Science & Technology

    2005-02-13

    We consider general nonlinear dynamical systems in a Banach space with dependence on parameters in a second Banach space. An abstract theoretical ... framework for sensitivity equations is developed. An application to measure dependent delay differential systems arising in a class of HIV models is presented.

  9. Global Sensitivity Analysis and Parameter Calibration for an Ecosystem Carbon Model

    NASA Astrophysics Data System (ADS)

    Safta, C.; Ricciuto, D. M.; Sargsyan, K.; Najm, H. N.; Debusschere, B.; Thornton, P. E.

    2013-12-01

    We present uncertainty quantification results for a process-based ecosystem carbon model. The model employs 18 parameters and is driven by meteorological data corresponding to years 1992-2006 at the Harvard Forest site. Daily Net Ecosystem Exchange (NEE) observations were available to calibrate the model parameters and test the performance of the model. Posterior distributions show good predictive capabilities for the calibrated model. A global sensitivity analysis was first performed to determine the important model parameters based on their contribution to the variance of NEE. We then proceed to calibrate the model parameters in a Bayesian framework. The daily discrepancies between measured and predicted NEE values were modeled as independent and identically distributed Gaussians with prescribed daily variance according to the recorded instrument error. All model parameters were assumed to have uninformative priors with bounds set according to expert opinion. The global sensitivity results show that the rate of leaf fall (LEAFALL) is responsible for approximately 25% of the total variance in the average NEE for 1992-2005. A set of 4 other parameters, Nitrogen use efficiency (NUE), base rate for maintenance respiration (BR_MR), growth respiration fraction (RG_FRAC), and allocation to plant stem pool (ASTEM) contribute between 5% and 12% to the variance in average NEE, while the rest of the parameters have smaller contributions. The posterior distributions, sampled with a Markov Chain Monte Carlo algorithm, exhibit significant correlations between model parameters. However LEAFALL, the most important parameter for the average NEE, is not informed by the observational data, while less important parameters show significant updates between their prior and posterior densities. The Fisher information matrix values, indicating which parameters are most informed by the experimental observations, are examined to augment the comparison between the calibration and global

  10. An investigation of using an RQP based method to calculate parameter sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Beltracchi, Todd J.; Gabriele, Gary A.

    1989-01-01

    Estimation of the sensitivity of problem functions with respect to problem variables forms the basis for many of our modern day algorithms for engineering optimization. The most common application of problem sensitivities has been in the calculation of objective function and constraint partial derivatives for determining search directions and optimality conditions. A second form of sensitivity analysis, parameter sensitivity, has also become an important topic in recent years. By parameter sensitivity, researchers refer to the estimation of changes in the modeling functions and current design point due to small changes in the fixed parameters of the formulation. Methods for calculating these derivatives have been proposed by several authors (Armacost and Fiacco 1974, Sobieski et al 1981, Schmit and Chang 1984, and Vanderplaats and Yoshida 1985). Two drawbacks to estimating parameter sensitivities by current methods have been: (1) the need for second order information about the Lagrangian at the current point, and (2) the estimates assume no change in the active set of constraints. The first of these two problems is addressed here and a new algorithm is proposed that does not require explicit calculation of second order information.

  11. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models of different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
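
    As a rough illustration of how such an index can be written (a hedged sketch in the spirit of the abstract, not necessarily the exact formulation of Dai et al.), let process A be simulated by alternative process models M_A with probabilities P(M_A) and parameters theta_A:

    ```latex
    % Variance-based process sensitivity index: the fraction of total output
    % variance attributable to process A, where the variance over A is taken
    % jointly over its alternative process models and their parameters.
    \begin{equation*}
    PS_{A} \;=\;
    \frac{\operatorname{Var}_{M_{A},\,\theta_{A}}\!\bigl[\,
          \operatorname{E}_{\sim A}(\Delta \mid M_{A},\theta_{A})\bigr]}
         {\operatorname{Var}(\Delta)},
    \end{equation*}
    % with \Delta the model output and E_{\sim A} the expectation over the
    % models and parameters of all other processes.
    ```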

  12. Identifying populations sensitive to environmental chemicals by simulating toxicokinetic variability.

    PubMed

    Ring, Caroline L; Pearce, Robert G; Setzer, R Woodrow; Wetmore, Barbara A; Wambaugh, John F

    2017-09-01

    The thousands of chemicals present in the environment (USGAO, 2013) must be triaged to identify priority chemicals for human health risk research. Most chemicals have little of the toxicokinetic (TK) data that are necessary for relating exposures to tissue concentrations that are believed to be toxic. Ongoing efforts have collected limited, in vitro TK data for a few hundred chemicals. These data have been combined with biomonitoring data to estimate an approximate margin between potential hazard and exposure. The most "at risk" 95th percentile of adults have been identified from simulated populations that are generated either using standard "average" adult human parameters or very specific cohorts such as Northern Europeans. To better reflect the modern U.S. population, we developed a population simulation using physiologies based on distributions of demographic and anthropometric quantities from the most recent U.S. Centers for Disease Control and Prevention National Health and Nutrition Examination Survey (NHANES) data. This allowed incorporation of inter-individual variability, including variability across relevant demographic subgroups. Variability was analyzed with a Monte Carlo approach that accounted for the correlation structure in physiological parameters. To identify portions of the U.S. population that are more at risk for specific chemicals, physiologic variability was incorporated within an open-source high-throughput (HT) TK modeling framework. We prioritized 50 chemicals based on estimates of both potential hazard and exposure. Potential hazard was estimated from in vitro HT screening assays (i.e., the Tox21 and ToxCast programs). Bioactive in vitro concentrations were extrapolated to doses that produce equivalent concentrations in body tissues using a reverse dosimetry approach in which generic TK models are parameterized with: 1) chemical-specific parameters derived from in vitro measurements and predicted from chemical structure; and 2) with
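
    A minimal Monte Carlo reverse-dosimetry sketch in the spirit of this workflow is given below. It uses a generic steady-state clearance combination (renal plus well-stirred hepatic) and made-up distributions and chemical inputs; it is not the authors' httk implementation, and the units are only illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000                                   # simulated individuals

    # Correlated physiological variability (illustrative log-normal draws; a real
    # study would use NHANES-derived distributions as in the abstract).
    mean = np.log([0.11, 1.5])                   # [GFR, liver blood flow] in L/min (assumed)
    cov = [[0.04, 0.01], [0.01, 0.04]]
    gfr, q_liver = np.exp(rng.multivariate_normal(mean, cov, n)).T

    # Chemical-specific in vitro inputs (assumed values).
    fub, cl_int = 0.05, 2.0                      # fraction unbound, intrinsic clearance (L/min)

    # Steady-state concentration for a unit dose rate, combining renal and
    # well-stirred hepatic clearance (a common HTTK-style approximation).
    dose_rate = 1.0                              # mg/kg/day, reference dose
    cl_total = gfr * fub + q_liver * fub * cl_int / (q_liver + fub * cl_int)
    css = dose_rate / cl_total                   # concentration per unit dose (illustrative units)

    # Reverse dosimetry: dose that reproduces a bioactive in vitro concentration.
    ac50 = 10.0                                  # assumed bioactive concentration
    equivalent_dose = ac50 / css                 # mg/kg/day for each simulated individual
    print("dose at which the most sensitive 5% reach the bioactive concentration:",
          np.percentile(equivalent_dose, 5))
    ```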

  13. Identification of sensitive parameters in the modeling of SVOC reemission processes from soil to atmosphere.

    PubMed

    Loizeau, Vincent; Ciffroy, Philippe; Roustan, Yelva; Musson-Genon, Luc

    2014-09-15

    Semi-volatile organic compounds (SVOCs) are subject to Long-Range Atmospheric Transport because of successive transport-deposition-reemission processes. Several experimental data available in the literature suggest that soil is a non-negligible contributor of SVOCs to the atmosphere. Coupling soil and atmosphere in integrated models and simulating reemission processes can therefore be essential for estimating the atmospheric concentrations of several pollutants. However, the sources of uncertainty and variability are multiple (soil properties, meteorological conditions, chemical-specific parameters) and can significantly influence the determination of reemissions. In order to identify the key parameters in reemission modeling and their effect on global modeling uncertainty, we conducted a sensitivity analysis targeted on the 'reemission' output variable. Different parameters were tested, including soil properties, partition coefficients and meteorological conditions. We performed EFAST sensitivity analysis for four chemicals (benzo-a-pyrene, hexachlorobenzene, PCB-28 and lindane) and different spatial scenarios (regional and continental scales). Partition coefficients between air, solid and water phases are influential, depending on the precision of data and the global behavior of the chemical. Reemissions showed a lower sensitivity to soil parameters (soil organic matter and water contents at field capacity and wilting point). A mapping of these parameters at a regional scale is sufficient to correctly estimate reemissions when compared to other sources of uncertainty. Copyright © 2014 Elsevier B.V. All rights reserved.
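
    The sketch below shows what an (e)FAST screening of this kind looks like with the open-source SALib package; the three placeholder factors and the toy reemission surrogate are illustrative only and do not reproduce the paper's model or parameter ranges.

    ```python
    import numpy as np
    from SALib.sample import fast_sampler
    from SALib.analyze import fast

    problem = {
        "num_vars": 3,
        "names": ["log_Koa", "soil_organic_matter", "soil_water_content"],
        "bounds": [[6.0, 12.0], [0.01, 0.10], [0.05, 0.40]],
    }

    X = fast_sampler.sample(problem, 1000)           # FAST parameter samples

    def reemission(x):                               # toy surrogate, not the real model
        log_koa, f_om, theta = x
        return np.exp(-0.5 * log_koa) * (1.0 - f_om) * (1.0 - theta)

    Y = np.apply_along_axis(reemission, 1, X)
    Si = fast.analyze(problem, Y)                    # first-order and total indices
    print(dict(zip(problem["names"], Si["S1"])))
    ```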

  14. Assessment of Wind Parameter Sensitivity on Extreme and Fatigue Wind Turbine Loads

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N; Sethuraman, Latha; Jonkman, Jason

    Wind turbines are designed using a set of simulations to ascertain the structural loads that the turbine could encounter. While mean hub-height wind speed is considered to vary, other wind parameters such as turbulence spectra, shear, veer, spatial coherence, and component correlation are fixed or conditional values that, in reality, could have different characteristics at different sites and have a significant effect on the resulting loads. This paper therefore seeks to assess the sensitivity of the resulting ultimate and fatigue loads on the turbine to these different wind parameters during normal operational conditions. Eighteen different wind parameters are screened using an Elementary Effects approach with radial points. As expected, the results show a high sensitivity of the loads to the turbulence standard deviation in the primary wind direction, but the sensitivity to wind shear is often much greater. To a lesser extent, other wind parameters that drive loads include the coherence in the primary wind direction and veer.
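
    For readers who want to see the screening mechanics, here is a compact, hand-rolled radial Elementary Effects sketch (mu* and sigma per input); the toy response function stands in for the aeroelastic load simulator and the bounds are placeholders.

    ```python
    import numpy as np

    def radial_elementary_effects(f, bounds, n_base=50, delta=0.1, seed=0):
        """Radial one-at-a-time screening: one base point per repetition,
        one perturbed run per input (no clamping at the bounds; illustrative)."""
        rng = np.random.default_rng(seed)
        bounds = np.asarray(bounds, float)
        lo, span = bounds[:, 0], bounds[:, 1] - bounds[:, 0]
        k = len(bounds)
        ee = np.empty((n_base, k))
        for r in range(n_base):
            x0 = lo + span * rng.random(k)           # radial base point
            f0 = f(x0)
            for i in range(k):                       # perturb one parameter at a time
                xi = x0.copy()
                xi[i] += delta * span[i]
                ee[r, i] = (f(xi) - f0) / delta      # elementary effect (scaled step)
        return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

    # Example with a toy "load" model in place of the turbine simulator:
    mu_star, sigma = radial_elementary_effects(
        lambda x: 3.0 * x[0] + x[1] ** 2 + 0.1 * x[2],
        bounds=[[0, 1], [0, 1], [0, 1]])
    print(mu_star)
    ```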

  15. Identifiability of altimetry-based rating curve parameters in function of river morphological parameters

    NASA Astrophysics Data System (ADS)

    Paris, Adrien; André Garambois, Pierre; Calmant, Stéphane; Paiva, Rodrigo; Walter, Collischonn; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Bonnet, Marie-Paule; Seyler, Frédérique; Monnier, Jérôme

    2016-04-01

    Estimating river discharge for ungauged river reaches from satellite measurements is not straightforward given the nonlinearity of flow behavior with respect to measurable and non-measurable hydraulic parameters. In fact, current satellite datasets do not give access to key parameters such as river bed topography and roughness. A unique set of almost one thousand altimetry-based rating curves was built by fitting ENVISAT and Jason-2 water stages to discharges obtained from the MGB-IPH rainfall-runoff model in the Amazon basin. These rated discharges were successfully validated against simulated discharges (Ens = 0.70) and in-situ discharges (Ens = 0.71) and are not mission-dependent. The rating curve is written as Q = a(Z - Z0)^b * sqrt(S), with Z the water surface elevation and S its slope, both obtained from satellite altimetry, a and b the power-law coefficient and exponent, and Z0 the river bed elevation such that Q(Z0) = 0. For several river reaches in the Amazon basin where ADCP measurements are available, the Z0 values are fairly well validated with a relative error lower than 10%. The present contribution aims at relating the identifiability and the physical meaning of a, b and Z0 given various hydraulic and geomorphologic conditions. Synthetic river bathymetries sampling a wide range of rivers and inflow discharges are used to perform twin experiments. A shallow water model is run for generating synthetic satellite observations, and then rating curve parameters are determined for each river section using an MCMC algorithm. Through these twin experiments, it is shown that the rating curve formulation with water surface slope, i.e. closer to the Manning equation form, improves parameter identifiability. The compensation between parameters is limited, especially for reaches with little water surface variability. Rating curve parameters are analyzed for riffles and pools for small to large rivers, different river slopes and cross section shapes. It is shown that the river bed
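
    A small sketch of that rating-curve form is given below: it evaluates Q = a(Z - Z0)^b * sqrt(S) and fits (a, b, Z0) to synthetic data by a simple grid search over Z0 with log-linear least squares, as a stand-in for the MCMC estimation used in the study.

    ```python
    import numpy as np

    def rating_curve(Z, S, a, b, Z0):
        return a * (Z - Z0) ** b * np.sqrt(S)

    def fit_rating_curve(Z, S, Q, z0_grid):
        """Grid over Z0; for each Z0, log Q - 0.5 log S = log a + b log(Z - Z0)."""
        best = None
        y = np.log(Q) - 0.5 * np.log(S)
        for z0 in z0_grid:
            if np.any(Z <= z0):
                continue                               # Z0 must lie below all stages
            A = np.vstack([np.ones_like(Z), np.log(Z - z0)]).T
            (loga, b), *_ = np.linalg.lstsq(A, y, rcond=None)
            sse = np.sum((A @ [loga, b] - y) ** 2)
            if best is None or sse < best[0]:
                best = (sse, np.exp(loga), b, z0)
        return best[1:]                                # a, b, Z0

    # Synthetic stages, slopes and discharges (placeholder values):
    Z = np.array([10.2, 11.0, 12.5, 13.1])             # water surface elevation (m)
    S = np.array([8e-5, 9e-5, 1.1e-4, 1.2e-4])         # water surface slope
    Q = rating_curve(Z, S, a=45.0, b=1.6, Z0=8.0)
    print(fit_rating_curve(Z, S, Q, z0_grid=np.linspace(5.0, 9.5, 50)))
    ```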

  16. Sensitivity to Rhythmic Parameters in Dyslexic Children: A Comparison of Hungarian and English

    ERIC Educational Resources Information Center

    Suranyi, Zsuzsanna; Csepe, Valeria; Richardson, Ulla; Thomson, Jennifer M.; Honbolygo, Ferenc; Goswami, Usha

    2009-01-01

    It has been proposed that sensitivity to the parameters underlying speech rhythm may be important in setting up well-specified phonological representations in the mental lexicon. However, different acoustic parameters may contribute differentially to rhythm and stress in different languages. Here we contrast sensitivity to one such cue, amplitude…

  17. Impact of the time scale of model sensitivity response on coupled model parameter estimation

    NASA Astrophysics Data System (ADS)

    Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu

    2017-11-01

    That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.

  18. The Early Eocene equable climate problem: can perturbations of climate model parameters identify possible solutions?

    PubMed

    Sagoo, Navjit; Valdes, Paul; Flecker, Rachel; Gregoire, Lauren J

    2013-10-28

    Geological data for the Early Eocene (56-47.8 Ma) indicate extensive global warming, with very warm temperatures at both poles. However, despite numerous attempts to simulate this warmth, there are remarkable data-model differences in the prediction of these polar surface temperatures, resulting in the so-called 'equable climate problem'. In this paper, for the first time, a perturbed-parameter ensemble approach, varying climate-sensitive model parameters, has been applied to modelling the Early Eocene climate. We performed more than 100 simulations with perturbed physics parameters, and identified two simulations that have an optimal fit with the proxy data. We have simulated the warmth of the Early Eocene at 560 ppmv CO2, which is a much lower CO2 level than that used by many other models. We investigate the changes in atmospheric circulation, cloud properties and ocean circulation that are common to these simulations and how they differ from the remaining simulations in order to understand what mechanisms contribute to the polar warming. The parameter set from one of the optimal Early Eocene simulations also produces a favourable fit for the last glacial maximum boundary climate and outperforms the control parameter set for the present day. Although this does not 'prove' that this model is correct, it is very encouraging that there is a parameter set that creates a climate model able to simulate well very different palaeoclimates and the present-day climate. Interestingly, to achieve the great warmth of the Early Eocene this version of the model does not have a strong future climate change Charney climate sensitivity. It produces a Charney climate sensitivity of 2.7 °C, whereas the mean value of the 18 models in the IPCC Fourth Assessment Report (AR4) is 3.26 °C ± 0.69 °C. Thus, this value is within the range and below the mean of the models included in the AR4.

  19. SU-E-T-249: Determining the Sensitivity of Beam Profile Parameters for Detecting Energy Changes in Flattening Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mooney, K; Yaddanapudi, S; Mutic, S

    2015-06-15

    Purpose: To identify the beam profile parameters that can be used to detect energy changes in flattening filter-free photon beams. Methods: Flattening filter-free beam profiles (inline, crossline, and diagonals) were measured for multiple field sizes (25×25 cm and 10×10 cm) at 6 MV on a clinical system (Truebeam, Varian Medical Systems, Palo Alto, CA). Profiles were acquired for baseline energy and detuned beams by changing the bending magnet current (BMC), above and below baseline. The following profile parameters were measured: flatness (off-axis ratio at 80% of field size), symmetry, uniformity, slope, and the off-axis ratio (OAR) at several off-axis distances. Tolerance values were determined from repeated measurements. Each parameter was evaluated for sensitivity to the induced beam changes, and the minimum detectable BMC change was calculated for each parameter by calculating the change in BMC that would result in a change in the parameter above the measurement tolerance. Results: Tolerance values for the parameters were: Flatness ≤ 0.1%; Symmetry ≤ 0.4%; Uniformity ≤ 0.01%; Slope ≤ 0.001%/mm. The measurements made with a field size of 25 cm and a depth of d = 1.5 cm showed the greatest sensitivity to bending magnet current variations. Uniformity had the highest sensitivity, able to detect a change in BMC of 0.02 A. The OARs and slope were sensitive to the magnitude and direction of BMC change. The sensitivity of the flatness parameter was BMC = 0.04 A; slope was sensitive to BMC = 0.05 A. The sensitivity decreased for OARs measured closer to the central axis: BMC(8 cm) = 0.23 A; BMC(5 cm) = 0.47 A; BMC(2 cm) = 1.35 A. Symmetry was not sensitive to changes in BMC. Conclusion: These tests allow for better QA of FFF beams by setting tolerance levels to beam parameter baseline values which reflect variations in machine calibration. Uniformity is most sensitive to BMC changes, while OARs provide information about the magnitude and direction of miscalibration.

  20. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories - terrain, hydrometeorology, field management, and soil - were selected for a sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally sensitive, less sensitive to the sediment output, and insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive to the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all the results except runoff, whereas the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in the Zhongtian watershed show good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also demonstrate that the sensitivity analysis is practicable for parameter adjustment, confirm the model's applicability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
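
    For reference, a commonly used form of the one-at-a-time perturbation sensitivity coefficient referred to here is the relative change in output per relative change in parameter (the paper's exact normalization may differ):

    ```latex
    % Dimensionless perturbation sensitivity coefficient for parameter x_i and
    % model output Y, for a small one-at-a-time perturbation \Delta x_i:
    \begin{equation*}
    S_i \;=\; \frac{\left(Y(x_i + \Delta x_i) - Y(x_i)\right)/Y(x_i)}
                   {\Delta x_i / x_i}.
    \end{equation*}
    ```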

  1. Behavior of sensitivities in the one-dimensional advection-dispersion equation: Implications for parameter estimation and sampling design

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time with a high sensitivity to the parameter. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases and the consequent estimate of velocity tends to have lower variance. (3) The frequency of sampling must be “in phase” with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise and thus have limited value in predicting variance in parameter estimates among designs. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters. (8) The time and space interval over which an observation point is sensitive to a given parameter depends on the actual values of the parameters in the underlying physical system.
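
    For context, the governing equation and the sensitivities discussed can be written as follows (a standard form of the one-dimensional advection-dispersion equation; boundary and source terms are omitted):

    ```latex
    % 1-D advection-dispersion equation (v = velocity, D = dispersion
    % coefficient) and the sensitivities of concentration to each parameter:
    \begin{align*}
    \frac{\partial C}{\partial t} &= -\,v\,\frac{\partial C}{\partial x}
                                     + D\,\frac{\partial^{2} C}{\partial x^{2}},\\[2pt]
    s_{v}(x,t) &= \frac{\partial C(x,t)}{\partial v}, \qquad
    s_{D}(x,t)  = \frac{\partial C(x,t)}{\partial D}.
    \end{align*}
    ```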

  2. Pattern statistics on Markov chains and sensitivity to parameter estimation.

    PubMed

    Nuel, Grégory

    2006-10-17

    In order to compute pattern statistics in computational biology a Markov model is commonly used to take into account the sequence composition. Usually its parameter must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what are the consequences of this variability on pattern studies (finding the most over-represented words in a genome, the most significant common words to a set of sequences,...). In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of sigma, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation.
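
    The delta-method expression referred to here has the standard first-order form below (f is the pattern statistic as a function of the estimated Markov parameters; the paper's notation may differ):

    ```latex
    % First-order (delta-method) approximation of the standard deviation of a
    % pattern statistic f evaluated at the estimated parameters \hat\theta:
    \begin{equation*}
    \sigma \;\approx\; \sqrt{\nabla f(\hat\theta)^{\top}\,
                             \Sigma_{\hat\theta}\,
                             \nabla f(\hat\theta)},
    \end{equation*}
    % where \Sigma_{\hat\theta} is the covariance matrix of the parameter
    % estimates.
    ```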

  3. Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?

    PubMed

    Ershadi, Saba; Shayanfar, Ali

    2018-03-22

    The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
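
    For reference, the calibration-curve definitions most commonly used for these two parameters are given below (one of the several calculation conventions compared in the paper):

    ```latex
    % sigma_blank = standard deviation of blank responses, m = slope of the
    % calibration curve:
    \begin{equation*}
    \mathrm{LOD} = \frac{3.3\,\sigma_{\mathrm{blank}}}{m},
    \qquad
    \mathrm{LOQ} = \frac{10\,\sigma_{\mathrm{blank}}}{m}.
    \end{equation*}
    ```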

  4. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, that consider carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporated a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.

  5. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
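
    A minimal GLUE-style sketch is shown below: Monte Carlo runs are weighted by an informal likelihood (here the Nash-Sutcliffe efficiency), non-behavioural runs are discarded, and weighted quantiles give the flux uncertainty bounds. The threshold, likelihood choice, and data shapes are illustrative, not those of the PSUADE/CSLM study.

    ```python
    import numpy as np

    def glue_bounds(sim_ensemble, obs, threshold=0.5, quantiles=(0.05, 0.95)):
        """sim_ensemble: (n_runs, n_times) daily-averaged simulated fluxes;
        obs: (n_times,) daily-averaged eddy-covariance observations."""
        # Nash-Sutcliffe efficiency as the informal likelihood measure
        nse = 1.0 - np.sum((sim_ensemble - obs) ** 2, axis=1) \
                  / np.sum((obs - obs.mean()) ** 2)
        behavioural = nse > threshold                     # keep behavioural runs only
        w = nse[behavioural] - threshold                  # rescaled likelihood weights
        w /= w.sum()
        sims = sim_ensemble[behavioural]
        # Weighted quantiles per time step (simple sort-and-cumulate approach)
        lower, upper = [], []
        for t in range(sims.shape[1]):
            order = np.argsort(sims[:, t])
            cdf = np.cumsum(w[order])
            lower.append(sims[order, t][np.searchsorted(cdf, quantiles[0])])
            upper.append(sims[order, t][np.searchsorted(cdf, quantiles[1])])
        return np.array(lower), np.array(upper)
    ```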

  6. Pattern statistics on Markov chains and sensitivity to parameter estimation

    PubMed Central

    Nuel, Grégory

    2006-01-01

    Background: In order to compute pattern statistics in computational biology a Markov model is commonly used to take into account the sequence composition. Usually its parameter must be estimated. The aim of this paper is to determine how sensitive these statistics are to parameter estimation, and what are the consequences of this variability on pattern studies (finding the most over-represented words in a genome, the most significant common words to a set of sequences,...). Results: In the particular case where pattern statistics (overlap counting only) are computed through binomial approximations, we use the delta-method to give an explicit expression of σ, the standard deviation of a pattern statistic. This result is validated using simulations and a simple pattern study is also considered. Conclusion: We establish that the use of high-order Markov models could easily lead to major mistakes due to the high sensitivity of pattern statistics to parameter estimation. PMID:17044916

  7. Identifiability of sorption parameters in stirred flow-through reactor experiments and their identification with a Bayesian approach.

    PubMed

    Nicoulaud-Gouin, V; Garcia-Sanchez, L; Giacalone, M; Attard, J C; Martin-Garin, A; Bois, F Y

    2016-10-01

    This paper addresses the methodological conditions - particularly experimental design and statistical inference - ensuring the identifiability of sorption parameters from breakthrough curves measured during stirred flow-through reactor experiments, also known as continuous flow stirred-tank reactor (CSTR) experiments. The equilibrium-kinetic (EK) sorption model was selected as the nonequilibrium parameterization embedding the Kd approach. Parameter identifiability was studied formally on the equations governing outlet concentrations. It was also studied numerically on 6 simulated CSTR experiments on a soil with known equilibrium-kinetic sorption parameters. EK sorption parameters cannot be identified from a single breakthrough curve of a CSTR experiment, because Kd,1 and k- were diagnosed as collinear. For pairs of CSTR experiments, Bayesian inference allowed selection of the correct models of sorption and error among the alternatives. Bayesian inference was conducted with the SAMCAT software (Sensitivity Analysis and Markov Chain simulations Applied to Transfer models), which launched the simulations through the embedded simulation engine GNU-MCSim and automated their configuration and post-processing. Experimental designs consisting of varying flow rates between experiments that reach equilibrium at the contamination stage were found to be optimal, because they simultaneously gave accurate sorption parameters and predictions. Bayesian results were comparable to those of the maximum likelihood method but avoided convergence problems; the marginal likelihood allowed comparison of all models, and credible intervals directly gave the uncertainty of the sorption parameters θ. Although these findings are limited to the specific conditions studied here, in particular the considered sorption model, the chosen parameter values and error structure, they help in the conception and analysis of future CSTR experiments with radionuclides whose kinetic behaviour is suspected. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. MXLKID: a maximum likelihood parameter identifier. [In LRLTRAN for CDC 7600

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gavel, D.T.

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC 7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables.

  9. Sensitive zone parameters and curvature radius evaluation for polymer optical fiber curvature sensors

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria

    2018-03-01

    Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature, and liquid level, among others. However, to enhance sensitivity, many polymer optical fiber curvature sensors based on intensity variation require a lateral section. The lateral section length, depth, and surface roughness have a great influence on the sensor sensitivity, hysteresis, and linearity. Moreover, the sensor curvature radius increases the stress on the fiber, which leads to variations in the sensor behavior. This paper presents an analysis relating the curvature radius and the lateral section length, depth and surface roughness to the sensor sensitivity, hysteresis and linearity for a POF curvature sensor. Results show a strong correlation between the behavior of these design parameters and the performance of intensity-variation-based sensors. Furthermore, there is a trade-off among the sensitive zone length, depth, surface roughness, and curvature radius with respect to the desired sensor performance parameters, namely minimum hysteresis, maximum sensitivity, and maximum linearity. The optimization of these parameters yields a sensor with a sensitivity of 20.9 mV/°, a linearity of 0.9992 and a hysteresis below 1%, which represents better performance compared with the non-optimized sensor.

  10. Sensitivity of tire response to variations in material and geometric parameters

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    A computational procedure is presented for evaluating the analytic sensitivity derivatives of the tire response with respect to material and geometric parameters of the tire. The tire is modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The computational procedure is applied to the Space Shuttle nose-gear tire subjected to uniform inflation pressure. Numerical results are presented showing the sensitivity of the different response quantities to variations in the material characteristics of both the cord and the rubber.

  11. Identification of the most sensitive parameters in the activated sludge model implemented in BioWin software.

    PubMed

    Liwarska-Bizukojc, Ewa; Biernacki, Rafal

    2010-10-01

    In order to simulate biological wastewater treatment processes, data concerning wastewater and sludge composition, process kinetics and stoichiometry are required. Selection of the most sensitive parameters is an important step of model calibration. The aim of this work is to verify the predictability of the activated sludge model, which is implemented in BioWin software, and to select its most influential kinetic and stoichiometric parameters with the help of a sensitivity analysis approach. Two different measures of sensitivity are applied: the normalised sensitivity coefficient (S_i,j) and the mean square sensitivity measure (δ_j^msqr). It was found that 17 kinetic and stoichiometric parameters of the BioWin activated sludge (AS) model can be regarded as influential on the basis of the S_i,j calculations. Half of the influential parameters are associated with the growth and decay of phosphorus accumulating organisms (PAOs). The identification of the set of the most sensitive parameters should support the users of this model and initiate the elaboration of determination procedures for those parameters for which this has not yet been done. Copyright 2010 Elsevier Ltd. All rights reserved.
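
    For reference, one common way these two measures are defined is shown below (y_i is model output i, x_j is parameter j, and n is the number of outputs considered); the exact definitions used in the paper may differ slightly:

    ```latex
    % Normalised (relative) sensitivity coefficient and its mean-square
    % aggregate over outputs:
    \begin{equation*}
    S_{i,j} \;=\; \frac{\partial y_i}{\partial x_j}\,\frac{x_j}{y_i},
    \qquad
    \delta_{j}^{\mathrm{msqr}} \;=\; \sqrt{\frac{1}{n}\sum_{i=1}^{n} S_{i,j}^{\,2}}.
    \end{equation*}
    ```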

  12. Adjusting the specificity of an engine map based on the sensitivity of an engine control parameter relative to a performance variable

    DOEpatents

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2014-10-28

    Methods and systems for engine control optimization are provided. A first and a second operating condition of a vehicle engine are detected. An initial value is identified for a first and a second engine control parameter corresponding to a combination of the detected operating conditions according to a first and a second engine map look-up table. The initial values for the engine control parameters are adjusted based on a detected engine performance variable to cause the engine performance variable to approach a target value. A first and a second sensitivity of the engine performance variable are determined in response to changes in the engine control parameters. The first engine map look-up table is adjusted when the first sensitivity is greater than a threshold, and the second engine map look-up table is adjusted when the second sensitivity is greater than a threshold.

  13. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling to identify the influential parameters. Among various sensitivity analysis methods, the variance-based global sensitivity analysis has gained popularity for its model independence characteristic and capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework to allow flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model and parametric. Furthermore, each layer of uncertainty source is capable of containing multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility of using different grouping strategies for uncertainty components. The variance-based sensitivity analysis thus is improved to be able to investigate the importance of an extended range of uncertainty sources: scenario, model, and other different combinations of uncertainty components which can represent certain key model system processes (e.g., groundwater recharge process, flow reactive transport process). For test and demonstration purposes, the developed methodology was implemented into a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources which were formed by different combinations of uncertainty components. The new methodology can

  14. Sensitivity of Beam Parameters to a Station C Solenoid Scan on Axis II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze, Martin E.

    Magnet scans are a standard technique for determining beam parameters in accelerators. Beam parameters are inferred from spot size measurements using a model of the beam optics. The sensitivity of the measured beam spot size to the beam parameters is investigated for typical DARHT Axis II beam energies and currents. In a typical S4 solenoid scan, the downstream transport is tuned to achieve a round beam at Station C with an envelope radius of about 1.5 cm with a very small divergence with S4 off. The typical beam energy and current are 16.0 MeV and 1.625 kA. Figures 1-3 show the sensitivity of the beam size at Station C to the emittance, initial radius and initial angle, respectively. To better understand the relative sensitivity of the beam size to the emittance, initial radius and initial angle, linear regressions were performed for each parameter as a function of the S4 setting. The results are shown in Figure 4. The measured slope was scaled to have a maximum value of 1 in order to present the relative sensitivities in a single plot. Figure 4 clearly shows the beam size at the minimum of the S4 scan is most sensitive to emittance and relatively insensitive to initial radius and angle as expected. The beam emittance is also very sensitive to the beam size of the converging beam and becomes insensitive to the beam size of the diverging beam. Measurements of the beam size of the diverging beam provide the greatest sensitivity to the initial beam radius and to a lesser extent the initial beam angle. The converging beam size is initially very sensitive to the emittance and initial angle at low S4 currents. As the S4 current is increased the sensitivity to the emittance remains strong while the sensitivity to the initial angle diminishes.

  15. Assessment of Wind Parameter Sensitivity on Ultimate and Fatigue Wind Turbine Loads: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Amy N; Sethuraman, Latha; Jonkman, Jason

    Wind turbines are designed using a set of simulations to ascertain the structural loads that the turbine could encounter. While mean hub-height wind speed is considered to vary, other wind parameters such as turbulence spectra, shear, veer, spatial coherence, and component correlation are fixed or conditional values that, in reality, could have different characteristics at different sites and have a significant effect on the resulting loads. This paper therefore seeks to assess the sensitivity of the resulting ultimate and fatigue loads on the turbine to these different wind parameters during normal operational conditions. Eighteen different wind parameters are screened using an Elementary Effects approach with radial points. As expected, the results show a high sensitivity of the loads to the turbulence standard deviation in the primary wind direction, but the sensitivity to wind shear is often much greater. To a lesser extent, other wind parameters that drive loads include the coherence in the primary wind direction and veer.

  16. A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters

    NASA Astrophysics Data System (ADS)

    Ren, Luchuan

    2015-04-01

    A Global Sensitivity Analysis Method on Maximum Tsunami Wave Heights to Potential Seismic Source Parameters. Luchuan Ren, Jianwei Tian, Mingli Hong, Institute of Disaster Prevention, Sanhe, Hebei Province, 065201, P.R. China. It is obvious that the uncertainties of the maximum tsunami wave heights in the offshore area stem partly from uncertainties of the potential seismic tsunami source parameters. A global sensitivity analysis method of the maximum tsunami wave heights with respect to the potential seismic source parameters is put forward in this paper. The tsunami wave heights are calculated by COMCOT (the Cornell Multi-grid Coupled Tsunami Model), on the assumption that an earthquake with magnitude Mw 8.0 occurred at the northern fault segment along the Manila Trench and triggered a tsunami in the South China Sea. We select the simulated maximum tsunami wave heights at specific sites in the offshore area to verify the validity of the proposed method. To rank the importance of the uncertainties of the potential seismic source parameters (the earthquake's magnitude, the focal depth, the strike angle, dip angle and slip angle, etc.) in generating uncertainties of the maximum tsunami wave heights, we chose the Morris method to analyze the sensitivity of the maximum tsunami wave heights to the aforementioned parameters, and give several qualitative descriptions of their nonlinear or linear effects on the maximum tsunami wave heights. We then quantitatively analyze the sensitivity of the maximum tsunami wave heights to these parameters and the interaction effects among these parameters by means of the extended FAST method. The results show that the maximum tsunami wave heights are very sensitive to the earthquake magnitude, followed successively by the epicenter location, the strike angle and the dip angle; the interaction effects between the sensitive parameters are very pronounced at specific sites in the offshore area, and there

  17. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  18. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  19. The structure of binding curves and practical identifiability of equilibrium ligand-binding parameters

    PubMed Central

    Middendorf, Thomas R.

    2017-01-01

    A critical but often overlooked question in the study of ligands binding to proteins is whether the parameters obtained from analyzing binding data are practically identifiable (PI), i.e., whether the estimates obtained from fitting models to noisy data are accurate and unique. Here we report a general approach to assess and understand binding parameter identifiability, which provides a toolkit to assist experimentalists in the design of binding studies and in the analysis of binding data. The partial fraction (PF) expansion technique is used to decompose binding curves for proteins with n ligand-binding sites exactly and uniquely into n components, each of which has the form of a one-site binding curve. The association constants of the PF component curves, being the roots of an n-th order polynomial, may be real or complex. We demonstrate a fundamental connection between binding parameter identifiability and the nature of these one-site association constants: all binding parameters are identifiable if the constants are all real and distinct; otherwise, at least some of the parameters are not identifiable. The theory is used to construct identifiability maps from which the practical identifiability of binding parameters for any two-, three-, or four-site binding curve can be assessed. Instructions for extending the method to generate identifiability maps for proteins with more than four binding sites are also given. Further analysis of the identifiability maps leads to the simple rule that the maximum number of structurally identifiable binding parameters (shown in the previous paper to be equal to n) will also be PI only if the binding curve line shape contains n resolved components. PMID:27993951
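
    Schematically, the decomposition described here expresses an n-site binding curve as a sum of one-site (hyperbolic) components (a hedged sketch; the weights c_i follow from the partial-fraction expansion in the paper, and the association constants k_i may be real or complex):

    ```latex
    % n-site binding curve written as a sum of n one-site components, where the
    % k_i are the roots of an n-th order polynomial:
    \begin{equation*}
    \bar{Y}(L) \;=\; \sum_{i=1}^{n} c_i\,\frac{k_i L}{1 + k_i L}.
    \end{equation*}
    ```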

  20. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models mostly depends on their input rainfall and parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and, therefore, lead to uncertainty on the model output. Sensitivity analysis (SA) allows one to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most due to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear
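
    A sketch of such a grouped Sobol' setup with the SALib package is shown below, assuming SALib's group support; the toy model, factor counts, and bounds are placeholders rather than the NAM/HyMod configurations used in the study.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    n_mult, n_par = 20, 5                      # e.g. 20 storm multipliers, 5 model parameters
    problem = {
        "num_vars": n_mult + n_par,
        "names": [f"mult_{i}" for i in range(n_mult)] + [f"par_{j}" for j in range(n_par)],
        "bounds": [[0.5, 1.5]] * n_mult + [[0.0, 1.0]] * n_par,
        "groups": ["rainfall"] * n_mult + ["parameters"] * n_par,
    }

    X = saltelli.sample(problem, 1024)         # Sobol' sample over all factors

    def toy_model(x):                          # placeholder for the hydrological simulator
        rain = x[:n_mult].mean()
        pars = x[n_mult:]
        return rain * (1.0 + pars.sum())

    Y = np.apply_along_axis(toy_model, 1, X)
    Si = sobol.analyze(problem, Y)             # one index per group
    print(dict(zip(["rainfall", "parameters"], Si["ST"])))
    ```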

  1. Sensitivity of DIVWAG to Variations in Weather Parameters

    DTIC Science & Technology

    1976-04-01

    Keywords: DIVWAG; war game; simulation. Abstract fragments: simulation of a Division-Level War Game to determine the significance of varying battlefield parameters, i.e., artillery parameters, troop and ... The only Red artillery weapons doing better in bad weather are the 130 mm guns, but this statistic is tempered by the few casualties occurring in ...

  2. Identifying tectonic parameters that influence tsunamigenesis

    NASA Astrophysics Data System (ADS)

    van Zelst, Iris; Brizzi, Silvia; van Dinther, Ylona; Heuret, Arnauld; Funiciello, Francesca

    2017-04-01

    The role of tectonics in tsunami generation is at present poorly understood. However, the fact that some regions produce more tsunamis than others indicates that tectonics could influence tsunamigenesis. Here, we complement a global earthquake database that contains geometrical, mechanical, and seismicity parameters of subduction zones with tsunami data. We statistically analyse the database to identify the tectonic parameters that affect tsunamigenesis. Pearson's product-moment correlation coefficients reveal high positive correlations of 0.65 between, amongst others, the maximum water height of tsunamis and the seismic coupling in a subduction zone. However, these correlations are mainly caused by outliers. Spearman's rank correlation coefficient yields more robust correlations of 0.60 between the number of tsunamis in a subduction zone and subduction velocity (positive correlation) and the sediment thickness at the trench (negative correlation). Interestingly, there is a positive correlation between the latter and tsunami magnitude. In an effort towards multivariate statistics, a binary decision tree analysis is conducted with one variable. However, this shows that the amount of data is too limited. To complement this limited amount of data and to assess physical causality of the tectonic parameters with regard to tsunamigenesis, we conduct a numerical study of the most promising parameters using a geodynamic seismic cycle model. We show that an increase in sediment thickness on the subducting plate results in a shift in seismic activity from outer-rise normal faults to splay faults. We also show that the splay fault is the preferred rupture path for a strongly velocity-strengthening friction regime in the shallow part of the subduction zone, which increases the tsunamigenic potential. A larger updip limit of the seismogenic zone results in larger vertical surface displacement.

  3. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parameter space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former group is retained while the latter is eliminated from further study. However, these approaches ignore the loss of the interactive effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. Therefore, the wrong sensitive parameters might be identified by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM is able to identify more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a substantial improvement of about 10% over Sobol'. Furthermore, the computational cost for calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods emphasizing parameter interactions.

  4. Parameters sensitivity on mooring loads of ship-shaped FPSOs

    NASA Astrophysics Data System (ADS)

    Hasan, Mohammad Saidee

    2017-12-01

    The work in this paper focuses on the assessment and evaluation of the mooring system of a ship-shaped FPSO unit. In particular, the purpose of the study is to find the impact of variations in different parameters on mooring loads using the MIMOSA software. First, a selected base case was designed for an intact mooring system in a typical ultimate limit state (ULS) condition, and then the sensitivity of mooring loads to parameters such as the location of the turret, the analysis method (quasi-static vs. dynamic analysis), the low-frequency damping level in surge, the pretension, and the drag coefficients of the chain and steel wire was assessed. It is found that mooring loads change as these parameters change. In particular, pretension has a large impact on the maximum tension of the mooring lines, and low-frequency damping can change the surge offset significantly.

  5. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses and have a significant influence on the reliability and weight of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity, which accounts for strength failure, is established. Finally, the effect of load parameters, geometric parameters, and process parameters on the ultimate bearing capacity of the joints is analyzed by a global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braiding angle α.

  6. Impact parameter smearing effects on isospin sensitive observables in heavy ion collisions

    NASA Astrophysics Data System (ADS)

    Li, Li; Zhang, Yingxun; Li, Zhuxia; Wang, Nan; Cui, Ying; Winkelbauer, Jack

    2018-04-01

    The validity of impact parameter estimation from the multiplicity of charged particles at low-intermediate energies is checked within the framework of the improved quantum molecular dynamics model. The simulations show that the multiplicity of charged particles cannot estimate the impact parameter of heavy ion collisions very well, especially for central collisions at beam energies lower than ˜70 MeV/u, due to the large fluctuations of the multiplicity of charged particles. The simulation results for central collisions defined by the charged particle multiplicity are compared to those obtained using an impact parameter of b = 2 fm, and the charge distribution for 112Sn+112Sn at a beam energy of 50 MeV/u differs noticeably between the two cases. The chosen isospin sensitive observable, the coalescence invariant single neutron to proton yield ratio, is reduced by less than 15% for the neutron-rich systems 124,132Sn+124Sn at Ebeam = 50 MeV/u, while the coalescence invariant double neutron to proton yield ratio shows no obvious difference. The sensitivity of the chosen isospin sensitive observables to effective mass splitting is studied for central collisions defined by the multiplicity of charged particles. Our results show that the sensitivity is enhanced for 132Sn+124Sn relative to that for 124Sn+124Sn, and this reaction system should be measured in future experiments to study the effective mass splitting by heavy ion collisions.

  7. Sensitivity Analysis of Genetic Algorithm Parameters for Optimal Groundwater Monitoring Network Design

    NASA Astrophysics Data System (ADS)

    Abdeh-Kolahchi, A.; Satish, M.; Datta, B.

    2004-05-01

    A state-of-the-art groundwater monitoring network design is introduced. The method combines groundwater flow and transport results with Genetic Algorithm (GA) optimization to identify optimal monitoring well locations. Optimization theory uses different techniques to find a set of parameter values that minimize or maximize objective functions. The suggested optimal groundwater monitoring network design is based on the objective of maximizing the probability of tracking a transient contamination plume by determining sequential monitoring locations. The MODFLOW and MT3DMS models, included as separate modules within the Groundwater Modeling System (GMS), are used to develop three-dimensional groundwater flow and contamination transport simulations. The groundwater flow and contamination simulation results are introduced as input to the optimization model, which uses a Genetic Algorithm (GA) to identify the optimal monitoring network design from several candidate monitoring locations. The design model uses a GA with binary variables representing potential monitoring locations. As the number of decision variables and constraints increases, the non-linearity of the objective function also increases, which makes it difficult to obtain optimal solutions. The genetic algorithm is an evolutionary global optimization technique, which is capable of finding the optimal solution for many complex problems. In this study, the ability of the GA to find the global optimal solution to a groundwater monitoring network design problem involving 18.4 × 10^18 feasible solutions will be discussed. However, to ensure the efficiency of the solution process and the global optimality of the solution obtained using the GA, it is necessary that appropriate GA parameter values be specified. The sensitivity of genetic algorithm parameters such as the random number seed, crossover probability, mutation probability, and elitism is discussed for the solution of
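
    A minimal sketch of how GA control-parameter sensitivity can be probed, using a toy OneMax fitness as a stand-in for the plume-tracking objective; the swept quantities (crossover probability, mutation probability, elitism) mirror the parameters discussed above, and all names and values are illustrative:

      import numpy as np

      rng = np.random.default_rng(42)
      N_BITS, POP, GENS = 40, 60, 80

      def onemax(pop):
          return pop.sum(axis=1)          # toy fitness: number of ones

      def run_ga(p_cross, p_mut, elitism=True):
          pop = rng.integers(0, 2, size=(POP, N_BITS))
          for _ in range(GENS):
              fit = onemax(pop)
              a, b = rng.integers(0, POP, (2, POP))          # tournament selection
              parents = pop[np.where(fit[a] > fit[b], a, b)]
              children = parents.copy()
              for i in range(0, POP - 1, 2):                 # single-point crossover
                  if rng.random() < p_cross:
                      cut = rng.integers(1, N_BITS)
                      children[i, cut:], children[i + 1, cut:] = (
                          parents[i + 1, cut:].copy(), parents[i, cut:].copy())
              children ^= (rng.random(children.shape) < p_mut).astype(children.dtype)
              if elitism:                                    # keep the previous best
                  children[0] = pop[fit.argmax()]
              pop = children
          return onemax(pop).max()

      for p_cross in (0.6, 0.9):
          for p_mut in (0.001, 0.01, 0.1):
              best = np.mean([run_ga(p_cross, p_mut) for _ in range(5)])
              print(f"p_cross={p_cross}, p_mut={p_mut}: mean best fitness {best:.1f}")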

  8. Uncertainty Quantification and Regional Sensitivity Analysis of Snow-related Parameters in the Canadian LAnd Surface Scheme (CLASS)

    NASA Astrophysics Data System (ADS)

    Badawy, B.; Fletcher, C. G.

    2017-12-01

    The parameterization of snow processes in land surface models is an important source of uncertainty in climate simulations. Quantifying the importance of snow-related parameters, and their uncertainties, may therefore lead to better understanding and quantification of uncertainty within integrated earth system models. However, quantifying the uncertainty arising from parameterized snow processes is challenging due to the high-dimensional parameter space, poor observational constraints, and parameter interaction. In this study, we investigate the sensitivity of the land simulation to uncertainty in snow microphysical parameters in the Canadian LAnd Surface Scheme (CLASS) using an uncertainty quantification (UQ) approach. A set of training cases (n=400) from CLASS is used to sample each parameter across its full range of empirical uncertainty, as determined from available observations and expert elicitation. A statistical learning model using support vector regression (SVR) is then constructed from the training data (CLASS output variables) to efficiently emulate the dynamical CLASS simulations over a much larger (n=220) set of cases. This approach is used to constrain the plausible range for each parameter using a skill score, and to identify the parameters with the largest influence on the land simulation in CLASS at global and regional scales, using a random forest (RF) permutation importance algorithm. Preliminary sensitivity tests indicate that the snow albedo refreshment threshold and the limiting snow depth, below which bare patches begin to appear, have the highest impact on snow output variables. The results also show a considerable narrowing of the plausible parameter ranges, and hence of their uncertainty, which can lead to a significant reduction of the model uncertainty. The implementation and results of this study will be presented and discussed in detail.
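
    A minimal sketch of the emulator-based workflow described above: an SVR emulator is trained on a small set of parameter-perturbation runs, then a random-forest permutation importance ranks parameter influence on the emulated output. The parameter names and the synthetic response are placeholders, not CLASS output:

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(0)
      param_names = ["albedo_refresh_threshold", "limiting_snow_depth",
                     "fresh_snow_density", "snow_conductivity"]    # hypothetical names
      X = rng.random((400, len(param_names)))                      # training cases
      y = 2.0 * X[:, 0] + 1.5 * X[:, 1] ** 2 + 0.1 * X[:, 2] \
          + rng.normal(0, 0.05, 400)                               # synthetic skill score

      emulator = SVR(kernel="rbf", C=10.0).fit(X, y)               # emulate the model

      X_big = rng.random((20000, len(param_names)))                # cheap emulator runs
      y_big = emulator.predict(X_big)

      rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_big, y_big)
      imp = permutation_importance(rf, X_big, y_big, n_repeats=5, random_state=0)
      for name, score in sorted(zip(param_names, imp.importances_mean),
                                key=lambda t: -t[1]):
          print(f"{name:26s} {score:.3f}")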

  9. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as the minimum required time for austenitization and austempering and the microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select appropriate process parameter values to obtain parts with the required microstructural characteristics.

  10. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or creating a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  11. Uncertainty Analysis of Runoff Simulations and Parameter Identifiability in the Community Land Model – Evidence from MOPEX Basins

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Maoyi; Hou, Zhangshuan; Leung, Lai-Yung R.

    2013-12-01

    With the emergence of earth system models as important tools for understanding and predicting climate change and implications to mitigation and adaptation, it has become increasingly important to assess the fidelity of the land component within earth system models to capture realistic hydrological processes and their response to the changing climate and quantify the associated uncertainties. This study investigates the sensitivity of runoff simulations to major hydrologic parameters in version 4 of the Community Land Model (CLM4) by integrating CLM4 with a stochastic exploratory sensitivity analysis framework at 20 selected watersheds from the Model Parameter Estimation Experiment (MOPEX) spanning a wide range of climate and site conditions. We found that for runoff simulations, the most significant parameters are those related to the subsurface runoff parameterizations. Soil texture related parameters and surface runoff parameters are of secondary significance. Moreover, climate and soil conditions play important roles in the parameter sensitivity. In general, site conditions within water-limited hydrologic regimes and with finer soil texture result in stronger sensitivity of output variables, such as runoff and its surface and subsurface components, to the input parameters in CLM4. This study demonstrated the feasibility of parameter inversion for CLM4 using streamflow observations to improve runoff simulations. By ranking the significance of the input parameters, we showed that the parameter set dimensionality could be reduced for CLM4 parameter calibration under different hydrologic and climatic regimes so that the inverse problem is less ill posed.

  12. MOESHA: A genetic algorithm for automatic calibration and estimation of parameter uncertainty and sensitivity of hydrologic models

    EPA Science Inventory

    Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...

  13. Calculating the sensitivity and robustness of binding free energy calculations to force field parameters

    PubMed Central

    Rocklin, Gabriel J.; Mobley, David L.; Dill, Ken A.

    2013-01-01

    Binding free energy calculations offer a thermodynamically rigorous method to compute protein-ligand binding, and they depend on empirical force fields with hundreds of parameters. We examined the sensitivity of computed binding free energies to the ligand’s electrostatic and van der Waals parameters. Dielectric screening and cancellation of effects between ligand-protein and ligand-solvent interactions reduce the parameter sensitivity of binding affinity by 65%, compared with interaction strengths computed in the gas-phase. However, multiple changes to parameters combine additively on average, which can lead to large changes in overall affinity from many small changes to parameters. Using these results, we estimate that random, uncorrelated errors in force field nonbonded parameters must be smaller than 0.02 e per charge, 0.06 Å per radius, and 0.01 kcal/mol per well depth in order to obtain 68% (one standard deviation) confidence that a computed affinity for a moderately-sized lead compound will fall within 1 kcal/mol of the true affinity, if these are the only sources of error considered. PMID:24015114

  14. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  15. Sensitivity of Space Station alpha joint robust controller to structural modal parameter variations

    NASA Technical Reports Server (NTRS)

    Kumar, Renjith R.; Cooper, Paul A.; Lim, Tae W.

    1991-01-01

    The photovoltaic array sun tracking control system of Space Station Freedom is described. A synthesis procedure for determining optimized values of the design variables of the control system is developed using a constrained optimization technique. The synthesis is performed to provide a given level of stability margin, to achieve the most responsive tracking performance, and to meet other design requirements. Performance of the baseline design, which is synthesized using predicted structural characteristics, is discussed and the sensitivity of the stability margin is examined for variations of the frequencies, mode shapes and damping ratios of dominant structural modes. The design provides enough robustness to tolerate a sizeable error in the predicted modal parameters. A study was made of the sensitivity of performance indicators as the modal parameters of the dominant modes vary. The design variables are resynthesized for varying modal parameters in order to achieve the most responsive tracking performance while satisfying the design requirements. This procedure of reoptimizing the design parameters would be useful in improving the control system performance if accurate model data are provided.

  16. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by
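
    A minimal sketch of composite scaled sensitivities (CSS) and parameter correlation coefficients (PCC) computed from a Jacobian of simulated values with respect to parameters, following the usual definitions of these statistics; the Jacobian, weights, and parameter values below are synthetic placeholders:

      import numpy as np

      rng = np.random.default_rng(3)
      n_obs, n_par = 50, 4
      J = rng.normal(size=(n_obs, n_par))      # d(simulated value)/d(parameter)
      b = np.array([1.0, 0.3, 5.0, 0.01])      # parameter values
      w = np.ones(n_obs)                       # observation weights

      # Dimensionless scaled sensitivities and composite scaled sensitivity
      dss = J * b[None, :] * np.sqrt(w)[:, None]
      css = np.sqrt((dss ** 2).sum(axis=0) / n_obs)

      # Parameter variance-covariance matrix and correlation coefficients
      cov = np.linalg.inv(J.T @ (w[:, None] * J))
      pcc = cov / np.sqrt(np.outer(np.diag(cov), np.diag(cov)))

      print("CSS:", np.round(css, 3))
      print("PCC:\n", np.round(pcc, 3))
      # |PCC| near 1.00 for a pair flags interdependence that can prevent the
      # pair from being estimated independently.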

  17. Effect of parameters in moving average method for event detection enhancement using phase sensitive OTDR

    NASA Astrophysics Data System (ADS)

    Kwon, Yong-Seok; Naeem, Khurram; Jeon, Min Yong; Kwon, Il-bum

    2017-04-01

    We analyze the relations of the parameters in the moving average method to enhance the event detectability of a phase-sensitive optical time domain reflectometer (OTDR). If the external events have a unique vibration frequency, the control parameters of the moving average method should be optimized in order to detect these events efficiently. A phase-sensitive OTDR was implemented with a pulsed light source, composed of a laser diode, a semiconductor optical amplifier, an erbium-doped fiber amplifier and a fiber Bragg grating filter, and a light-receiving part, consisting of a photo-detector and a high-speed data acquisition system. The moving average method operates with the control parameters: the total number of raw traces, M, the number of averaged traces, N, and the step size of the moving window, n. The raw traces are obtained by the phase-sensitive OTDR with sound signals generated by a speaker. Using these trace data, the relation among the control parameters is analyzed. The results show that, if the event signal has a single frequency, optimal values of N and n exist for detecting the event efficiently.
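
    A minimal sketch of the moving average scheme with the control parameters M, N and n described above, applied to synthetic traces rather than measured phase-sensitive OTDR data:

      import numpy as np

      rng = np.random.default_rng(7)
      M, N, n = 200, 20, 5            # total traces, traces per average, moving step
      n_samples = 1000                # samples along the fibre

      # Synthetic raw traces: noise plus a slow vibration at one fibre location
      traces = rng.normal(0.0, 1.0, (M, n_samples))
      traces[:, 600] += 0.5 * np.sin(2 * np.pi * 0.01 * np.arange(M))

      # Moving average: mean of N consecutive traces, advanced by n traces
      starts = range(0, M - N + 1, n)
      averaged = np.array([traces[s:s + N].mean(axis=0) for s in starts])

      # Averaging suppresses incoherent noise while retaining the slow vibration,
      # so the variance across the averaged traces peaks at the event location.
      detectability = averaged.var(axis=0)
      print("most active fibre sample:", detectability.argmax())   # expected near 600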

  18. Hot deformation characteristics of AZ80 magnesium alloy: Work hardening effect and processing parameter sensitivities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Y.; Wan, L.; Guo, Z. H.

    An isothermal compression experiment on AZ80 magnesium alloy was conducted using a Gleeble thermo-mechanical simulator in order to quantitatively investigate the work hardening (WH), strain rate sensitivity (SRS) and temperature sensitivity (TS) during hot processing of magnesium alloys. The WH, SRS and TS were described by the Zener-Hollomon parameter (Z) coupling of the deformation parameters. The relationships between the WH rate and true strain as well as true stress were derived from the Kocks-Mecking dislocation model and validated by our measurement data. The slope defined through the linear relationship of WH rate and true stress was only related to the annihilation coefficient Ω. Obvious WH behavior could be exhibited at a higher Z condition. Furthermore, we identified the correlation between the microstructural evolution, including β-Mg17Al12 precipitation, and the SRS and TS variations. Intensive dynamic recrystallization and a homogeneous distribution of β-Mg17Al12 precipitates resulted in a greater SRS coefficient at higher temperature. The deformation heat effect and the β-Mg17Al12 precipitate content can be regarded as the major factors determining the TS behavior. At low Z conditions, the SRS becomes stronger, in contrast to the variation of TS. The optimum hot processing window was validated based on the established SRS and TS value distribution maps for AZ80 magnesium alloy.

  19. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
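
    A minimal sketch of emulating an expensive simulator with a Gaussian process and using the emulator for a simple main-effect sensitivity scan; the two-parameter toy "simulator" below is a placeholder for the agent-based infection model, and the parameter names are illustrative:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, ConstantKernel

      rng = np.random.default_rng(5)

      def simulator(x):
          # Placeholder for pathogen load as a function of
          # (growth rate, detection avoidance), both scaled to [0, 1].
          return np.sin(3 * x[:, 0]) * (1 - x[:, 1]) + 0.5 * x[:, 1]

      X_train = rng.random((60, 2))
      y_train = simulator(X_train)

      gp = GaussianProcessRegressor(
          kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
          normalize_y=True).fit(X_train, y_train)

      # Main-effect scan: vary one parameter over a grid, averaging the emulator
      # prediction over random values of the other parameter.
      grid = np.linspace(0, 1, 21)
      for j, name in enumerate(["growth_rate", "detection_avoidance"]):
          effects = []
          for g in grid:
              X_eval = rng.random((500, 2))
              X_eval[:, j] = g
              effects.append(gp.predict(X_eval).mean())
          print(f"{name}: main-effect range {max(effects) - min(effects):.3f}")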

  20. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined in one-dimensional analytical models of transient solute transport in porous media. A sensitivity is a change in solute concentration resulting from a change in a model parameter. Sensitivity analysis is important because the minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models of solute transport were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values substantially deviated from the correct parameters. On the basis of the sensitivity analysis, several statements may be made about design of sampling for parameter estimation for the models examined: (1) estimation of parameters associated with solute transport in the individual layers of a multilayer system is possible even when solute concentrations in the individual layers are mixed in an observation well; (2) when estimating parameters in a decaying upstream boundary condition, observations are best made late in the passage of the front near a time chosen by adding the inverse of a hypothesized value of the source decay parameter to the estimated mean travel time at a given downstream location; (3) estimation of a first-order chemical decay parameter requires observations to be made late in the passage of the front, preferably near a location corresponding to a travel time of √2 times the half-life of the solute; and (4) estimation of a parameter relating to spatial variability in an initial condition requires observations to be made early in time relative to passage of the solute front.

  1. Sensitivity of breeding parameters to food supply in Black-legged Kittiwakes Rissa tridactyla

    USGS Publications Warehouse

    Gill, Verena A.; Hatch, Scott A.; Lanctot, Richard B.

    2002-01-01

    We fed Herring Clupea pallasi to pairs of Black-legged Kittiwakes Rissa tridactyla throughout the breeding season in two years at a colony in the northern Gulf of Alaska. We measured responses to supplemental feeding in a wide array of breeding parameters to gauge their relative sensitivity to food supply, and thus their potential as indicators of natural foraging conditions. Conventional measures of success (hatching, fledging and overall productivity) were more effective as indicators of food supply than behavioural attributes such as courtship feeding, chick provisioning rates and sibling aggression. However, behaviour such as nest relief during incubation and adult attendance with older chicks were also highly responsive to supplemental food and may be useful for monitoring environmental conditions in studies of shorter duration. On average, the chick-rearing stage contained more sensitive indicators of food availability than prelaying or incubation stages. Overall, rates of hatching and fledging success, and the mean duration of incubation shifts were the most food-sensitive parameters studied.

  2. Investigation of uncertainty in CO 2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE PAGES

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    2016-03-22

    Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k⁰r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k⁰r,CO2, which had a very strong (R2 = 0.9685) power law relationship with total CO2 injected. Model sensitivity to k⁰r,CO2 points to the importance of accurate core flood and wettability measurements.

  3. Investigation of uncertainty in CO 2 reservoir models: A sensitivity analysis of relative permeability parameter values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoshida, Nozomu; Levine, Jonathan S.; Stauffer, Philip H.

    Numerical reservoir models of CO2 injection in saline formations rely on parameterization of laboratory-measured pore-scale processes. Here, we have performed a parameter sensitivity study and Monte Carlo simulations to determine the normalized change in total CO2 injected using the finite element heat and mass-transfer code (FEHM) numerical reservoir simulator. Experimentally measured relative permeability parameter values were used to generate distribution functions for parameter sampling. The parameter sensitivity study analyzed five different levels for each of the relative permeability model parameters. All but one of the parameters changed the CO2 injectivity by <10%, less than the geostatistical uncertainty that applies to all large subsurface systems due to natural geophysical variability and inherently small sample sizes. The exception was the end-point CO2 relative permeability, k⁰r,CO2, the maximum attainable effective CO2 permeability during CO2 invasion, which changed CO2 injectivity by as much as 80%. Similarly, Monte Carlo simulation using 1000 realizations of relative permeability parameters showed no relationship between CO2 injectivity and any of the parameters but k⁰r,CO2, which had a very strong (R2 = 0.9685) power law relationship with total CO2 injected. Model sensitivity to k⁰r,CO2 points to the importance of accurate core flood and wettability measurements.

  4. FEAST: sensitive local alignment with multiple rates of evolution.

    PubMed

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  5. Reliability analysis of a sensitive and independent stabilometry parameter set

    PubMed Central

    Nagymáté, Gergely; Orlovits, Zsanett

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54–0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals. PMID:29664938

  6. Reliability analysis of a sensitive and independent stabilometry parameter set.

    PubMed

    Nagymáté, Gergely; Orlovits, Zsanett; Kiss, Rita M

    2018-01-01

    Recent studies have suggested reduced independent and sensitive parameter sets for stabilometry measurements based on correlation and variance analyses. However, the reliability of these recommended parameter sets has not been studied in the literature or not in every stance type used in stabilometry assessments, for example, single leg stances. The goal of this study is to evaluate the test-retest reliability of different time-based and frequency-based parameters that are calculated from the center of pressure (CoP) during bipedal and single leg stance for 30- and 60-second measurement intervals. Thirty healthy subjects performed repeated standing trials in a bipedal stance with eyes open and eyes closed conditions and in a single leg stance with eyes open for 60 seconds. A force distribution measuring plate was used to record the CoP. The reliability of the CoP parameters was characterized by using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change (MDC), coefficient of variation (CV) and CV compliance rate (CVCR). Based on the ICC, SEM and MDC results, many parameters yielded fair to good reliability values, while the CoP path length yielded the highest reliability (smallest ICC > 0.67 (0.54-0.79), largest SEM% = 19.2%). Usually, frequency type parameters and extreme value parameters yielded poor reliability values. There were differences in the reliability of the maximum CoP velocity (better with 30 seconds) and mean power frequency (better with 60 seconds) parameters between the different sampling intervals.
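
    A minimal sketch of the reliability metrics reported above (ICC(2,1), SEM and MDC95) computed from a subjects-by-trials matrix of a single CoP parameter; the data below are synthetic stand-ins for repeated stabilometry measurements:

      import numpy as np

      rng = np.random.default_rng(11)
      n_sub, k_trials = 30, 2
      true_score = rng.normal(500, 80, n_sub)            # e.g. CoP path length (mm)
      X = true_score[:, None] + rng.normal(0, 40, (n_sub, k_trials))   # test-retest

      grand = X.mean()
      row_m, col_m = X.mean(axis=1), X.mean(axis=0)
      MSR = k_trials * ((row_m - grand) ** 2).sum() / (n_sub - 1)      # subjects
      MSC = n_sub * ((col_m - grand) ** 2).sum() / (k_trials - 1)      # trials
      SSE = ((X - row_m[:, None] - col_m[None, :] + grand) ** 2).sum()
      MSE = SSE / ((n_sub - 1) * (k_trials - 1))

      # Two-way random effects, absolute agreement, single measurement
      icc = (MSR - MSE) / (MSR + (k_trials - 1) * MSE + k_trials * (MSC - MSE) / n_sub)
      sem = X.std(ddof=1) * np.sqrt(1 - icc)     # standard error of measurement
      mdc = 1.96 * np.sqrt(2) * sem              # minimal detectable change (95%)
      print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.1f}, MDC95 = {mdc:.1f}")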

  7. Ultra-sensitive PSA Following Prostatectomy Reliably Identifies Patients Requiring Post-Op Radiotherapy

    PubMed Central

    Kang, Jung Julie; Reiter, Robert; Steinberg, Michael; King, Christopher R.

    2015-01-01

    PURPOSE Integrating ultra-sensitive PSA (uPSA) into surveillance of high-risk patients following radical prostatectomy (RP) potentially optimizes management by correctly identifying actual recurrences, promoting an early salvage strategy and minimizing overtreatment. The power of uPSA following surgery to identify eventual biochemical failures is tested. PATIENTS AND METHODS From 1991–2013, 247 high-risk patients with a median follow-up of 44 months after RP were identified (extraprostatic extension and/or positive margin). Surgical technique, initial PSA (iPSA), pathology and post-op PSA were analyzed. The uPSA assay threshold was 0.01 ng/mL. Conventional biochemical relapse (cBCR) was defined as PSA ≥0.2 ng/mL. Kaplan Meier and Cox multivariate analyses (MVA) compared uPSA recurrence vs. cBCR rates. RESULTS Sensitivity analysis identified uPSA ≥0.03 as the optimal threshold identifying recurrence. First post-op uPSA ≥0.03, Gleason grade, T-stage, iPSA, and margin status predicted cBCR. On MVA, only first post-op uPSA ≥0.03, Gleason grade, and T-stage independently predicted cBCR. First post-op uPSA ≥0.03 conferred the highest risk (HR 8.5, p<0.0001) and discerned cBCR with greater sensitivity than undetectable first conventional PSA (70% vs. 46%). Any post-op PSA ≥0.03 captured all failures missed by first post-op value (100% sensitivity) with accuracy (96% specificity). Defining failure at uPSA ≥0.03 yielded a median lead-time advantage of 18 months (mean 24 months) over the conventional PSA ≥0.2 definition. CONCLUSION uPSA ≥0.03 is an independent factor, identifies BCR more accurately than any traditional risk factors, and confers a significant lead-time advantage. uPSA enables critical decisions regarding timing and indication for post-op RT among high-risk patients following RP. PMID:25463990

  8. Sensitivity of subject-specific models to Hill muscle-tendon model parameters in simulations of gait.

    PubMed

    Carbone, V; van der Krogt, M M; Koopman, H F J M; Verdonschot, N

    2016-06-14

    Subject-specific musculoskeletal (MS) models of the lower extremity are essential for applications such as predicting the effects of orthopedic surgery. We performed an extensive sensitivity analysis to assess the effects of potential errors in Hill muscle-tendon (MT) model parameters for each of the 56 MT parts contained in a state-of-the-art MS model. We used two metrics, namely a Local Sensitivity Index (LSI) and an Overall Sensitivity Index (OSI), to distinguish the effect of the perturbation on the predicted force produced by the perturbed MT parts and by all the remaining MT parts, respectively, during a simulated gait cycle. Results indicated that sensitivity of the model depended on the specific role of each MT part during gait, and not merely on its size and length. Tendon slack length was the most sensitive parameter, followed by maximal isometric muscle force and optimal muscle fiber length, while nominal pennation angle showed very low sensitivity. The highest sensitivity values were found for the MT parts that act as prime movers of gait (Soleus: average OSI=5.27%, Rectus Femoris: average OSI=4.47%, Gastrocnemius: average OSI=3.77%, Vastus Lateralis: average OSI=1.36%, Biceps Femoris Caput Longum: average OSI=1.06%) and hip stabilizers (Gluteus Medius: average OSI=3.10%, Obturator Internus: average OSI=1.96%, Gluteus Minimus: average OSI=1.40%, Piriformis: average OSI=0.98%), followed by the Peroneal muscles (average OSI=2.20%) and Tibialis Anterior (average OSI=1.78%) some of which were not included in previous sensitivity studies. Finally, the proposed priority list provides quantitative information to indicate which MT parts and which MT parameters should be estimated most accurately to create detailed and reliable subject-specific MS models. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Reliability of a new biokinetic model of zirconium in internal dosimetry: part II, parameter sensitivity analysis.

    PubMed

    Li, Wei Bo; Greiter, Matthias; Oeh, Uwe; Hoeschen, Christoph

    2011-12-01

    The reliability of biokinetic models is essential for the assessment of internal doses and a radiation risk analysis for the public and occupational workers exposed to radionuclides. In the present study, a method for assessing the reliability of biokinetic models by means of uncertainty and sensitivity analysis was developed. In the first part of the paper, the parameter uncertainty was analyzed for two biokinetic models of zirconium (Zr); one was reported by the International Commission on Radiological Protection (ICRP), and one was developed at the Helmholtz Zentrum München-German Research Center for Environmental Health (HMGU). In the second part of the paper, the parameter uncertainties and distributions of the Zr biokinetic models evaluated in Part I are used as the model inputs for identifying the most influential parameters in the models. Furthermore, the most influential model parameter on the integral of the radioactivity of Zr over 50 y in source organs after ingestion was identified. The results of the systemic HMGU Zr model showed that over the first 10 d, the parameters of transfer rates between blood and other soft tissues have the largest influence on the content of Zr in the blood and the daily urinary excretion; however, after day 1,000, the transfer rate from bone to blood becomes dominant. For the retention in bone, the transfer rate from blood to bone surfaces has the most influence out to the endpoint of the simulation; the transfer rate from blood to the upper larger intestine contributes a lot in the later days; i.e., after day 300. The alimentary tract absorption factor (fA) influences mostly the integral of radioactivity of Zr in most source organs after ingestion.

  10. Factors affecting the sensitivity and specificity of the Heidelberg Retina Tomograph parameters to glaucomatous progression in disc photographs.

    PubMed

    Saarela, Ville; Falck, Aura; Airaksinen, P Juhani; Tuulonen, Anja

    2012-03-01

    To evaluate the factors affecting the sensitivity and specificity of the stereometric optic nerve head (ONH) parameters of the Heidelberg Retina Tomograph (HRT) to glaucomatous progression in stereoscopic ONH photographs. The factors affecting the sensitivity and specificity of the vertical cup : disc ratio, the cup : disc area ratio, the cup volume, the rim area and a linear discriminant function to progression were analysed. These parameters were the best indicators of progression in a retrospective study of 476 eyes. The reference standard for progression was the masked evaluation of stereoscopic ONH photographs. The factors having the most significant effect on the sensitivity and specificity of the stereometric ONH parameters were the reference height difference and the mean topography standard deviation (TSD), indicating image quality. Also, the change in the TSD and age showed consistent, but variably significant, influence on all parameters tested. The sensitivity and specificity improved when there was little change in the reference height, the image quality was good and stable, and the patients were younger. The sensitivity and specificity of the vertical cup : disc ratio was improved by a large disc area and high baseline cup : disc area ratio. The rim area showed a better sensitivity and specificity for progression with a small disc area and low baseline cup : disc area ratio. The factors affecting the sensitivity and specificity of the stereometric ONH parameters to glaucomatous progression in disc photographs are essentially the same as those affecting the measurement variability of the HRT. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.

  11. Monte Carlo sensitivity analysis of unknown parameters in hazardous materials transportation risk assessment.

    PubMed

    Pet-Armacost, J J; Sepulveda, J; Sakude, M

    1999-12-01

    The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
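
    A minimal sketch of a Monte Carlo sensitivity analysis over uncertain event-tree probabilities: sample each unknown parameter, propagate it through a simple release/ignition event tree, and rank-correlate the risk output with each input. The tree structure and parameter ranges are illustrative, not those of the study:

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)
      n = 20000
      p_accident = rng.uniform(1e-6, 1e-5, n)    # per-trip accident probability
      p_release = rng.uniform(0.05, 0.5, n)      # release given accident
      p_ignition = rng.uniform(0.01, 0.3, n)     # ignition given release
      severity = rng.uniform(1.0, 10.0, n)       # relative consequence of a fire

      risk = p_accident * p_release * p_ignition * severity

      # Rank-correlate each uncertain input with the risk output to see which
      # unknowns actually drive the result.
      for name, x in [("p_accident", p_accident), ("p_release", p_release),
                      ("p_ignition", p_ignition), ("severity", severity)]:
          rho, _ = spearmanr(x, risk)
          print(f"{name:12s} rank correlation with risk: {rho:.2f}")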

  12. Modelling suspended-sediment propagation and related heavy metal contamination in floodplains: a parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.

    2014-09-01

    Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting the transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamical coupling Telemac-2D-Sysiphe). As empirical and semiempirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, for supporting the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from dissolved trace metal contamination information. Using the two monitored hydrological events as calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential deposition zones identified by the model are realistic.

  13. [Temporal and spatial heterogeneity analysis of optimal value of sensitive parameters in ecological process model: The BIOME-BGC model as an example.

    PubMed

    Li, Yi Zhe; Zhang, Ting Long; Liu, Qiu Yu; Li, Ying

    2018-01-01

    Ecological process models are powerful tools for studying terrestrial ecosystem water and carbon cycles. However, these models have many parameters, and whether reasonable values of these parameters are chosen has an important impact on the model simulation results. The sensitivity and optimization of model parameters have been analyzed and discussed in many previous studies, but the temporal and spatial heterogeneity of the optimal parameters has received less attention. In this paper, the BIOME-BGC model was used as an example. In evergreen broad-leaved forest, deciduous broad-leaved forest and C3 grassland, the sensitive parameters of the model were selected by constructing a sensitivity judgment index, with two experimental sites selected under each vegetation type. The objective function was constructed by using the simulated annealing algorithm combined with the flux data to obtain the monthly optimal values of the sensitive parameters at each site. Then we constructed the temporal heterogeneity judgment index, the spatial heterogeneity judgment index and the temporal and spatial heterogeneity judgment index to quantitatively analyze the temporal and spatial heterogeneity of the optimal values of the model sensitive parameters. The results showed that the sensitivity of BIOME-BGC model parameters differed among vegetation types, but the selected sensitive parameters were mostly consistent. The optimal values of the sensitive parameters of the BIOME-BGC model mostly presented spatio-temporal heterogeneity to different degrees, which varied with vegetation types. The sensitive parameters related to vegetation physiology and ecology had relatively little temporal and spatial heterogeneity, while those related to environment and phenology generally had larger temporal and spatial heterogeneity. In addition, the temporal heterogeneity of the optimal values of the model sensitive parameters showed a significant linear correlation

  14. Sensitivity analysis of geometrical parameters to study haemodynamics and thrombus formation in the left atrial appendage.

    PubMed

    García-Isla, Guadalupe; Olivares, Andy Luis; Silva, Etelvino; Nuñez-Garcia, Marta; Butakoff, Constantine; Sanchez-Quintana, Damian; G Morales, Hernán; Freixa, Xavier; Noailly, Jérôme; De Potter, Tom; Camara, Oscar

    2018-05-08

    The left atrial appendage (LAA) is a complex and heterogeneous protruding structure of the left atrium (LA). In atrial fibrillation patients, it is the location where 90% of the thrombi are formed. However, the role of the LAA in thrombus formation is not fully known yet. The main goal of this work is to perform a sensitivity analysis to identify the most relevant LA and LAA morphological parameters in atrial blood flow dynamics. Simulations were run on synthetic ellipsoidal left atria models where different parameters were individually studied: pulmonary veins and mitral valve dimensions; LAA shape; and LA volume. Our computational analysis confirmed the relation between large LAA ostia, low blood flow velocities and thrombus formation. Additionally, we found that pulmonary vein configuration exerted a critical influence on LAA blood flow patterns. These findings contribute to a better understanding of the LAA and to support clinical decisions for atrial fibrillation patients. Copyright © 2018 John Wiley & Sons, Ltd.

  15. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  16. Predictive Uncertainty And Parameter Sensitivity Of A Sediment-Flux Model: Nitrogen Flux and Sediment Oxygen Demand

    EPA Science Inventory

    Estimating model predictive uncertainty is imperative to informed environmental decision making and management of water resources. This paper applies the Generalized Sensitivity Analysis (GSA) to examine parameter sensitivity and the Generalized Likelihood Uncertainty Estimation...

  17. Sensitivity of geological, geochemical and hydrologic parameters in complex reactive transport systems for in-situ uranium bioremediation

    NASA Astrophysics Data System (ADS)

    Yang, G.; Maher, K.; Caers, J.

    2015-12-01

    Groundwater contamination associated with remediated uranium mill tailings is a challenging environmental problem, particularly within the Colorado River Basin. To examine the effectiveness of in-situ bioremediation of U(VI), acetate injection has been proposed and tested at the Rifle pilot site. There have been several geologic modeling and simulated contaminant transport investigations to evaluate the potential outcomes of the process and identify crucial factors for successful uranium reduction. Ultimately, findings from these studies would contribute to accurate predictions of the efficacy of uranium reduction. However, all these previous studies have considered limited model complexities, either because of the concern that data are too sparse to resolve such complex systems or because some parameters are assumed to be less important. Such simplified initial modeling, however, limits the predictive power of the model. Moreover, previous studies have not yet focused on the spatial heterogeneity of various modeling components and its impact on the spatial distribution of the immobilized uranium (U(IV)). In this study, we examine the impact of uncertainty in 21 parameters on model responses by means of the recently developed distance-based global sensitivity analysis (DGSA), to study the main effects and interactions of parameters of various types. The 21 parameters include, for example, the spatial variability of the initial uranium concentration, the mean hydraulic conductivity, and the variogram structures of hydraulic conductivity. DGSA allows for studying multi-variate model responses based on spatial and non-spatial model parameters. When calculating the distances between model responses, in addition to the overall uranium reduction efficacy, we also considered the spatial profiles of the immobilized uranium concentration as a target response. Results show that the mean hydraulic conductivity and the mineral reaction rate are the two most sensitive parameters with regard to the overall

  18. On the identifiability of inertia parameters of planar Multi-Body Space Systems

    NASA Astrophysics Data System (ADS)

    Nabavi-Chashmi, Seyed Yaser; Malaek, Seyed Mohammad-Bagher

    2018-04-01

    This work describes a new formulation to study the identifiability characteristics of Serially Linked Multi-body Space Systems (SLMBSS). The process exploits the so-called "Lagrange Formulation" to develop a linear form of the Equations of Motion w.r.t. the system Inertia Parameters (IPs). Having developed a specific form of the regressor matrix, we aim to expedite the identification process. The new approach allows analytical as well as numerical identification and identifiability analysis for different SLMBSS configurations. Moreover, the explicit forms of the SLMBSS identifiable parameters are derived by analyzing the identifiability characteristics of the robot. We further show that any SLMBSS designed with Variable Configuration Joints allows all IPs to be identifiable through comparing two successive identification outcomes. This feature paves the way to design a new class of SLMBSS for which accurate identification of all IPs is at hand. Different case studies reveal that the proposed formulation provides fast and accurate results, as required by space applications. Further studies might be necessary for cases where the planar-body assumption becomes inaccurate.

  19. Primary production sensitivity to phytoplankton light attenuation parameter increases with transient forcing

    NASA Astrophysics Data System (ADS)

    Kvale, Karin F.; Meissner, Katrin J.

    2017-10-01

    Treatment of the underwater light field in ocean biogeochemical models has been attracting increasing interest, with some models moving towards more complex parameterisations. We conduct a simple sensitivity study of a typical, highly simplified parameterisation. In our study, we vary the phytoplankton light attenuation parameter over a range constrained by data, during both a pre-industrial equilibrated simulation and the future climate scenario RCP8.5. In equilibrium, lower light attenuation parameters (weaker self-shading) shift net primary production (NPP) towards the high latitudes, while higher values of light attenuation (stronger self-shading) shift NPP towards the low latitudes. Climate forcing magnifies this relationship through changes in the distribution of nutrients both within and between ocean regions. Where and how NPP responds to climate forcing can determine the magnitude and sign of global NPP trends in this high-CO2 future scenario. Ocean oxygen is particularly sensitive to parameter choice. Under higher CO2 concentrations, two simulations establish a strong biogeochemical feedback between the Southern Ocean and the low-latitude Pacific that highlights the potential for regional teleconnection. Our simulations serve as a reminder that shifts in fundamental properties (e.g. light attenuation by phytoplankton) over deep time have the potential to alter global biogeochemistry.
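
    A minimal sketch of the kind of simplified light-attenuation parameterisation varied above: irradiance decays with depth at a rate set by a water term plus a phytoplankton self-shading term, and a saturating light-limitation response integrates differently depending on the chosen attenuation parameter. All names and values are illustrative:

      import numpy as np

      def light_profile(I0, z, chl, k_water=0.04, k_chl=0.03):
          # Irradiance at depths z (m) for surface irradiance I0 and a uniform
          # chlorophyll concentration chl (mg m^-3); k_chl is the self-shading
          # parameter varied in the sensitivity study.
          return I0 * np.exp(-(k_water + k_chl * chl) * z)

      z = np.linspace(0, 100, 201)            # depth grid (m)
      I0, chl, half_sat = 200.0, 1.0, 30.0    # W m^-2, mg m^-3, W m^-2

      for k_chl in (0.01, 0.03, 0.06):        # weak to strong self-shading
          I = light_profile(I0, z, chl, k_chl=k_chl)
          light_limitation = I / (I + half_sat)       # saturating growth response
          print(f"k_chl={k_chl:.2f}: depth-integrated light limitation "
                f"{np.trapz(light_limitation, z):.1f} m")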

  20. Sensitivity of the model error parameter specification in weak-constraint four-dimensional variational data assimilation

    NASA Astrophysics Data System (ADS)

    Shaw, Jeremy A.; Daescu, Dacian N.

    2017-08-01

    This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.

  1. Velocity sensitivity of seismic body waves to the anisotropic parameters of a TTI-medium

    NASA Astrophysics Data System (ADS)

    Zhou, Bing; Greenhalgh, Stewart

    2008-09-01

    We formulate the derivatives of the phase and group velocities for each of the anisotropic parameters in a tilted transversely isotropic medium (TTI-medium). This is a common geological model in seismic exploration and has five elastic moduli or related Thomsen parameters and two orientation angles defining the axis of symmetry of the rock. We present two independent methods to compute the derivatives and examine the formulae with real anisotropic rocks. The formulations and numerical computations do not encounter any singularity problem when applied to the two quasi shear waves, which is a problem with other approaches. The two methods yield the same results, which show in a quantitative way the sensitivity behaviour of the phase and the group velocities to all of the elastic moduli or Thomsen's anisotropic parameters as well as the orientation angles in the 2D and 3D cases. One can recognize the dominant (strong effect) and weak (or 'dummy') parameters for the three seismic body-wave modes (qP, qSV, qSH) and their effective domains over the whole range of phase-slowness directions. These sensitivity patterns indicate the possibility of nonlinear kinematic inversion with the three wave modes for determining the anisotropic parameters and imaging an anisotropic medium.

  2. Parameter identifiability and regional calibration for reservoir inflow prediction

    NASA Astrophysics Data System (ADS)

    Kolberg, Sjur; Engeland, Kolbjørn; Tøfte, Lena S.; Bruland, Oddbjørn

    2013-04-01

    The large hydropower producer Statkraft is currently testing regional, distributed models for operational reservoir inflow prediction. The need for simultaneous forecasts and consistent updating in a large number of catchments supports the shift from catchment-oriented to regional models. Low-quality naturalized inflow series in the reservoir catchments further encourage the use of donor catchments and regional simulation for calibration purposes. MCMC-based parameter estimation (the DREAM algorithm; Vrugt et al., 2009) is adapted to regional parameter estimation and implemented within the open-source ENKI framework. The likelihood is based on the concept of an effective number of independent observations, in space as well as in time. Marginal and conditional (around an optimum) parameter distributions for each catchment may be extracted, even though the MCMC algorithm itself is guided only by the regional likelihood surface. Early results indicate that the average performance loss associated with regional calibration (the difference in Nash-Sutcliffe R2 between regionally and locally optimal parameters) is around 0.06. The importance of seasonal snow storage and melt in Norwegian mountain catchments probably contributes to the high degree of similarity among catchments. The evaluation continues for several regions, focusing on posterior parameter uncertainty and identifiability. Vrugt, J. A., C. J. F. ter Braak, C. G. H. Diks, B. A. Robinson, J. M. Hyman, and D. Higdon: Accelerating Markov Chain Monte Carlo Simulation by Differential Evolution with Self-Adaptive Randomized Subspace Sampling. International Journal of Nonlinear Sciences and Numerical Simulation, 10(3), 273-290, 2009.
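
    The performance loss quoted above is a difference in Nash-Sutcliffe efficiency, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))², which is straightforward to compute from simulated and observed inflow series. A minimal sketch with synthetic data (the series and noise levels are made up for illustration):

    ```python
    # Nash-Sutcliffe efficiency (NSE) for comparing regionally vs locally calibrated
    # simulations against observed inflow; the series below are synthetic examples.
    import numpy as np

    def nse(sim, obs):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    rng = np.random.default_rng(2)
    obs = 10.0 + np.sin(np.linspace(0, 6, 100)) + 0.2 * rng.standard_normal(100)
    sim_local = obs + 0.3 * rng.standard_normal(100)      # locally calibrated run
    sim_regional = obs + 0.5 * rng.standard_normal(100)   # regionally calibrated run

    print("NSE local:   ", round(nse(sim_local, obs), 3))
    print("NSE regional:", round(nse(sim_regional, obs), 3))
    print("performance loss:", round(nse(sim_local, obs) - nse(sim_regional, obs), 3))
    ```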

  3. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantifying such parameter uncertainties in a land surface model is the first step towards a better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol' sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in Rubisco) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the four PFT-dependent parameters above. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes
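
    Generating the quasi-Monte Carlo parameter samples over uniform prior ranges can be done with a Sobol' sequence, for example via SciPy's qmc module. The parameter names and ranges in the sketch below are illustrative placeholders, not the actual CLM4.5 priors.

    ```python
    # Sobol'-sequence sampling of uniform prior ranges for a perturbed-parameter
    # ensemble; parameter names and ranges here are illustrative placeholders.
    import numpy as np
    from scipy.stats import qmc

    param_names = ["slope_conductance_photosynthesis", "sla_canopy_top",
                   "leaf_cn_ratio", "frac_leafn_rubisco"]
    lower = np.array([4.0, 0.005, 20.0, 0.05])
    upper = np.array([12.0, 0.035, 60.0, 0.25])

    sampler = qmc.Sobol(d=len(param_names), scramble=True, seed=0)
    unit_samples = sampler.random_base2(m=10)            # 2**10 = 1024 members
    samples = qmc.scale(unit_samples, lower, upper)

    print(samples.shape)                                  # (1024, 4)
    print(dict(zip(param_names, np.round(samples[0], 4))))
    ```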

  4. On-orbit identifying the inertia parameters of space robotic systems using simple equivalent dynamics

    NASA Astrophysics Data System (ADS)

    Xu, Wenfu; Hu, Zhonghua; Zhang, Yu; Liang, Bin

    2017-03-01

    After being launched into space to perform some tasks, the inertia parameters of a space robotic system may change due to fuel consumption, hardware reconfiguration, target capturing, and so on. For precision control and simulation, it is required to identify these parameters on orbit. This paper proposes an effective method for identifying the complete inertia parameters (including the mass, inertia tensor and center-of-mass position) of a space robotic system. The key to the method is to identify two types of simple dynamics systems: equivalent single-body and two-body systems. For the former, all of the joints are locked into a designed configuration and the thrusters are used for orbital maneuvering. The objective function for optimization is defined in terms of the acceleration and velocity of the equivalent single body. For the latter, only one joint is unlocked and driven to move along a planned (exciting) trajectory in free-floating mode. The objective function is defined based on the linear and angular momentum equations. Then, the parameter identification problems are transformed into nonlinear optimization problems. The Particle Swarm Optimization (PSO) algorithm is applied to determine the optimal parameters, i.e. the complete dynamic parameters of the two equivalent systems. By sequentially unlocking the 1st to nth joints (or unlocking the nth to 1st joints), the mass properties of bodies 0 to n (or n to 0) are completely identified. The proposed method needs only simple dynamics equations for identification, and the excitation motion (orbit maneuvering and joint motion) is easily realized. Moreover, the method does not require prior knowledge of the mass properties of any body. It is general and practical for identifying a space robotic system on-orbit.

  5. STUDY TO IDENTIFY IMPORTANT PARAMETERS FOR CHARACTERIZING PESTICIDE RESIDUE TRANSFER EFFICIENCIES

    EPA Science Inventory

    To reduce the uncertainty associated with current estimates of children's exposure to pesticides by dermal contact and non-dietary ingestion, residue transfer data are required. Prior to conducting exhaustive studies, a screening study to identify the important parameters for...

  6. Finding identifiable parameter combinations in nonlinear ODE models and the rational reparameterization of their input-output equations.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; Distefano, Joseph J

    2011-09-01

    When examining the structural identifiability properties of dynamic system models, some parameters can take on an infinite number of values and yet yield identical input-output data. These parameters and the model are then said to be unidentifiable. Finding identifiable combinations of parameters with which to reparameterize the model provides a means for quantitatively analyzing the model and computing solutions in terms of the combinations. In this paper, we revisit and explore the properties of an algorithm for finding identifiable parameter combinations using Gröbner Bases and prove useful theoretical properties of these parameter combinations. We prove a set of M algebraically independent identifiable parameter combinations can be found using this algorithm and that there exists a unique rational reparameterization of the input-output equations over these parameter combinations. We also demonstrate application of the procedure to a nonlinear biomodel. Copyright © 2011 Elsevier Inc. All rights reserved.
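
    The algebraic core of the approach—computing a Gröbner basis over the parameters and the input-output coefficients to expose identifiable combinations—can be reproduced with a computer algebra system. The toy system below is hypothetical (it is not the biomodel from the paper) and simply shows elimination producing relations among parameter combinations.

    ```python
    # Toy Gröbner-basis computation with sympy; the polynomial system is a
    # hypothetical example of coefficients that pin down only combinations of k's.
    from sympy import groebner, symbols

    k1, k2, k3, c1, c2 = symbols("k1 k2 k3 c1 c2")

    # Suppose the input-output relation only fixes c1 = k1*k2 and c2 = k1 + k2 + k3.
    # A lexicographic Gröbner basis eliminates variables and exposes the relations
    # that any identifiable reparameterization must satisfy.
    system = [k1 * k2 - c1, k1 + k2 + k3 - c2]
    gb = groebner(system, k1, k2, k3, c1, c2, order="lex")
    for poly in gb.exprs:
        print(poly)
    ```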

  7. Net thrust calculation sensitivity of an afterburning turbofan engine to variations in input parameters

    NASA Technical Reports Server (NTRS)

    Hughes, D. L.; Ray, R. J.; Walton, J. T.

    1985-01-01

    The calculated value of net thrust of an aircraft powered by a General Electric F404-GE-400 afterburning turbofan engine was evaluated for its sensitivity to various input parameters. The effects of a 1.0-percent change in each input parameter on the calculated value of net thrust with two calculation methods are compared. This paper presents the results of these comparisons and also gives the estimated accuracy of the overall net thrust calculation as determined from the influence coefficients and estimated parameter measurement accuracies.
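
    The influence-coefficient idea—perturb each input by 1.0 percent and record the resulting percentage change in calculated net thrust—translates directly into a small finite-difference loop. The thrust function and input values below are placeholders, not the F404 calculation methods or flight data.

    ```python
    # Finite-difference influence coefficients: percent change in calculated net
    # thrust per 1% change in each input; the thrust model here is a placeholder.
    import numpy as np

    def net_thrust(inputs):
        """Hypothetical stand-in for a gross-thrust-minus-ram-drag calculation."""
        p_t, t_t, wf, mach = inputs["p_t"], inputs["t_t"], inputs["wf"], inputs["mach"]
        gross = 0.8 * p_t * np.sqrt(t_t) + 40.0 * wf
        ram_drag = 900.0 * mach
        return gross - ram_drag

    baseline = {"p_t": 250.0, "t_t": 900.0, "wf": 2.5, "mach": 0.8}
    f0 = net_thrust(baseline)

    for name, value in baseline.items():
        perturbed = dict(baseline, **{name: value * 1.01})      # +1% input change
        influence = 100.0 * (net_thrust(perturbed) - f0) / f0   # % change in thrust
        print(f"{name:5s}: {influence:+.3f} % thrust per +1% input")
    ```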

  8. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only

  9. Sensitivity of acoustic nonlinearity parameter to the microstructural changes in cement-based materials

    NASA Astrophysics Data System (ADS)

    Kim, Gun; Kim, Jin-Yeon; Kurtis, Kimberly E.; Jacobs, Laurence J.

    2015-03-01

    This research experimentally investigates the sensitivity of the acoustic nonlinearity parameter to microcracks in cement-based materials. Based on the second harmonic generation (SHG) technique, an experimental setup using non-contact, air-coupled detection is used to receive consistent Rayleigh surface waves. To induce variations in the extent of microscale cracking in two types of specimens (concrete and mortar), a shrinkage-reducing admixture (SRA) is used in one set, while a companion specimen is prepared without SRA. A 50 kHz wedge transducer and a 100 kHz air-coupled transducer are implemented for the generation and detection of nonlinear Rayleigh waves. It is shown that the air-coupled detection method provides more repeatable fundamental and second harmonic amplitudes of the propagating Rayleigh waves. The obtained amplitudes are then used to calculate the relative nonlinearity parameter βre, the ratio of the second harmonic amplitude to the square of the fundamental amplitude. The experimental results clearly demonstrate that the nonlinearity parameter (βre) is more sensitive to the microstructural changes in cement-based materials than the Rayleigh phase velocity and attenuation are, and that SRA has great potential to prevent shrinkage cracking in cement-based materials.
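
    The relative nonlinearity parameter defined above, βre = A2 / A1², can be computed directly from the spectrum of the received waveform. The sketch below builds a synthetic record containing a fundamental and a weak second harmonic; the sampling rate, frequencies and amplitudes are illustrative only.

    ```python
    # Relative acoustic nonlinearity parameter beta_re = A2 / A1**2 from the FFT of
    # a received waveform; the synthetic signal and frequencies are illustrative.
    import numpy as np

    fs = 2.0e6                                  # sampling rate (Hz)
    t = np.arange(0, 1e-3, 1.0 / fs)
    f1 = 50e3                                   # fundamental (wedge transducer)
    a1, a2 = 1.0, 0.02                          # fundamental and 2nd-harmonic amplitudes
    signal = a1 * np.sin(2 * np.pi * f1 * t) + a2 * np.sin(2 * np.pi * 2 * f1 * t)

    spectrum = np.abs(np.fft.rfft(signal)) * 2.0 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)

    A1 = spectrum[np.argmin(np.abs(freqs - f1))]
    A2 = spectrum[np.argmin(np.abs(freqs - 2 * f1))]
    beta_re = A2 / A1**2
    print(f"A1={A1:.4f}, A2={A2:.4f}, beta_re={beta_re:.4f}")
    ```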

  10. 3D segmentation of lung CT data with graph-cuts: analysis of parameter sensitivities

    NASA Astrophysics Data System (ADS)

    Cha, Jung won; Dunlap, Neal; Wang, Brian; Amini, Amir

    2016-03-01

    Lung boundary image segmentation is important for many tasks, including, for example, the development of radiation treatment plans for subjects with thoracic malignancies. In this paper, we describe a method and parameter settings for accurate 3D lung boundary segmentation based on graph-cuts from X-ray CT data. Even though several researchers have previously used graph-cuts for image segmentation, to date no systematic studies have been performed regarding the range of parameters that gives accurate results. The energy function in the graph-cuts algorithm requires three suitable parameter settings: K, a large constant for assigning seed points; c, the similarity coefficient for n-links; and λ, the terminal coefficient for t-links. We analyzed the parameter sensitivity with four lung data sets from subjects with lung cancer using error metrics. Large values of K created artifacts on segmented images, and values of c much larger than λ upset the balance between the boundary term and the data term in the energy function, leading to unacceptable segmentation results. For a range of parameter settings, we performed 3D image segmentation, and in each case compared the results with the expert-delineated lung boundaries. We used simple 6-neighborhood systems for n-links in 3D. The 3D image segmentation took 10 minutes for a 512x512x118 to 512x512x190 lung CT image volume. Our results indicate that the graph-cuts algorithm was more sensitive to the K and λ parameter settings than to the c parameter, and furthermore that, amongst the range of parameters tested, K=5 and λ=0.5 yielded good results.

  11. Temperature Sensitivity as a Microbial Trait Using Parameters from Macromolecular Rate Theory

    PubMed Central

    Alster, Charlotte J.; Baas, Peter; Wallenstein, Matthew D.; Johnson, Nels G.; von Fischer, Joseph C.

    2016-01-01

    The activity of soil microbial extracellular enzymes is strongly controlled by temperature, yet the degree to which temperature sensitivity varies by microbe and enzyme type is unclear. Such information would allow soil microbial enzymes to be incorporated in a traits-based framework to improve prediction of ecosystem response to global change. If temperature sensitivity varies for specific soil enzymes, then determining the underlying causes of variation in temperature sensitivity of these enzymes will provide fundamental insights for predicting nutrient dynamics belowground. In this study, we characterized how both microbial taxonomic variation as well as substrate type affects temperature sensitivity. We measured β-glucosidase, leucine aminopeptidase, and phosphatase activities at six temperatures: 4, 11, 25, 35, 45, and 60°C, for seven different soil microbial isolates. To calculate temperature sensitivity, we employed two models, Arrhenius, which predicts an exponential increase in reaction rate with temperature, and Macromolecular Rate Theory (MMRT), which predicts rate to peak and then decline as temperature increases. We found MMRT provided a more accurate fit and allowed for more nuanced interpretation of temperature sensitivity in all of the enzyme × isolate combinations tested. Our results revealed that both the enzyme type and soil isolate type explain variation in parameters associated with temperature sensitivity. Because we found temperature sensitivity to be an inherent and variable property of an enzyme, we argue that it can be incorporated as a microbial functional trait, but only when using the MMRT definition of temperature sensitivity. We show that the Arrhenius metrics of temperature sensitivity are overly sensitive to test conditions, with activation energy changing depending on the temperature range it was calculated within. Thus, we propose the use of the MMRT definition of temperature sensitivity for accurate interpretation of
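
    The contrast between the two temperature-sensitivity models can be made concrete by fitting measured rates across assay temperatures. The sketch below fits only the linearised Arrhenius form ln k = ln A − Ea/(RT) to synthetic rate data; the data are made up, and MMRT's additional heat-capacity term (which lets the rate peak and then decline) is not fitted here.

    ```python
    # Arrhenius fit in linearised form ln k = ln A - Ea/(R*T); data are synthetic.
    import numpy as np

    R = 8.314  # J mol^-1 K^-1
    T = np.array([4.0, 11.0, 25.0, 35.0, 45.0]) + 273.15   # assay temperatures (K)

    rng = np.random.default_rng(3)
    k_true = 1.0e9 * np.exp(-55_000.0 / (R * T))            # toy enzyme rates
    k_obs = k_true * (1 + 0.05 * rng.standard_normal(T.size))

    slope, intercept = np.polyfit(1.0 / T, np.log(k_obs), 1)
    Ea, A = -slope * R, np.exp(intercept)
    print(f"Ea = {Ea/1000:.1f} kJ/mol, A = {A:.2e}")
    # MMRT would add a heat-capacity term, letting ln k curve and peak with T;
    # the Arrhenius Ea fitted here depends on the temperature range used.
    ```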

  12. Sensitivity analysis of TRX-2 lattice parameters with emphasis on epithermal ²³⁸U capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.

    1977-03-01

    The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of ²³⁸U. An energy-dependent sensitivity profile was generated for each of the performance parameters with respect to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.

  13. Sensitivity analysis of pulse pileup model parameter in photon counting detectors

    NASA Astrophysics Data System (ADS)

    Shunhavanich, Picha; Pelc, Norbert J.

    2017-03-01

    Photon counting detectors (PCDs) may provide several benefits over energy-integrating detectors (EIDs), including spectral information for tissue characterization and the elimination of electronic noise. PCDs, however, suffer from pulse pileup, which distorts the detected spectrum and degrades the accuracy of material decomposition. Several analytical models have been proposed to address this problem. The performance of these models is dependent on the assumptions used, including the estimated pulse shape, whose parameter values could differ from the actual physical ones. As the incident flux increases and the corrections become more significant, accurate parameter values become more crucial. In this work, the sensitivity to model parameter accuracy is analyzed for the pileup model of Taguchi et al. The spectra distorted by pileup at different count rates are simulated using either the model or Monte Carlo simulations, and the basis material thicknesses are estimated by minimizing the negative log-likelihood with Poisson or multivariate Gaussian distributions. From the simulation results, we find that the accuracy of the deadtime, the height of the pulse's negative tail, and the timing to the end of the pulse are more important than most other parameters, and that they matter more with increasing count rate. This result can help facilitate further work on parameter calibration.

  14. Global Sensitivity Analysis of OnGuard Models Identifies Key Hubs for Transport Interaction in Stomatal Dynamics

    PubMed Central

    Vialet-Chabrand, Silvere; Griffiths, Howard

    2017-01-01

    The physical requirement for charge to balance across biological membranes means that the transmembrane transport of each ionic species is interrelated, and manipulating solute flux through any one transporter will affect other transporters at the same membrane, often with unforeseen consequences. The OnGuard systems modeling platform has helped to resolve the mechanics of stomatal movements, uncovering previously unexpected behaviors of stomata. To date, however, the manual approach to exploring model parameter space has captured little formal information about the emergent connections between parameters that define the most interesting properties of the system as a whole. Here, we introduce global sensitivity analysis to identify interacting parameters affecting a number of outputs commonly accessed in experiments in Arabidopsis (Arabidopsis thaliana). The analysis highlights synergies between transporters affecting the balance between Ca2+ sequestration and Ca2+ release pathways, notably those associated with internal Ca2+ stores and their turnover. Other, unexpected synergies appear, including with the plasma membrane anion channels and H+-ATPase and with the tonoplast TPK K+ channel. These emergent synergies, and the core hubs of interaction that they define, identify subsets of transporters associated with free cytosolic Ca2+ concentration that represent key targets to enhance plant performance in the future. They also highlight the importance of interactions between the voltage regulation of the plasma membrane and tonoplast in coordinating transport between the different cellular compartments. PMID:28432256

  15. Effects of turbulence on hydraulic heads and parameter sensitivities in preferential groundwater flow layers

    USGS Publications Warehouse

    Shoemaker, W. Barclay; Cunningham, Kevin J.; Kuniansky, Eve L.; Dixon, Joann F.

    2008-01-01

    A conduit flow process (CFP) for the Modular Finite Difference Ground‐Water Flow model, MODFLOW‐2005, has been created by the U.S. Geological Survey. An application of the CFP on a carbonate aquifer in southern Florida is described; this application examines (1) the potential for turbulent groundwater flow and (2) the effects of turbulent flow on hydraulic heads and parameter sensitivities. Turbulent flow components were spatially extensive in preferential groundwater flow layers, with horizontal hydraulic conductivities of about 5,000,000 m d−1, mean void diameters equal to about 3.5 cm, groundwater temperature equal to about 25°C, and critical Reynolds numbers less than or equal to 400. Turbulence either increased or decreased simulated heads from their laminar elevations. Specifically, head differences from laminar elevations ranged from about −18 to +27 cm and were explained by the magnitude of net flow to the finite difference model cell. Turbulence also affected the sensitivities of model parameters. Specifically, the composite‐scaled sensitivities of horizontal hydraulic conductivities decreased by as much as 70% when turbulence was essentially removed. These hydraulic head and sensitivity differences due to turbulent groundwater flow highlight potential errors in models based on the equivalent porous media assumption, which assumes laminar flow in uniformly distributed void spaces.

  16. A sensitivity analysis of cloud properties to CLUBB parameters in the single-column Community Atmosphere Model (SCAM5)

    DOE PAGES

    Guo, Zhun; Wang, Minghuai; Qian, Yun; ...

    2014-08-13

    In this study, we investigate the sensitivity of simulated shallow cumulus and stratocumulus clouds to selected tunable parameters of Cloud Layers Unified by Binormals (CLUBB) in the single-column version of the Community Atmosphere Model version 5 (SCAM5). A quasi-Monte Carlo (QMC) sampling approach is adopted to effectively explore the high-dimensional parameter space, and a generalized linear model is adopted to study the responses of simulated cloud fields to tunable parameters. One stratocumulus and two shallow convection cases are configured at both coarse and fine vertical resolutions in this study. Our results show that most of the variance in simulated cloud fields can be explained by a small number of tunable parameters. The parameters related to the Newtonian and buoyancy-damping terms of the total water flux are found to be the most influential parameters for stratocumulus. For shallow cumulus, the most influential parameters are those related to the skewness of vertical velocity, reflecting the strong coupling between cloud properties and dynamics in this regime. The influential parameters in the stratocumulus case are sensitive to the choice of vertical resolution, while little sensitivity is found for the shallow convection cases, as the eddy mixing length (or dissipation time scale) plays a more important role and depends more strongly on the vertical resolution in stratocumulus than in shallow convection. The influential parameters remain almost unchanged when the number of tunable parameters increases from 16 to 35. This study improves understanding of the CLUBB behavior associated with parameter uncertainties.

  17. Breast tumor oxygenation in response to carbogen intervention assessed simultaneously by three oxygen-sensitive parameters

    NASA Astrophysics Data System (ADS)

    Gu, Yueqing; Bourke, Vincent; Kim, Jae Gwan; Xia, Mengna; Constantinescu, Anca; Mason, Ralph P.; Liu, Hanli

    2003-07-01

    Three oxygen-sensitive parameters (arterial hemoglobin oxygen saturation SaO2, tumor vascular oxygenated hemoglobin concentration [HbO2], and tumor oxygen tension pO2) were measured simultaneously by three different optical techniques (pulse oximeter, near infrared spectroscopy, and FOXY) to evaluate dynamic responses of breast tumors to carbogen (5% CO2 and 95% O2) intervention. All three parameters displayed similar trends in dynamic response to carbogen challenge, but with different response times. These response times were quantified by the time constants of the exponential fitting curves, revealing the immediate and the fastest response from the arterial SaO2, followed by changes in global tumor vascular [HbO2], and delayed responses for pO2. The consistency of the three oxygen-sensitive parameters demonstrated the ability of NIRS to monitor therapeutic interventions for rat breast tumors in-vivo in real time.

  18. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show significant computational advantage over those obtained by DD for some cases.

  19. A Fault Alarm and Diagnosis Method Based on Sensitive Parameters and Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Zhang, Jinjie; Yao, Ziyun; Lv, Zhiquan; Zhu, Qunxiong; Xu, Fengtian; Jiang, Zhinong

    2015-08-01

    The extraction of fault features and the diagnosis of faults in reciprocating compressors are among the most active research topics in the field of reciprocating machinery fault diagnosis. A large number of feature extraction and classification methods have been widely applied in related research, but practical fault alarming and diagnostic accuracy have not been effectively improved. Developing feature extraction and classification methods that meet the requirements of typical fault alarming and automatic diagnosis in practical engineering is therefore an urgent task. The typical mechanical faults of reciprocating compressors are presented in this paper, and data from an existing online monitoring system are used to extract 15 types of fault feature parameters in total. The sensitive connections between faults and the feature parameters are clarified using the distance evaluation technique, and sensitive characteristic parameters of the different faults are obtained. On this basis, a method based on the fault feature parameters and a support vector machine (SVM) is developed and applied to practical fault diagnosis. An improved ability for early fault warning is demonstrated by experiments and practical fault cases, and automatic SVM classification of the fault alarm data achieves better diagnostic accuracy.
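
    The classification stage—training an SVM on the sensitive feature parameters selected by distance evaluation—can be prototyped with scikit-learn. The feature vectors and fault labels below are synthetic placeholders for monitoring data, not the compressor data set used in the paper.

    ```python
    # SVM classification of fault types from sensitive feature parameters; the
    # feature vectors and fault labels below are synthetic placeholders.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    n_per_class, n_features = 100, 6          # e.g. 6 selected sensitive parameters
    centers = rng.uniform(-2, 2, size=(3, n_features))   # normal + two fault types
    X = np.vstack([c + 0.5 * rng.standard_normal((n_per_class, n_features)) for c in centers])
    y = np.repeat(["normal", "valve_fault", "bearing_fault"], n_per_class)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print("test accuracy:", round(clf.score(X_test, y_test), 3))
    ```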

  20. Parameter estimation and sensitivity analysis for a mathematical model with time delays of leukemia

    NASA Astrophysics Data System (ADS)

    Cândea, Doina; Halanay, Andrei; Rǎdulescu, Rodica; Tǎlmaci, Rodica

    2017-01-01

    We consider a system of nonlinear delay differential equations that describes the interaction between three competing cell populations: healthy, leukemic and anti-leukemia T cells involved in Chronic Myeloid Leukemia (CML) under treatment with Imatinib. The aim of this work is to establish which model parameters are the most important in the success or failure of leukemia remission under treatment using a sensitivity analysis of the model parameters. For the most significant parameters of the model which affect the evolution of CML disease during Imatinib treatment we try to estimate the realistic values using some experimental data. For these parameters, steady states are calculated and their stability is analyzed and biologically interpreted.

  1. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
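
    Several of the global methods compared above are implemented in the SALib package. Below is a minimal Sobol'-method sketch on a toy function; the parameter names, ranges and the stand-in model are placeholders (not the cholera or schistosomiasis models), and it assumes SALib's familiar saltelli/sobol interface.

    ```python
    # Sobol' sensitivity indices with SALib for a toy model; parameter names,
    # ranges and the model itself are placeholders for illustration.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["beta", "gamma", "mixing"],          # hypothetical transmission params
        "bounds": [[0.1, 1.0], [0.05, 0.5], [0.0, 1.0]],
    }

    def toy_model(x):
        beta, gamma, mixing = x
        return beta / gamma * (1.0 + 0.1 * mixing)     # stand-in for an epidemic output

    param_values = saltelli.sample(problem, 1024)       # N*(2D+2) samples
    Y = np.apply_along_axis(toy_model, 1, param_values)
    Si = sobol.analyze(problem, Y)
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:8s} S1={s1:.3f}  ST={st:.3f}")
    ```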

  2. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.

  3. Aerobic stabilization of biological sludge characterized by an extremely low decay rate: modeling, identifiability analysis and parameter estimation.

    PubMed

    Martínez-García, C G; Olguín, M T; Fall, C

    2014-08-01

    Aerobic digestion batch tests were run on a sludge and modeled with a model containing only two fractions, the heterotrophic biomass (XH) and its endogenous residue (XP). The objective was to describe the stabilization of the sludge and estimate the endogenous decay parameters. Modeling was performed with Aquasim, based on long-term data of volatile suspended solids and chemical oxygen demand (VSS, COD). Sensitivity analyses were carried out to determine the conditions for unique identifiability of the parameters. Importantly, it was found that the COD/VSS ratio of the endogenous residues (1.06) was significantly lower than that of the active biomass fraction (1.48). The decay rate constant of the studied sludge (low bH, 0.025 d(-1)) was one-tenth that usually observed (0.2 d(-1)), which has two main practical implications: the required digestion time is much longer, and the oxygen uptake rate might be <1.5 mg O₂/g TSS·h (biosolids standards) without there being a significant decline in the biomass. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. The Spaeth/Richman contrast sensitivity test (SPARCS): design, reproducibility and ability to identify patients with glaucoma.

    PubMed

    Richman, Jesse; Zangalli, Camila; Lu, Lan; Wizov, Sheryl S; Spaeth, Eric; Spaeth, George L

    2015-01-01

    (1) To determine the ability of a novel, internet-based contrast sensitivity test titled the Spaeth/Richman Contrast Sensitivity Test (SPARCS) to identify patients with glaucoma. (2) To determine the test-retest reliability of SPARCS. A prospective, cross-sectional study of patients with glaucoma and controls was performed. Subjects were assessed by SPARCS and the Pelli-Robson chart. Reliability of each test was assessed by the intraclass correlation coefficient and the coefficient of repeatability. Sensitivity and specificity for identifying glaucoma was also evaluated. The intraclass correlation coefficient for SPARCS was 0.97 and 0.98 for Pelli-Robson. The coefficient of repeatability for SPARCS was ±6.7% and ±6.4% for Pelli-Robson. SPARCS identified patients with glaucoma with 79% sensitivity and 93% specificity. SPARCS has high test-retest reliability. It is easily accessible via the internet and identifies patients with glaucoma well. NCT01300949. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  5. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often interested only in specific outputs of the model, such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected, although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis, such as Sobol' indices, require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore the number of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, which is a state-of-the-art LSM used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations, yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage
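
    An Elementary Effects screening of this kind perturbs one parameter at a time and ranks parameters by the mean absolute effect (μ*). The bare-bones sketch below uses a toy scalar output with made-up parameter names and ranges as stand-ins for NOAH-MP, and a simplified one-at-a-time design rather than full Morris trajectories.

    ```python
    # Bare-bones elementary-effects (Morris-style) screening; the toy "land
    # surface" function, parameter names and ranges are stand-ins, not NOAH-MP.
    import numpy as np

    rng = np.random.default_rng(5)
    names = ["soil_hyd_cond", "veg_rough_len", "leaf_area_idx", "snow_albedo"]
    lower = np.array([0.1, 0.01, 0.5, 0.4])
    upper = np.array([10.0, 0.2, 6.0, 0.9])
    n_params = len(names)

    def model(x):
        """Toy scalar output standing in for, e.g., mean surface runoff."""
        return x[0] ** 0.5 + 2.0 * x[2] + 0.1 * x[1] * x[2]

    n_base, delta = 30, 0.25          # base points and relative step in unit space
    effects = [[] for _ in range(n_params)]
    for _ in range(n_base):
        u = rng.uniform(0.0, 1.0 - delta, size=n_params)   # base point (unit cube)
        y0 = model(lower + u * (upper - lower))
        for j in rng.permutation(n_params):                 # one-at-a-time moves
            u_j = u.copy()
            u_j[j] += delta
            y1 = model(lower + u_j * (upper - lower))
            effects[j].append((y1 - y0) / delta)

    mu_star = [np.mean(np.abs(e)) for e in effects]
    for name, m in sorted(zip(names, mu_star), key=lambda p: -p[1]):
        print(f"{name:15s} mu* = {m:.3f}")
    ```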

  6. Relative sensitivity of developmental and immune parameters in juvenile versus adult male rats after exposure to di(2-ethylhexyl) phthalate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonk, Elisa C.M.; Verhoef, Aart

    The developing immune system displays a relatively high sensitivity as compared to both general toxicity parameters and to the adult immune system. In this study we have performed such comparisons using di(2-ethylhexyl) phthalate (DEHP) as a model compound. DEHP is the most abundant phthalate in the environment, and perinatal exposure to DEHP has been shown to disrupt male sexual differentiation. In addition, phthalate exposure has been associated with immune dysfunction as evidenced by effects on the expression of allergy. Male Wistar rats were dosed with corn oil or DEHP by gavage from postnatal day (PND) 10–50 or PND 50–90 at doses between 1 and 1000 mg/kg/day. Androgen-dependent organ weights showed effects at lower dose levels in juvenile versus adult animals. Immune parameters affected included TDAR parameters in both age groups, NK activity in juvenile animals and TNF-α production by adherent splenocytes in adult animals. Immune parameters were affected at lower dose levels compared to developmental parameters. Overall, more immune parameters were affected in juvenile animals compared to adult animals, and effects were observed at lower dose levels. The results of this study show a relatively higher sensitivity of juvenile versus adult rats. Furthermore, they illustrate the relative sensitivity of the developing immune system in juvenile animals as compared to general toxicity and developmental parameters. This study therefore provides further argumentation for performing dedicated developmental immune toxicity testing as a default in regulatory toxicology. Highlights: ► In this study we evaluate the relative sensitivities for DEHP-induced effects. ► Results of this study demonstrate the age-dependency of DEHP toxicity. ► Functional immune parameters were more sensitive than structural immune parameters. ► Immune parameters were affected at lower dose levels than developmental parameters. ► Findings demonstrate the susceptibility of

  7. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    NASA Astrophysics Data System (ADS)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems

  8. MODFLOW-2000, the U.S. Geological Survey modular ground-water model; user guide to the observation, sensitivity, and parameter-estimation processes and three post-processing programs

    USGS Publications Warehouse

    Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.

    2000-01-01

    This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity
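
    The Parameter-Estimation Process described above iterates a modified Gauss-Newton update on the weighted least-squares objective. A stripped-down sketch of the plain Gauss-Newton step on a toy two-parameter model is given below; MODFLOW-2000's modified version adds damping, scaling and convergence logic that are not shown, and the "model" here is an arbitrary stand-in.

    ```python
    # Plain Gauss-Newton iteration minimising a weighted least-squares objective
    # sum(w_i * (obs_i - sim_i(p))^2) for a toy two-parameter model.
    import numpy as np

    def simulate(p, x):
        """Toy 'model': a drawdown-like response depending on two parameters."""
        return p[0] * np.log(x) + p[1] * x

    def jacobian(p, x):
        return np.column_stack([np.log(x), x])     # d(sim)/d(p), analytic here

    x = np.linspace(1.0, 10.0, 20)
    p_true = np.array([2.0, 0.3])
    rng = np.random.default_rng(6)
    obs = simulate(p_true, x) + 0.05 * rng.standard_normal(x.size)
    w = np.full(x.size, 1.0 / 0.05**2)              # weights = 1 / error variance

    p = np.array([1.0, 1.0])                        # starting parameter values
    for it in range(10):
        r = obs - simulate(p, x)
        J = jacobian(p, x)
        # Weighted normal equations: (J^T W J) dp = J^T W r
        dp = np.linalg.solve(J.T @ (w[:, None] * J), J.T @ (w * r))
        p = p + dp
        if np.max(np.abs(dp)) < 1e-8:
            break

    r = obs - simulate(p, x)
    print("estimated parameters:", np.round(p, 4), "objective:", round(float(w @ r**2), 2))
    ```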

  9. High-Sensitivity GaN Microchemical Sensors

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah; Yang, Baohua; Liao, Anna; Moon, Jeongsun; Prokopuk, Nicholas

    2009-01-01

    Systematic studies have been performed on the sensitivity of GaN HEMT (high electron mobility transistor) sensors using various gate electrode designs and operational parameters. The results here show that a higher sensitivity can be achieved with a larger W/L ratio (W = gate width, L = gate length) at a given D (D = source-drain distance), and multi-finger gate electrodes offer a higher sensitivity than a one-finger gate electrode. In terms of operating conditions, sensor sensitivity is strongly dependent on transconductance of the sensor. The highest sensitivity can be achieved at the gate voltage where the slope of the transconductance curve is the largest. This work provides critical information about how the gate electrode of a GaN HEMT, which has been identified as the most sensitive among GaN microsensors, needs to be designed, and what operation parameters should be used for high sensitivity detection.

  10. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in the automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by increasing the focus of the optimization on sampling from the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
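
    The modification affects only the variable-selection step of DDS: instead of including each decision variable in the perturbation set with a uniform probability that decays over iterations, inclusion is biased toward the more sensitive variables. The sketch below is schematic; the objective function, bounds and sensitivity weights are placeholders, and bound handling is simplified to clipping rather than DDS's reflection.

    ```python
    # Schematic DDS-style search where the probability of perturbing each decision
    # variable is weighted by a (placeholder) sensitivity score.
    import numpy as np

    rng = np.random.default_rng(7)
    lower, upper = np.zeros(4), np.ones(4)
    sens = np.array([0.5, 0.3, 0.15, 0.05])          # assumed sensitivity weights
    r, max_iter = 0.2, 500                            # DDS neighbourhood size, budget

    def objective(x):                                 # toy calibration objective
        return np.sum((x - np.array([0.3, 0.7, 0.1, 0.9])) ** 2)

    x_best = rng.uniform(lower, upper)
    f_best = objective(x_best)
    for i in range(1, max_iter + 1):
        p_select = max(1.0 - np.log(i) / np.log(max_iter), 0.05)   # DDS schedule
        # Bias inclusion toward sensitive variables instead of uniform selection.
        mask = rng.random(4) < p_select * sens / sens.max()
        if not mask.any():
            mask[rng.choice(4, p=sens / sens.sum())] = True
        x_new = x_best.copy()
        x_new[mask] += r * (upper - lower)[mask] * rng.standard_normal(mask.sum())
        x_new = np.clip(x_new, lower, upper)          # simplified bound handling
        f_new = objective(x_new)
        if f_new < f_best:
            x_best, f_best = x_new, f_new

    print("best objective:", round(f_best, 5), "at", np.round(x_best, 3))
    ```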

  11. Capsaicin Cough Sensitivity and the Association with Clinical Parameters in Bronchiectasis

    PubMed Central

    Lin, Zhi-ya; Tang, Yan; Li, Hui-min; Lin, Zhi-min; Zheng, Jin-ping; Chen, Rong-chang; Zhong, Nan-shan

    2014-01-01

    Background: Cough hypersensitivity is common among respiratory diseases. Objective: To determine associations of capsaicin cough sensitivity and clinical parameters in adults with clinically stable bronchiectasis. Methods: We recruited 135 consecutive adult bronchiectasis patients and 22 healthy subjects. History inquiry, sputum culture, spirometry, chest high-resolution computed tomography (HRCT), Leicester Cough Questionnaire scoring, Bronchiectasis Severity Index (BSI) assessment and capsaicin inhalation challenge were performed. Cough sensitivity was measured as the capsaicin concentration eliciting at least 2 (C2) and 5 coughs (C5). Results: Despite significant overlap between healthy subjects and bronchiectasis patients, both C2 and C5 were significantly lower in the latter group (all P<0.01). Lower levels of C5 were associated with a longer duration of bronchiectasis symptoms, worse HRCT score, higher 24-hour sputum volume, BSI and sputum purulence score, and sputum culture positive for P. aeruginosa. Determinants associated with increased capsaicin cough sensitivity, defined as C5 being 62.5 µmol/L or less, encompassed female gender (OR: 3.25, 95%CI: 1.35–7.83, P<0.01), HRCT total score between 7–12 (OR: 2.57, 95%CI: 1.07–6.173, P = 0.04), BSI between 5–8 (OR: 4.05, 95%CI: 1.48–11.06, P<0.01) and 9 or greater (OR: 4.38, 95%CI: 1.48–12.93, P<0.01). Conclusion: Capsaicin cough sensitivity is heightened in a subgroup of bronchiectasis patients and associated with the disease severity. Gender and disease severity, but not sputum purulence, are independent determinants of heightened capsaicin cough sensitivity. Current testing for cough sensitivity diagnosis may be limited because of overlap with healthy subjects but might provide an objective index for assessment of cough in future clinical trials. PMID:25409316

  12. An efficient framework for optimization and parameter sensitivity analysis in arterial growth and remodeling computations

    PubMed Central

    Sankaran, Sethuraman; Humphrey, Jay D.; Marsden, Alison L.

    2013-01-01

    Computational models for vascular growth and remodeling (G&R) are used to predict the long-term response of vessels to changes in pressure, flow, and other mechanical loading conditions. Accurate predictions of these responses are essential for understanding numerous disease processes. Such models require reliable inputs of numerous parameters, including material properties and growth rates, which are often experimentally derived, and inherently uncertain. While earlier methods have used a brute force approach, systematic uncertainty quantification in G&R models promises to provide much better information. In this work, we introduce an efficient framework for uncertainty quantification and optimal parameter selection, and illustrate it via several examples. First, an adaptive sparse grid stochastic collocation scheme is implemented in an established G&R solver to quantify parameter sensitivities, and near-linear scaling with the number of parameters is demonstrated. This non-intrusive and parallelizable algorithm is compared with standard sampling algorithms such as Monte-Carlo. Second, we determine optimal arterial wall material properties by applying robust optimization. We couple the G&R simulator with an adaptive sparse grid collocation approach and a derivative-free optimization algorithm. We show that an artery can achieve optimal homeostatic conditions over a range of alterations in pressure and flow; robustness of the solution is enforced by including uncertainty in loading conditions in the objective function. We then show that homeostatic intramural and wall shear stress is maintained for a wide range of material properties, though the time it takes to achieve this state varies. We also show that the intramural stress is robust and lies within 5% of its mean value for realistic variability of the material parameters. We observe that prestretch of elastin and collagen are most critical to maintaining homeostasis, while values of the material properties are

  13. A single-index threshold Cox proportional hazard model for identifying a treatment-sensitive subset based on multiple biomarkers.

    PubMed

    He, Ye; Lin, Huazhen; Tu, Dongsheng

    2018-06-04

    In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.

  14. Impact parameter sensitive study of inner-shell atomic processes in the experimental storage ring

    NASA Astrophysics Data System (ADS)

    Gumberidze, A.; Kozhuharov, C.; Zhang, R. T.; Trotsenko, S.; Kozhedub, Y. S.; DuBois, R. D.; Beyer, H. F.; Blumenhagen, K.-H.; Brandau, C.; Bräuning-Demian, A.; Chen, W.; Forstner, O.; Gao, B.; Gassner, T.; Grisenti, R. E.; Hagmann, S.; Hillenbrand, P.-M.; Indelicato, P.; Kumar, A.; Lestinsky, M.; Litvinov, Yu. A.; Petridis, N.; Schury, D.; Spillmann, U.; Trageser, C.; Trassinelli, M.; Tu, X.; Stöhlker, Th.

    2017-10-01

    In this work, we present a pilot experiment in the experimental storage ring (ESR) at GSI devoted to impact parameter sensitive studies of inner shell atomic processes for low-energy (heavy-) ion-atom collisions. The experiment was performed with bare and He-like xenon ions (Xe54+, Xe52+) colliding with neutral xenon gas atoms, resulting in a symmetric collision system. This choice of the projectile charge states was made in order to compare the effect of a filled K-shell with the empty one. The projectile and target X-rays have been measured at different observation angles for all impact parameters as well as for the impact parameter range of ∼35-70 fm.

  15. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    PubMed

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, initially developed in robotics system identification theory, which appear in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, which allows simple sensitivity analysis methods to be used. The sensitivity analysis was applied to gait dynamics and kinematics data from nine subjects using a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered non-influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
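
    A minimal numerical sketch of the underlying idea that the joint moments are linear in the inertial parameters, so per-parameter contributions can be ranked cheaply. The regressor stack below is a random stand-in, and the index is a simplified one, not the paper's exact definition.

```python
import numpy as np

# Hypothetical stack of joint-moment regressor matrices over one gait trial:
# for each time sample t, tau(t) = Y(t) @ phi, with Y linear in the segment
# inertial parameters phi (masses, first moments, moments of inertia).
rng = np.random.default_rng(0)
n_samples, n_moments, n_params = 200, 12, 150
Y = rng.normal(size=(n_samples, n_moments, n_params))   # would come from the model
phi = np.abs(rng.normal(1.0, 0.3, size=n_params))       # nominal inertial parameters

tau = Y @ phi                                            # joint moments over the trial

# Simple sensitivity index: RMS contribution of each parameter to the joint
# moments, normalised by the RMS of the total moment signal.
contrib = Y * phi                                        # per-parameter contribution
rms_per_param = np.sqrt((contrib**2).mean(axis=(0, 1)))
index = rms_per_param / np.sqrt((tau**2).mean())

ranked = np.argsort(index)[::-1]                         # most to least influential
print("least influential parameters (candidates to drop):", ranked[-5:])
print("fraction of parameters below 1% of the total RMS:", np.mean(index < 0.01))
```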

  16. Identifying School Psychologists' Intercultural Sensitivity

    ERIC Educational Resources Information Center

    Puyana, Olivia E.; Edwards, Oliver W.

    2016-01-01

    School psychologists are encouraged to analyze their intercultural sensitivity because they may be subject to personal attitudes and beliefs that pejoratively influence their work with students and clients who are culturally and linguistically diverse (CLD). However, gaps remain in the literature regarding whether school psychologists are prepared…

  17. Practical identifiability analysis of a minimal cardiovascular system model.

    PubMed

    Pironet, Antoine; Docherty, Paul D; Dauby, Pierre C; Chase, J Geoffrey; Desaive, Thomas

    2017-01-17

    Parameters of mathematical models of the cardiovascular system can be used to monitor cardiovascular state, such as total stressed blood volume status, vessel elastance and resistance. To do so, the model parameters have to be estimated from data collected at the patient's bedside. This work considers a seven-parameter model of the cardiovascular system and investigates whether these parameters can be uniquely determined using indices derived from measurements of arterial and venous pressures, and stroke volume. An error vector defined the residuals between the simulated and reference values of the seven clinically available haemodynamic indices. The sensitivity of this error vector to each model parameter was analysed, as well as the collinearity between parameters. To assess practical identifiability of the model parameters, profile-likelihood curves were constructed for each parameter. Four of the seven model parameters were found to be practically identifiable from the selected data. The remaining three parameters were practically non-identifiable. Among these non-identifiable parameters, one could be decreased as much as possible. The other two non-identifiable parameters were inversely correlated, which prevented their precise estimation. This work presented the practical identifiability analysis of a seven-parameter cardiovascular system model, from limited clinical data. The analysis showed that three of the seven parameters were practically non-identifiable, thus limiting the use of the model as a monitoring tool. Slight changes in the time-varying function modeling cardiac contraction and use of larger values for the reference range of venous pressure made the model fully practically identifiable. Copyright © 2017. Published by Elsevier B.V.
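
    The profile-likelihood construction can be sketched on a deliberately non-identifiable toy model (not the seven-parameter CVS model of the paper): fix one parameter on a grid and re-optimise the rest; a flat profile flags practical non-identifiability, here caused by two parameters entering only through their product.

```python
import numpy as np
from scipy.optimize import minimize

# Toy model standing in for the CVS model: three "clinical indices" depend on
# stressed volume Vs, resistance Rs and elastance E, but only through the
# product E*Vs, so Vs and E are individually non-identifiable.
def predicted_indices(params):
    Vs, Rs, E = params
    return np.array([E * Vs, E * Vs / Rs, E * Vs / (Rs + 1.0)])

rng = np.random.default_rng(0)
observed = predicted_indices([1.5, 1.0, 2.0]) * (1 + 0.02 * rng.normal(size=3))

def sse(params):
    return np.sum((predicted_indices(params) - observed) ** 2)

# Profile "likelihood" (here a profile sum of squared errors) for Vs: fix Vs
# on a grid and re-optimise the remaining parameters at each grid point.
for vs_fixed in np.linspace(0.5, 3.0, 11):
    res = minimize(lambda q, v=vs_fixed: sse(np.array([v, q[0], q[1]])),
                   x0=[1.0, 1.0], method="Nelder-Mead")
    print(f"Vs fixed at {vs_fixed:.2f} -> profile SSE = {res.fun:.3e}, "
          f"compensating E = {res.x[1]:.2f}")
```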

  18. Physically-based slope stability modelling and parameter sensitivity: a case study in the Quitite and Papagaio catchments, Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    de Lima Neves Seefelder, Carolina; Mergili, Martin

    2016-04-01

    We use the software tools r.slope.stability and TRIGRS to produce factor of safety and slope failure susceptibility maps for the Quitite and Papagaio catchments, Rio de Janeiro, Brazil. The key objective of the work is to explore the sensitivity of the model outcomes to the geotechnical (r.slope.stability) and geohydraulic (TRIGRS) parameterizations, in order to define suitable parameterization strategies for future slope stability modelling. The two landslide-prone catchments Quitite and Papagaio together cover an area of 4.4 km², extending between 12 and 995 m a.s.l. The study area is dominated by granitic bedrock and soil depths of 1-3 m. Ranges of geotechnical and geohydraulic parameters are derived from literature values. A landslide inventory related to a rainfall event in 1996 (250 mm in 48 hours) is used for model evaluation. We attempt to identify those combinations of effective cohesion and effective internal friction angle yielding the best correspondence with the observed landslide release areas in terms of the area under the ROC curve (AUCROC), and in terms of the fraction of the area affected by the release of landslides. Thereby we test multiple parameter combinations within defined ranges to derive the slope failure susceptibility (fraction of tested parameter combinations yielding a factor of safety smaller than 1). We use the tool r.slope.stability (comparing the infinite slope stability model and an ellipsoid-based sliding surface model) to test and to optimize the geotechnical parameters, and TRIGRS (a coupled hydraulic-infinite slope stability model) to explore the sensitivity of the model results to the geohydraulic parameters. The model performance in terms of AUCROC is insensitive to the variation of the geotechnical parameterization within much of the tested ranges. Assuming fully saturated soils, r.slope.stability produces rather conservative predictions, whereby the results yielded with the sliding surface model are more
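
    For readers unfamiliar with the susceptibility measure used here, the sketch below evaluates a simplified infinite-slope factor of safety over sampled geotechnical parameter combinations and reports the fraction with FS < 1. The parameter ranges and soil constants are placeholders, not the calibrated values of the study.

```python
import numpy as np

# Infinite-slope factor of safety (a simplified form of the model family used
# by TRIGRS and r.slope.stability) for a soil column of depth z on a slope
# beta, with the water table at a fraction m of the soil depth.
def factor_of_safety(c_eff, phi_eff_deg, beta_deg, z=2.0,
                     gamma=19.0, gamma_w=9.81, m=1.0):
    beta, phi = np.radians(beta_deg), np.radians(phi_eff_deg)
    resisting = c_eff + (gamma * z - gamma_w * m * z) * np.cos(beta) ** 2 * np.tan(phi)
    driving = gamma * z * np.sin(beta) * np.cos(beta)
    return resisting / driving

# Slope failure susceptibility as defined in the study: the fraction of tested
# parameter combinations yielding FS < 1 (fully saturated soils, m = 1).
rng = np.random.default_rng(0)
cohesion = rng.uniform(0.0, 10.0, 2000)        # effective cohesion [kPa]
friction = rng.uniform(25.0, 38.0, 2000)       # effective friction angle [deg]

for slope in (20.0, 30.0, 40.0):
    fs = factor_of_safety(cohesion, friction, slope)
    print(f"slope {slope:.0f} deg: susceptibility = {np.mean(fs < 1.0):.2f}")
```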

  19. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  20. Development and validation of a highly sensitive urine-based test to identify patients with colonic adenomatous polyps.

    PubMed

    Wang, Haili; Tso, Victor; Wong, Clarence; Sadowski, Dan; Fedorak, Richard N

    2014-03-20

    Adenomatous polyps are precursors of colorectal cancer; their detection and removal is the goal of colon cancer screening programs. However, fecal-based methods identify patients with adenomatous polyps with low levels of sensitivity. The aim of this study was to develop a highly accurate, prototypic, proof-of-concept, spot urine-based diagnostic test using metabolomic technology to distinguish persons with adenomatous polyps from those without polyps. Prospective urine and stool samples were collected from 876 participants undergoing colonoscopy examination in a colon cancer screening program, from April 2008 to October 2009 at the University of Alberta. The colonoscopy reference standard identified 633 participants with no colonic polyps and 243 with colonic adenomatous polyps. One-dimensional nuclear magnetic resonance spectra of urine metabolites were analyzed to define a diagnostic metabolomic profile for colonic adenomas. A urine metabolomic diagnostic test for colonic adenomatous polyps was established using 67% of the samples (un-blinded training set) and validated using the other 33% of the samples (blinded testing set). The urine metabolomic diagnostic test's specificity and sensitivity were compared with those of fecal-based tests. Using a two-component, orthogonal, partial least-squares model of the metabolomic profile, the un-blinded training set identified patients with colonic adenomatous polyps with 88.9% sensitivity and 50.2% specificity. Validation using the blinded testing set confirmed sensitivity and specificity values of 82.7% and 51.2%, respectively. Sensitivities of fecal-based tests to identify colonic adenomas ranged from 2.5 to 11.9%. We describe a proof-of-concept spot urine-based metabolomic diagnostic test that identifies patients with colonic adenomatous polyps with a greater level of sensitivity (83%) than fecal-based tests.
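
    A rough sketch of the train/validate workflow, assuming synthetic stand-in data for the binned NMR spectra. scikit-learn has no orthogonal PLS-DA, so a plain two-component PLS regression is used as an approximation; the numbers produced are not comparable to the study's results.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for binned 1D-NMR urine spectra: rows are participants,
# columns are spectral bins; y = 1 for adenomatous polyps, 0 for no polyps.
rng = np.random.default_rng(0)
n_polyp, n_clear, n_bins = 243, 633, 120
X = np.vstack([rng.normal(0.0, 1.0, (n_clear, n_bins)),
               rng.normal(0.3, 1.0, (n_polyp, n_bins))])   # weak class separation
y = np.concatenate([np.zeros(n_clear), np.ones(n_polyp)])

# 67% training / 33% blinded testing split, mirroring the study design.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                           stratify=y, random_state=0)

pls = PLSRegression(n_components=2).fit(X_tr, y_tr)
score_te = pls.predict(X_te).ravel()
pred = (score_te > y_tr.mean()).astype(int)   # threshold at the training prevalence

tp = np.sum((pred == 1) & (y_te == 1)); fn = np.sum((pred == 0) & (y_te == 1))
tn = np.sum((pred == 0) & (y_te == 0)); fp = np.sum((pred == 1) & (y_te == 0))
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
```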

  1. Study of parameter degeneracy and hierarchy sensitivity of NOνA in presence of sterile neutrino

    NASA Astrophysics Data System (ADS)

    Ghosh, Monojit; Gupta, Shivani; Matthews, Zachary M.; Sharma, Pankaj; Williams, Anthony G.

    2017-10-01

    The first hint of the neutrino mass hierarchy is believed to come from the long-baseline experiment NOνA. Recent results from NOνA show a mild preference towards the CP phase δ13 = -90° and normal hierarchy. Fortunately this is the favorable area of the parameter space which does not suffer from the hierarchy-δ13 degeneracy and thus NOνA can have good hierarchy sensitivity for this true combination of hierarchy and δ13. Apart from the hierarchy-δ13 degeneracy there is also the octant-δ13 degeneracy. But this does not affect the favorable parameter space of NOνA as this degeneracy can be resolved with a balanced neutrino and antineutrino run. However, if we consider the existence of a light sterile neutrino then there may be additional degeneracies which can spoil the hierarchy sensitivity of NOνA even in the favorable parameter space. In the present work we find that apart from the degeneracies mentioned above, there are additional hierarchy and octant degeneracies that appear with the new phase δ14 in the presence of a light sterile neutrino at the eV scale. In contrast to the hierarchy and octant degeneracies appearing with δ13, the parameter space for the hierarchy-δ14 degeneracy is different in neutrinos and antineutrinos, though the octant-δ14 degeneracy behaves similarly in neutrinos and antineutrinos. We study the effect of these degeneracies on the hierarchy sensitivity of NOνA for the true normal hierarchy.

  2. Sensitivities of Tropical Cyclones to Surface Friction and the Coriolis Parameter in a 2-D Cloud-Resolving Model

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode; Tao, Wei-Kuo; Lau, William K. M. (Technical Monitor)

    2002-01-01

    The sensitivities to surface friction and the Coriolis parameter in tropical cyclogenesis are studied using an axisymmetric version of the Goddard cloud ensemble model. Our experiments demonstrate that tropical cyclogenesis can still occur without surface friction. However, the resulting tropical cyclone has a very unrealistic structure. Surface friction plays an important role in giving tropical cyclones their observed smaller size and diminished intensity. Sensitivity of the cyclogenesis process to surface friction, in terms of kinetic energy growth, has different signs in different phases of the tropical cyclone. Contrary to the notion of Ekman pumping efficiency, which implies a preference for the highest Coriolis parameter in the growth rate if all other parameters are unchanged, our experiments show no such preference.

  3. Parameter Sensitivity Study of the Wall Interference Correction System (WICS)

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Everhart, Joel L.; Iyer, Venkit

    2001-01-01

    An off-line version of the Wall Interference Correction System (WICS) has been implemented for the NASA Langley National Transonic Facility. The correction capability is currently restricted to corrections for solid wall interference in the model pitch plane for Mach numbers less than 0.45, due to a limitation in tunnel calibration data. A study to assess output sensitivity to the aerodynamic parameters of Reynolds number and Mach number was conducted on this code to further ensure quality during the correction process. In addition, this paper includes an investigation into possible corrections for a semispan test technique using a non-metric standoff and an improvement to the standard data rejection algorithm.

  4. Identifying arbitrary parameter zonation using multiple level set functions

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan

    2018-07-01

    In this paper, we extended the analytical level set method [1,2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  5. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.

  6. Sensitivity analysis of helicopter IMC decelerating steep approach and landing performance to navigation system parameters

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model generically representing a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 03g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.

  7. Sensitivity of MRI parameters within intervertebral discs to the severity of adolescent idiopathic scoliosis.

    PubMed

    Huber, Maxime; Gilbert, Guillaume; Roy, Julien; Parent, Stefan; Labelle, Hubert; Périé, Delphine

    2016-11-01

    To measure magnetic resonance imaging (MRI) parameters including relaxation times (T1ρ, T2), magnetization transfer (MT) and diffusion parameters (mean diffusivity [MD], fractional anisotropy [FA]) of intervertebral discs in adolescents with idiopathic scoliosis, and to investigate the sensitivity of these MR parameters to the severity of the spine deformities. Thirteen patients with adolescent idiopathic scoliosis and three control volunteers with no history of spine disease underwent an MRI acquisition at 3T including the mapping of T1ρ, T2, MT, MD, and FA. The apical zone included all discs within the scoliotic curve while the control zone was composed of other discs. The severity was analyzed through low (<32°) versus high (>40°) Cobb angles. One-way analysis of variance (ANOVA) and agglomerative hierarchical clustering (AHC) were performed. Significant differences were found between the apical zone and the control zone for T2 (P = 0.047), and between low and high Cobb angles for T2 (P = 0.014) and MT (P = 0.002). AHC showed two distinct clusters, one with mainly low Cobb angles and one with mainly high Cobb angles, for the MRI parameters measured within the apical zone, with an accuracy of 0.9 and a Matthews correlation coefficient (MCC) of 0.8. Within the control zone, the AHC showed no clear classification (accuracy of 0.6 and MCC of 0.2). We successfully performed an in vivo multiparametric MRI investigation of young patients with adolescent idiopathic scoliosis. The MRI parameters measured within the intervertebral discs were found to be sensitive to intervertebral disc degeneration occurring with scoliosis and to the severity of scoliosis. J. Magn. Reson. Imaging 2016;44:1123-1131. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments.

    PubMed

    Linder, Suzanne K; Kamath, Geetanjali R; Pratt, Gregory F; Saraykar, Smita S; Volk, Robert J

    2015-04-01

    To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a health care decision-making instrument commonly used in clinical settings. We searched the literature using two methods: (1) keyword searching using variations of "Control Preferences Scale" and (2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, and Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Keyword searches in bibliographic databases yielded high average precision (90%) but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45-54%), but precision ranged from 35% to 75% with Scopus being the most precise. Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time, and resources should dictate the combination of which methods and databases are used. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Citation searches are more sensitive than keyword searches to identify studies using specific measurement instruments

    PubMed Central

    Linder, Suzanne K.; Kamath, Geetanjali R.; Pratt, Gregory F.; Saraykar, Smita S.; Volk, Robert J.

    2015-01-01

    Objective To compare the effectiveness of two search methods in identifying studies that used the Control Preferences Scale (CPS), a healthcare decision-making instrument commonly used in clinical settings. Study Design & Setting We searched the literature using two methods: 1) keyword searching using variations of “control preferences scale” and 2) cited reference searching using two seminal CPS publications. We searched three bibliographic databases [PubMed, Scopus, Web of Science (WOS)] and one full-text database (Google Scholar). We report precision and sensitivity as measures of effectiveness. Results Keyword searches in bibliographic databases yielded high average precision (90%), but low average sensitivity (16%). PubMed was the most precise, followed closely by Scopus and WOS. The Google Scholar keyword search had low precision (54%) but provided the highest sensitivity (70%). Cited reference searches in all databases yielded moderate sensitivity (45–54%), but precision ranged from 35–75% with Scopus being the most precise. Conclusion Cited reference searches were more sensitive than keyword searches, making it a more comprehensive strategy to identify all studies that use a particular instrument. Keyword searches provide a quick way of finding some but not all relevant articles. Goals, time and resources should dictate the combination of which methods and databases are used. PMID:25554521

  10. Identifiability, reducibility, and adaptability in allosteric macromolecules.

    PubMed

    Bohner, Gergő; Venkataraman, Gaurav

    2017-05-01

    The ability of macromolecules to transduce stimulus information at one site into conformational changes at a distant site, termed "allostery," is vital for cellular signaling. Here, we propose a link between the sensitivity of allosteric macromolecules to their underlying biophysical parameters, the interrelationships between these parameters, and macromolecular adaptability. We demonstrate that the parameters of a canonical model of the mSlo large-conductance Ca2+-activated K+ (BK) ion channel are non-identifiable with respect to the equilibrium open probability-voltage relationship, a common functional assay. We construct a reduced model with emergent parameters that are identifiable and expressed as combinations of the original mechanistic parameters. These emergent parameters indicate which coordinated changes in mechanistic parameters can leave assay output unchanged. We predict that these coordinated changes are used by allosteric macromolecules to adapt, and we demonstrate how this prediction can be tested experimentally. We show that these predicted parameter compensations are used in the first reported allosteric phenomenon: the Bohr effect, by which hemoglobin adapts to varying pH. © 2017 Bohner and Venkataraman.
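
    The notion of an identifiable emergent parameter built from non-identifiable mechanistic ones can be illustrated with a toy channel model (not the mSlo BK model): two very different mechanistic parameter sets that share the same product produce indistinguishable open probability-voltage curves.

```python
import numpy as np

# Illustrative two-state voltage-dependent channel whose open probability
# depends on its mechanistic parameters L0 and J0 only through their product.
# The individual parameters are then non-identifiable from the P-V curve,
# while the emergent parameter K = L0 * J0 is identifiable.
def p_open(V, L0, J0, z=1.5, Vh=0.0, kT=25.0):
    K = L0 * J0 * np.exp(z * (V - Vh) / kT)
    return K / (1.0 + K)

V = np.linspace(-100, 100, 9)
curve_a = p_open(V, L0=0.02, J0=5.0)     # K = 0.1
curve_b = p_open(V, L0=0.5,  J0=0.2)     # same K = 0.1, very different L0, J0

print("max |difference| between the two P-V curves:",
      np.max(np.abs(curve_a - curve_b)))  # ~0: a compensated parameter change
```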

  11. Identifiability, reducibility, and adaptability in allosteric macromolecules

    PubMed Central

    Bohner, Gergő

    2017-01-01

    The ability of macromolecules to transduce stimulus information at one site into conformational changes at a distant site, termed “allostery,” is vital for cellular signaling. Here, we propose a link between the sensitivity of allosteric macromolecules to their underlying biophysical parameters, the interrelationships between these parameters, and macromolecular adaptability. We demonstrate that the parameters of a canonical model of the mSlo large-conductance Ca2+-activated K+ (BK) ion channel are non-identifiable with respect to the equilibrium open probability-voltage relationship, a common functional assay. We construct a reduced model with emergent parameters that are identifiable and expressed as combinations of the original mechanistic parameters. These emergent parameters indicate which coordinated changes in mechanistic parameters can leave assay output unchanged. We predict that these coordinated changes are used by allosteric macromolecules to adapt, and we demonstrate how this prediction can be tested experimentally. We show that these predicted parameter compensations are used in the first reported allosteric phenomena: the Bohr effect, by which hemoglobin adapts to varying pH. PMID:28416647

  12. Plausibility and parameter sensitivity of micro-finite element-based joint load prediction at the proximal femur.

    PubMed

    Synek, Alexander; Pahr, Dieter H

    2018-06-01

    A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com ). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.

  13. Coupling an EML4-ALK centric interactome with RNA interference identifies sensitizers to ALK inhibitors

    PubMed Central

    Zhang, Guolin; Scarborough, Hannah; Kim, Jihye; Rozhok, Andrii I.; Chen, Y. Ann; Zhang, Xiaohui; Song, Lanxi; Bai, Yun; Fang, Bin; Liu, Richard Z.; Koomen, John; Tan, Aik Choon; Degregori, James; Haura, Eric B.

    2017-01-01

    Patients with lung cancers harboring anaplastic lymphoma kinase (ALK) gene fusions benefit from treatment with ALK kinase inhibitors but acquired resistance inevitably arises. A better understanding of proximal ALK signaling mechanisms may identify sensitizers to ALK inhibitors that disrupt the balance between pro-survival and pro-apoptotic effector signals. Using affinity purification coupled with mass spectrometry in an ALK fusion lung cancer cell line (H3122), we generated an ALK signaling network and investigated signaling activity using tyrosine phosphoproteomics. We identified a network of 464 proteins composed of subnetworks with differential response to ALK inhibitors. A small hairpin RNA screen targeting 407 proteins in this network revealed 64 and 9 proteins whose loss sensitized cells to crizotinib and alectinib, respectively. Among these, knocking down fibroblast growth factor receptor substrate 2 (FRS2) or coiled-coil and C2 domain-containing protein 1A (CC2D1A, both scaffolding proteins, sensitized multiple ALK fusion cell lines to the ALK inhibitors crizotinib and alectinib. Collectively, our data provides a resource that enhances our understanding of signaling and drug resistance networks consequent to ALK fusions, and identifies potential targets to improve the efficacy of ALK inhibitors in patients. PMID:27811184

  14. Sensitivity analysis of respiratory parameter uncertainties: impact of criterion function form and constraints.

    PubMed

    Lutchen, K R

    1990-08-01

    A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications are with four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2-64 Hz. The criterion function form was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. The proper choice of weighting can make all three criterion variables comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably from those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz. This reduces data acquisition requirements from a 16- to a 5.33- to 8-s breath holding period. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
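
    The linearized confidence-region idea can be sketched with a simple weighted complex-impedance fit; the three-element RIC model, noise level and frequency grid below are assumptions for illustration, not the four- or six-element models analysed in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Simple resistance-inertance-compliance (RIC) model of respiratory input
# impedance, used only to illustrate the weighted fit and the linearized
# confidence-region approximation.
def impedance(params, f):
    R, I, C = params
    w = 2 * np.pi * f
    return R + 1j * w * I + 1.0 / (1j * w * C)

f = np.linspace(0.25, 4.0, 16)                      # Hz
truth = np.array([2.0, 0.01, 0.05])                 # assumed R, I, C values
rng = np.random.default_rng(0)
z_meas = impedance(truth, f) + 0.05 * (rng.normal(size=f.size) + 1j * rng.normal(size=f.size))

# Fit real and imaginary parts, each weighted by an assumed noise SD.
sigma = 0.05
def residuals(params):
    dz = (impedance(params, f) - z_meas) / sigma
    return np.concatenate([dz.real, dz.imag])

fit = least_squares(residuals, x0=[1.0, 0.005, 0.1])

# Linearized approximation of the joint confidence region: parameter
# covariance from the Jacobian at the solution, SDs from its diagonal.
J = fit.jac
cov = np.linalg.inv(J.T @ J)
sd = np.sqrt(np.diag(cov))
for name, est, unc in zip(["R", "I", "C"], fit.x, sd):
    print(f"{name}: {est:.4f} +/- {unc:.4f} ({100 * unc / est:.1f}%)")
```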

  15. Identifying arbitrary parameter zonation using multiple level set functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  16. Identifying arbitrary parameter zonation using multiple level set functions

    DOE PAGES

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    2018-03-14

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  17. Sensitivity of drainage efficiency of cranberry fields to edaphic conditions

    NASA Astrophysics Data System (ADS)

    Periard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean; Hallema, Dennis W.

    2014-05-01

    Water management on a cranberry farm requires intelligent irrigation and drainage strategies to sustain strong productivity and minimize environmental impact. For example, to avoid propagation of disease and meet evapotranspiration demand, it is imperative to maintain optimal moisture conditions in the root zone, which depends on an efficient drainage system. However, several drainage problems have been identified in cranberry fields. Most of these drainage problems are due to the presence of a restrictive layer in the soil profile (Gumiere et al., 2014). The objective of this work is to evaluate the effects of a restrictive layer on drainage efficiency by means of a multi-local sensitivity analysis. We tested the sensitivity of the drainage efficiency to different input parameter sets of soil hydraulic properties, geometrical parameters and climatic conditions. The soil water flux dynamics for every input parameter set were simulated with the finite element model Hydrus 1D (Šimůnek et al., 2008). Multi-local sensitivity was calculated with Gâteaux directional derivatives following the procedure described by Cheviron et al. (2010). Results indicate that drainage efficiency is more sensitive to soil hydraulic properties than to geometrical parameters and climatic conditions. Among the geometrical parameters, the depth is more influential than the thickness. The drainage efficiency was very insensitive to the climatic conditions. Understanding the sensitivity of drainage efficiency to soil hydraulic properties, geometrical parameters and climatic conditions is essential for diagnosing drainage problems. However, it becomes important to identify the mechanisms involved in the genesis of anthropogenic cranberry soils in order to identify conditions that may lead to the formation of a restrictive layer. References: Cheviron, B., S.J. Gumiere, Y. Le Bissonnais, R. Moussa and D. Raclot. 2010. Sensitivity analysis of distributed erosion models: Framework. Water Resources Research

  18. Synthetic lethal RNAi screening identifies sensitizing targets for gemcitabine therapy in pancreatic cancer

    PubMed Central

    Azorsa, David O; Gonzales, Irma M; Basu, Gargi D; Choudhary, Ashish; Arora, Shilpi; Bisanz, Kristen M; Kiefer, Jeffrey A; Henderson, Meredith C; Trent, Jeffrey M; Von Hoff, Daniel D; Mousses, Spyro

    2009-01-01

    Background Pancreatic cancer retains a poor prognosis among the gastrointestinal cancers. It affects 230,000 individuals worldwide, has a very high mortality rate, and remains one of the most challenging malignancies to treat successfully. Treatment with gemcitabine, the most widely used chemotherapeutic against pancreatic cancer, is not curative and resistance may occur. Combinations of gemcitabine with other chemotherapeutic drugs or biological agents have resulted in limited improvement. Methods In order to improve gemcitabine response in pancreatic cancer cells, we utilized a synthetic lethal RNAi screen targeting 572 known kinases to identify genes that when silenced would sensitize pancreatic cancer cells to gemcitabine. Results Results from the RNAi screens identified several genes that, when silenced, potentiated the growth inhibitory effects of gemcitabine in pancreatic cancer cells. The greatest potentiation was shown by siRNA targeting checkpoint kinase 1 (CHK1). Validation of the screening results was performed in MIA PaCa-2 and BxPC3 pancreatic cancer cells by examining the dose response of gemcitabine treatment in the presence of either CHK1 or CHK2 siRNA. These results showed a three to ten-fold decrease in the EC50 for CHK1 siRNA-treated cells versus control siRNA-treated cells while treatment with CHK2 siRNA resulted in no change compared to controls. CHK1 was further targeted with specific small molecule inhibitors SB 218078 and PD 407824 in combination with gemcitabine. Results showed that treatment of MIA PaCa-2 cells with either of the CHK1 inhibitors SB 218078 or PD 407824 led to sensitization of the pancreatic cancer cells to gemcitabine. Conclusion These findings demonstrate the effectiveness of synthetic lethal RNAi screening as a tool for identifying sensitizing targets to chemotherapeutic agents. These results also indicate that CHK1 could serve as a putative therapeutic target for sensitizing pancreatic cancer cells to gemcitabine. PMID

  19. Techno-economic sensitivity study of heliostat field parameters for micro-gas turbine CSP

    NASA Astrophysics Data System (ADS)

    Landman, Willem A.; Gauché, Paul; Dinter, Frank; Myburgh, J. T.

    2017-06-01

    Concentrating solar power systems based on micro-gas turbines potentially offer numerous benefits should they become commercially viable. Heliostat fields for such systems have unique requirements in that the number of heliostats and the focal ratios are typically much lower than in conventional central receiver systems. This paper presents a techno-economic sensitivity study of heliostat field parameters for a micro-gas turbine central receiver system. A 100 kWe minitower system is considered for the base case and a one-at-a-time strategy is used to investigate parameter sensitivities. Increasing heliostat focal ratios are found to have significant optical performance benefits due to both a reduction in astigmatic aberrations and a reduction in the number of facet focal lengths required, confirming the hypothesis that smaller heliostats offer a techno-economic advantage. The Fixed Horizontal Axis tracking mechanism is shown to outperform the conventional Azimuth Zenith tracking mechanism in high-density heliostat fields. Although several improvements to heliostat field performance are discussed, the capex fraction of the heliostat field for such a system is shown to be almost half that of a conventional central receiver system, and optimum utilization of the higher capex components, namely the receiver and turbine subsystems, is more rewarding than that of the heliostat field.

  20. A Multi-sensor Approach to Identify Crop Sensitivity Related to Climate Variability in Central India

    NASA Astrophysics Data System (ADS)

    Mondal, P.; DeFries, R. S.; Jain, M.; Robertson, A. W.; Galford, G. L.; Small, C.

    2012-12-01

    Agriculture is a primary source of livelihood for over 70% of India's population, with staple crops (e.g. winter wheat) playing a pivotal role in satisfying an ever-increasing food demand of a growing population. Agricultural yield in India has been reported to be highly correlated with the timing and total amount of monsoon rainfall and/or temperature depending on crop type. With expected change in future climate (temperature and precipitation), significant fluctuations in crop yields are projected for the near future. To date, little work has identified the sensitivity of cropping intensity, or the number of crops planted in a given year, to climate variability. The objective of this study is to shed light on the relative importance of different climate parameters through a statistical analysis of inter-annual variations in cropping intensity at a regional scale, which may help identify adaptive strategies in response to future climate anomalies. Our study focuses on a highly human-modified landscape in central India, and uses a multi-sensor approach to determine the sensitivity of agriculture to climate variability. First, we assembled the 16-day time-series of 250m Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), and applied a spline function-based smoothing algorithm to develop maps of monsoon and winter crops in Central India for a decadal time-span. A hierarchical model involving moderate resolution Landsat (30m) data was used to estimate the heterogeneity of the spectral signature within the MODIS dataset (250m). We then compared the season-specific cropping patterns with spatio-temporal variability in climate parameters derived from the Tropical Rainfall Measuring Mission (TRMM) data. Initial data indicates that the existence of a monsoon crop has moderate to strong correlation with wet season end date (ρ = .522), wet season length (ρ = .522), and the number of rainy days during wet season (ρ = .829). Existence of a winter

  1. Sensitivity analysis of the parameters of an HIV/AIDS model with condom campaign and antiretroviral therapy

    NASA Astrophysics Data System (ADS)

    Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy

    2017-12-01

    In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which a condom campaign and antiretroviral therapy are both important for disease management. We calculate the effective reproduction number using the next generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. A sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with the asymptomatic infective, the progression rate from the asymptomatic infective to the pre-AIDS infective, the transmission rate for contact with the pre-AIDS infective, the ARV therapy rate, the proportion of the susceptible receiving the condom campaign and the proportion of the pre-AIDS receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
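
    A common way to carry out such a sensitivity analysis is the normalized forward sensitivity index, Υ_p = (∂R_e/∂p)(p/R_e). The sketch below applies it to a generic illustrative expression for the effective reproduction number with made-up baseline values; it is not the R_e derived in the paper.

```python
import sympy as sp

# Illustrative effective reproduction number for a simplified HIV-type model
# with a condom campaign (efficacy eps, coverage kappa) and ART (rate alpha).
beta, eps, kappa, alpha, mu, delta = sp.symbols(
    'beta epsilon kappa alpha mu delta', positive=True)
R_e = beta * (1 - eps * kappa) / (mu + delta + alpha)

# Normalized forward sensitivity index: Upsilon_p = (dR_e/dp) * (p / R_e).
# |Upsilon_p| close to 1 marks a highly influential parameter.
baseline = {beta: 0.3, eps: 0.8, kappa: 0.5, alpha: 0.2, mu: 0.02, delta: 0.1}
for p in (beta, eps, kappa, alpha, mu, delta):
    upsilon = sp.simplify(sp.diff(R_e, p) * p / R_e)
    print(p, "->", upsilon, "=", float(upsilon.subs(baseline)))
```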

  2. Modelling the effect of heterogeneity of shedding on the within herd Coxiella burnetii spread and identification of key parameters by sensitivity analysis.

    PubMed

    Courcoul, Aurélie; Monod, Hervé; Nielen, Mirjam; Klinkenberg, Don; Hogerwerf, Lenny; Beaudeau, François; Vergu, Elisabeta

    2011-09-07

    Coxiella burnetii is the bacterium responsible for Q fever, a worldwide zoonosis. Ruminants, especially cattle, are recognized as the most important source of human infections. Although a great heterogeneity between shedder cows has been described, no previous studies have determined which features such as shedding route and duration or the quantity of bacteria shed have the strongest impact on the environmental contamination and thus on the zoonotic risk. Our objective was to identify key parameters whose variation highly influences C. burnetii spread within a dairy cattle herd, especially those related to the heterogeneity of shedding. To compare the impact of epidemiological parameters on different dynamical aspects of C. burnetii infection, we performed a sensitivity analysis on an original stochastic model describing the bacterium spread and representing the individual variability of the shedding duration, routes and intensity as well as herd demography. This sensitivity analysis consisted of a principal component analysis followed by an ANOVA. Our findings show that the most influential parameters are the probability distribution governing the levels of shedding, especially in vaginal mucus and faeces, the characteristics of the bacterium in the environment (i.e. its survival and the fraction of bacteria shed reaching the environment), and some physiological parameters related to the intermittency of shedding (transition probability from a non-shedding infected state to a shedding state) or to the transition from one type of shedder to another one (transition probability from a seronegative shedding state to a seropositive shedding state). Our study is crucial for the understanding of the dynamics of C. burnetii infection and optimization of control measures. Indeed, as control measures should impact the parameters influencing the bacterium spread most, our model can now be used to assess the effectiveness of different control strategies of Q fever within

  3. Sensitivity Analysis of Mechanical Parameters of Different Rock Layers to the Stability of Coal Roadway in Soft Rock Strata

    PubMed Central

    Zhao, Zeng-hui; Wang, Wei-ming; Gao, Xin; Yan, Ji-xing

    2013-01-01

    According to the geological characteristics of the Xinjiang Ili mine in the western area of China, a physical model of interstratified strata composed of soft rock and a hard coal seam was established. Selecting the tunnel position, deformation modulus, and strength parameters of each layer as influencing factors, the sensitivity coefficient of roadway deformation to each parameter was first analyzed based on a Mohr-Coulomb strain-softening model and nonlinear elastic-plastic finite element analysis. The effects of the factors that showed high sensitivity were then discussed further. Finally, a regression model for the relationship between roadway displacements and multiple factors was obtained by equivalent linear regression under multiple factors. The results show that the roadway deformation is highly sensitive to the depth of the coal seam under the floor, which should be considered in the layout of the coal roadway; the deformation modulus and strength of the coal seam and floor have a great influence on the global stability of the tunnel; in contrast, roadway deformation is not sensitive to the mechanical parameters of the soft roof; and roadway deformation under random combinations of multiple factors can be deduced from the regression model. These conclusions provide theoretical guidance for the arrangement and stability maintenance of coal roadways. PMID:24459447

  4. Sensitive kinase assay linked with phosphoproteomics for identifying direct kinase substrates

    PubMed Central

    Xue, Liang; Wang, Wen-Horng; Iliuk, Anton; Hu, Lianghai; Galan, Jacob A.; Yu, Shuai; Hans, Michael; Geahlen, Robert L.; Tao, W. Andy

    2012-01-01

    Our understanding of the molecular control of many disease pathologies requires the identification of direct substrates targeted by specific protein kinases. Here we describe an integrated proteomic strategy, termed kinase assay linked with phosphoproteomics, which combines a sensitive kinase reaction with endogenous kinase-dependent phosphoproteomics to identify direct substrates of protein kinases. The unique in vitro kinase reaction is carried out in a highly efficient manner using a pool of peptides derived directly from cellular kinase substrates and then dephosphorylated as substrate candidates. The resulting newly phosphorylated peptides are then isolated and identified by mass spectrometry. A further comparison of these in vitro phosphorylated peptides with phosphopeptides derived from endogenous proteins isolated from cells in which the kinase is either active or inhibited reveals new candidate protein substrates. The kinase assay linked with phosphoproteomics strategy was applied to identify unique substrates of spleen tyrosine kinase (Syk), a protein-tyrosine kinase with dual properties of an oncogene and a tumor suppressor in distinct cell types. We identified 64 and 23 direct substrates of Syk specific to B cells and breast cancer cells, respectively. Both known and unique substrates, including multiple centrosomal substrates for Syk, were identified, supporting a unique mechanism by which Syk negatively affects cell division through its centrosomal kinase activity. PMID:22451900

  5. Sensitivity of Asteroid Impact Risk to Uncertainty in Asteroid Properties and Entry Parameters

    NASA Astrophysics Data System (ADS)

    Wheeler, Lorien; Mathias, Donovan; Dotson, Jessie L.; NASA Asteroid Threat Assessment Project

    2017-10-01

    A central challenge in assessing the threat posed by asteroids striking Earth is the large amount of uncertainty inherent throughout all aspects of the problem. Many asteroid properties are not well characterized and can range widely from strong, dense, monolithic irons to loosely bound, highly porous rubble piles. Even for an object of known properties, the specific entry velocity, angle, and impact location can swing the potential consequence from no damage to causing millions of casualties. Due to the extreme rarity of large asteroid strikes, there are also large uncertainties in how different types of asteroids will interact with the atmosphere during entry, how readily they may break up or ablate, and how much surface damage will be caused by the resulting airbursts or impacts. In this work, we use our Probabilistic Asteroid Impact Risk (PAIR) model to investigate the sensitivity of asteroid impact damage to uncertainties in key asteroid properties, entry parameters, or modeling assumptions. The PAIR model combines physics-based analytic models of asteroid entry and damage in a probabilistic Monte Carlo framework to assess the risk posed by a wide range of potential impacts. The model samples from uncertainty distributions of asteroid properties and entry parameters to generate millions of specific impact cases, and models the atmospheric entry and damage for each case, including blast overpressure, thermal radiation, tsunami inundation, and global effects. To assess the risk sensitivity, we alternately fix and vary the different input parameters and compare the effect on the resulting range of damage produced. The goal of these studies is to help guide future efforts in asteroid characterization and model refinement by determining which properties most significantly affect the potential risk.
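
    A stripped-down sketch of the Monte Carlo sampling idea (not the PAIR model itself): sample assumed property distributions, compute the impact energy for each case, and compare the spread when one input is held fixed. The distributions, ranges and the energy-only damage proxy are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed illustrative uncertainty distributions for a small impactor.
diameter = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)   # m
density = rng.uniform(1500.0, 3500.0, size=n)                    # kg/m^3
velocity = rng.uniform(11_000.0, 30_000.0, size=n)               # m/s

mass = density * (np.pi / 6.0) * diameter ** 3                   # kg (sphere)
energy_mt = 0.5 * mass * velocity ** 2 / 4.184e15                # megatons TNT

print(f"median impact energy: {np.median(energy_mt):.1f} Mt")
print(f"95th percentile:      {np.percentile(energy_mt, 95):.1f} Mt")

# Simple sensitivity check: fix one input at its median and compare the spread
# of the resulting energy distribution with the fully varied case.
iqr_full = np.subtract(*np.percentile(energy_mt, [75, 25]))
for name in ("diameter", "density", "velocity"):
    inputs = {"diameter": diameter, "density": density, "velocity": velocity}
    inputs[name] = np.full(n, np.median(inputs[name]))
    m = inputs["density"] * (np.pi / 6.0) * inputs["diameter"] ** 3
    e = 0.5 * m * inputs["velocity"] ** 2 / 4.184e15
    iqr_fixed = np.subtract(*np.percentile(e, [75, 25]))
    print(f"holding {name:8s} fixed: energy IQR {iqr_full:.1f} -> {iqr_fixed:.1f} Mt")
```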

  6. Sensitivity and Specificity of Eustachian Tube Function Tests in Adults

    PubMed Central

    Doyle, William J.; Swarts, J. Douglas; Banks, Julianne; Casselbrant, Margaretha L; Mandel, Ellen M; Alper, Cuneyt M.

    2013-01-01

    Objective Determine if Eustachian Tube (ET) function (ETF) tests can identify ears with physician-diagnosed ET dysfunction (ETD) in a mixed population at high sensitivity and specificity and define the inter-relatedness of ETF test parameters. Methods ETF was evaluated using the Forced-Response, Inflation-Deflation, Valsalva and Sniffing tests in 15 control ears of adult subjects after unilateral myringotomy (Group I) and in 23 ears of 19 adult subjects with ventilation tubes inserted for ETD (Group II). Data were analyzed using logistic regression including each parameter independently and then a step-down Discriminant Analysis including all ETF test parameters to predict group assignment. Factor Analysis operating over all parameters was used to explore relatedness. Results The Discriminant Analysis identified 4 ETF test parameters (Valsalva, ET opening pressure, dilatory efficiency and % positive pressure equilibrated) that together correctly assigned ears to Group II at a sensitivity of 95% and a specificity of 83%. Individual parameters representing the efficiency of ET opening during swallowing showed moderately accurate assignments of ears to their respective groups. Three factors captured approximately 98% of the variance among parameters, the first had negative loadings of the ETF structural parameters, the second had positive loadings of the muscle-assisted ET opening parameters and the third had negative loadings of the muscle-assisted ET opening parameters and positive loadings of the structural parameters. Discussion These results show that ETF tests can correctly assign individual ears to physician-diagnosed ETD with high sensitivity and specificity and that ETF test parameters can be grouped into structural-functional categories. PMID:23868429

  7. Sensitivity and specificity of administrative mortality data for identifying prescription opioid–related deaths

    PubMed Central

    Gladstone, Emilie; Smolina, Kate; Morgan, Steven G.; Fernandes, Kimberly A.; Martins, Diana; Gomes, Tara

    2016-01-01

    Background: Comprehensive systems for surveilling prescription opioid–related harms provide clear evidence that deaths from prescription opioids have increased dramatically in the United States. However, these harms are not systematically monitored in Canada. In light of a growing public health crisis, accessible, nationwide data sources to examine prescription opioid–related harms in Canada are needed. We sought to examine the performance of 5 algorithms to identify prescription opioid–related deaths from vital statistics data against data abstracted from the Office of the Chief Coroner of Ontario as a gold standard. Methods: We identified all prescription opioid–related deaths from Ontario coroners’ data that occurred between Jan. 31, 2003, and Dec. 31, 2010. We then used 5 different algorithms to identify prescription opioid–related deaths from vital statistics death data in 2010. We selected the algorithm with the highest sensitivity and a positive predictive value of more than 80% as the optimal algorithm for identifying prescription opioid–related deaths. Results: Four of the 5 algorithms had positive predictive values of more than 80%. The algorithm with the highest sensitivity (75%) in 2010 improved slightly in its predictive performance from 2003 to 2010. Interpretation: In the absence of specific systems for monitoring prescription opioid–related deaths in Canada, readily available national vital statistics data can be used to study prescription opioid–related mortality with considerable accuracy. Despite some limitations, these data may facilitate the implementation of national surveillance and monitoring strategies. PMID:26622006

  8. Sensitivity and specificity of administrative mortality data for identifying prescription opioid-related deaths.

    PubMed

    Gladstone, Emilie; Smolina, Kate; Morgan, Steven G; Fernandes, Kimberly A; Martins, Diana; Gomes, Tara

    2016-03-01

    Comprehensive systems for surveilling prescription opioid-related harms provide clear evidence that deaths from prescription opioids have increased dramatically in the United States. However, these harms are not systematically monitored in Canada. In light of a growing public health crisis, accessible, nationwide data sources to examine prescription opioid-related harms in Canada are needed. We sought to examine the performance of 5 algorithms to identify prescription opioid-related deaths from vital statistics data against data abstracted from the Office of the Chief Coroner of Ontario as a gold standard. We identified all prescription opioid-related deaths from Ontario coroners' data that occurred between Jan. 31, 2003, and Dec. 31, 2010. We then used 5 different algorithms to identify prescription opioid-related deaths from vital statistics death data in 2010. We selected the algorithm with the highest sensitivity and a positive predictive value of more than 80% as the optimal algorithm for identifying prescription opioid-related deaths. Four of the 5 algorithms had positive predictive values of more than 80%. The algorithm with the highest sensitivity (75%) in 2010 improved slightly in its predictive performance from 2003 to 2010. In the absence of specific systems for monitoring prescription opioid-related deaths in Canada, readily available national vital statistics data can be used to study prescription opioid-related mortality with considerable accuracy. Despite some limitations, these data may facilitate the implementation of national surveillance and monitoring strategies. © 2016 Canadian Medical Association or its licensors.

  9. AN OVERVIEW OF THE UNCERTAINTY ANALYSIS, SENSITIVITY ANALYSIS, AND PARAMETER ESTIMATION (UA/SA/PE) API AND HOW TO IMPLEMENT IT

    EPA Science Inventory

    The Application Programming Interface (API) for Uncertainty Analysis, Sensitivity Analysis, and
    Parameter Estimation (UA/SA/PE API) (also known as Calibration, Optimization and Sensitivity and Uncertainty (CUSO)) was developed in a joint effort between several members of both ...

  10. System parameter identification from projection of inverse analysis

    NASA Astrophysics Data System (ADS)

    Liu, K.; Law, S. S.; Zhu, X. Q.

    2017-05-01

    The output of a system due to a change of its parameters is often approximated with the sensitivity matrix from the first-order Taylor series. The system output can be measured in practice, but the perturbation in the system parameters is usually not available. Inverse sensitivity analysis can be adopted to estimate the unknown system parameter perturbation from the difference between the observation output data and corresponding analytical output data calculated from the original system model. The inverse sensitivity analysis is re-visited in this paper with improvements based on Principal Component Analysis of the analytical data calculated from the known system model. The identification equation is projected into a subspace of principal components of the system output, and the sensitivity of the inverse analysis is improved with an iterative model updating procedure. The proposed method is numerically validated with a planar truss structure and dynamic experiments with a seven-storey planar steel frame. Results show that it is robust to measurement noise, and the location and extent of stiffness perturbation can be identified with better accuracy compared with the conventional response sensitivity-based method.
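
    A minimal numerical sketch of the projection idea described above, assuming a sensitivity matrix S (outputs by parameters) and a measured-minus-analytical residual dz are available; the projection basis is taken from S itself here rather than from an ensemble of analytical outputs, so this illustrates the projection step, not the authors' implementation.

      # Project the identification equation dz = S * dtheta onto leading principal
      # components and solve for the parameter perturbation in the reduced space.
      import numpy as np

      def pca_projected_update(S, dz, n_components=5):
          U, _, _ = np.linalg.svd(S, full_matrices=False)
          P = U[:, :n_components]                              # leading principal directions
          dtheta, *_ = np.linalg.lstsq(P.T @ S, P.T @ dz, rcond=None)
          return dtheta                                        # estimated parameter perturbation

      # An outer loop would update the model with dtheta, recompute S and dz, and
      # iterate until the residual stops decreasing (the model-updating step above).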

  11. Identifying group-sensitive physical activities: a differential item functioning analysis of NHANES data.

    PubMed

    Gao, Yong; Zhu, Weimo

    2011-05-01

    The purpose of this study was to identify subgroup-sensitive physical activities (PA) using differential item functioning (DIF) analysis. A sub-unweighted sample of 1857 (men=923 and women=934) from the 2003-2004 National Health and Nutrition Examination Survey PA questionnaire data was used for the analyses. Using the Mantel-Haenszel, the simultaneous item bias test, and the ANOVA DIF methods, 33 specific leisure-time moderate and/or vigorous PA (MVPA) items were analyzed for DIF across race/ethnicity, gender, education, income, and age groups. Many leisure-time MVPA items were identified as large DIF items. When participating in the same amount of leisure-time MVPA, non-Hispanic blacks were more likely to participate in basketball and dance activities than non-Hispanic whites (NHW); NHW were more likely to participate in golf and hiking than non-Hispanic blacks; Hispanics were more likely to participate in dancing, hiking, and soccer than NHW, whereas NHW were more likely to engage in bicycling, golf, swimming, and walking than Hispanics; women were more likely to participate in aerobics, dancing, stretching, and walking than men, whereas men were more likely to engage in basketball, fishing, golf, running, soccer, weightlifting, and hunting than women; educated persons were more likely to participate in jogging and treadmill exercise than less educated persons; persons with higher incomes were more likely to engage in golf than those with lower incomes; and adults (20-59 yr) were more likely to participate in basketball, dancing, jogging, running, and weightlifting than older adults (60+ yr), whereas older adults were more likely to participate in walking and golf than younger adults. DIF methods are able to identify subgroup-sensitive PA and thus provide useful information to help design group-sensitive, targeted interventions for disadvantaged PA subgroups. © 2011 by the American College of Sports Medicine
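
    For readers unfamiliar with the Mantel-Haenszel procedure named above, the following sketch computes the common odds ratio for one dichotomous activity item across strata of matched overall activity level; the data layout and counts are hypothetical, not NHANES values.

      # Mantel-Haenszel common odds ratio for one PA item (hypothetical counts).
      # tables[k] = ((ref_yes, ref_no), (focal_yes, focal_no)) for matched stratum k;
      # values far from 1 across strata suggest differential item functioning.
      def mantel_haenszel_or(tables):
          num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
          den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
          return num / den

      tables = [((30, 70), (45, 55)), ((50, 50), (65, 35)), ((70, 30), (80, 20))]
      print(mantel_haenszel_or(tables))   # < 1 here: the focal group endorses the item more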

  12. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
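
    A minimal sketch of the elementary-effect (Morris) screening described above, using the SALib library on a toy function; the parameter ranges and the stand-in model are placeholders, not the HiLAT configuration.

      # Elementary-effect (Morris) screening sketch with SALib; a toy function stands
      # in for the climate-model evaluations, and the bounds are placeholders.
      import numpy as np
      from SALib.sample.morris import sample as morris_sample
      from SALib.analyze import morris

      problem = {
          "num_vars": 5,
          "names": ["C", "cldfrc_rhminl", "micro_mg_dcs", "ksno", "rsnw_mlt"],
          "bounds": [[0.5, 1.5]] * 5,              # placeholder ranges
      }

      X = morris_sample(problem, N=100, num_levels=4)

      def toy_model(x):                            # stand-in for one model evaluation
          return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[2] * x[3]

      Y = np.array([toy_model(x) for x in X])
      result = morris.analyze(problem, X, Y, num_levels=4)
      # Rank parameters by mu_star (mean absolute elementary effect).
      print(sorted(zip(problem["names"], result["mu_star"]), key=lambda p: -p[1]))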

  13. Gait cycle analysis: parameters sensitive for functional evaluation of peripheral nerve recovery in rat hind limbs.

    PubMed

    Rui, Jing; Runge, M Brett; Spinner, Robert J; Yaszemski, Michael J; Windebank, Anthony J; Wang, Huan

    2014-10-01

    Video-assisted gait kinetics analysis has been a sensitive method to assess rat sciatic nerve function after injury and repair. However, in conduit repair of sciatic nerve defects, previously reported kinematic measurements failed to be a sensitive indicator because of the inferior recovery and inevitable joint contracture. This study aimed to explore the role of physiotherapy in mitigating joint contracture and to seek motion analysis indices that can sensitively reflect motor function. Data were collected from 26 rats that underwent sciatic nerve transection and conduit repair. Regular postoperative physiotherapy was applied. Parameters regarding step length, phase duration, and ankle angle were acquired and analyzed from video recording of gait kinetics preoperatively and at regular postoperative intervals. Stride length ratio (step length of uninjured foot/step length of injured foot), percent swing of the normal paw (percentage of the total stride duration when the uninjured paw is in the air), propulsion angle (toe-off angle subtracted by midstance angle), and clearance angle (ankle angle change from toe off to midswing) decreased postoperatively compared with baseline values. The gradual recovery of these measurements had a strong correlation with the post-nerve repair time course. Ankle joint contracture persisted despite rigorous physiotherapy. Parameters acquired from a 2-dimensional motion analysis system, that is, stride length ratio, percent swing of the normal paw, propulsion angle, and clearance angle, could sensitively reflect nerve function impairment and recovery in the rat sciatic nerve conduit repair model despite the existence of joint contractures.

  14. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling: GEOSTATISTICAL SENSITIVITY ANALYSIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Chen, Xingyuan; Ye, Ming

    Sensitivity analysis is an important tool for quantifying uncertainty in the outputs of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a hierarchical sensitivity analysis method that (1) constructs an uncertainty hierarchy by analyzing the input uncertainty sources, and (2) accounts for the spatial correlation among parameters at each level of the hierarchy using geostatistical tools. The contribution of each uncertainty source at each hierarchy level is measured by sensitivity indices calculated using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally as driven by the dynamic interaction between groundwater and river water at the site. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed parameters.

  15. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
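
    The following sketch shows a Sobol' first- and second-order decomposition with SALib on a placeholder function; VarroaPop itself is not called here, and the input names and bounds are illustrative only.

      # Sobol' index sketch with SALib; a surrogate function stands in for a
      # VarroaPop simulation, and the parameter bounds are illustrative.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
          "bounds": [[1, 5], [4, 16], [0, 1]],
      }

      X = saltelli.sample(problem, 1024, calc_second_order=True)

      def surrogate_colony_size(x):                # illustrative stand-in for VarroaPop
          q, f, tox = x
          return 1000 * q * np.log(f) * (1 - 0.6 * tox) + 50 * q * tox

      Y = np.apply_along_axis(surrogate_colony_size, 1, X)
      Si = sobol.analyze(problem, Y, calc_second_order=True)
      print(dict(zip(problem["names"], Si["S1"])))  # first-order indices
      print(Si["S2"])                               # pairwise interaction indices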

  16. Identifying differentially expressed genes in cancer patients using a non-parameter Ising model.

    PubMed

    Li, Xumeng; Feltus, Frank A; Sun, Xiaoqian; Wang, James Z; Luo, Feng

    2011-10-01

    Identification of genes and pathways involved in diseases and physiological conditions is a major task in systems biology. In this study, we developed a novel non-parameter Ising model to integrate protein-protein interaction network and microarray data for identifying differentially expressed (DE) genes. We also proposed a simulated annealing algorithm to find the optimal configuration of the Ising model. The Ising model was applied to two breast cancer microarray data sets. The results showed that more cancer-related DE sub-networks and genes were identified by the Ising model than those by the Markov random field model. Furthermore, cross-validation experiments showed that DE genes identified by Ising model can improve classification performance compared with DE genes identified by Markov random field model. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
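
    The sketch below illustrates the general recipe of labelling genes on a protein-protein interaction network by minimizing an Ising-style energy with simulated annealing; the energy function and cooling schedule are generic illustrations, not the non-parameter formulation of the paper.

      # Simulated-annealing sketch for an Ising-style gene labelling (+1 = DE, -1 = not DE).
      import numpy as np

      def anneal_labels(adj, evidence, n_steps=20000, t0=2.0, t_end=0.01, seed=0):
          """adj: symmetric 0/1 PPI adjacency matrix with zero diagonal;
          evidence: per-gene differential-expression evidence (e.g. z-scores).
          Implied energy: E(s) = -sum_i evidence_i*s_i - 0.5*sum_ij adj_ij*s_i*s_j."""
          rng = np.random.default_rng(seed)
          spins = rng.choice([-1, 1], size=len(evidence))
          for step in range(n_steps):
              t = t0 * (t_end / t0) ** (step / n_steps)            # geometric cooling
              i = rng.integers(len(spins))
              de = 2 * spins[i] * (evidence[i] + adj[i] @ spins)   # energy change of flipping i
              if de < 0 or rng.random() < np.exp(-de / t):
                  spins[i] *= -1                                   # accept the flip
          return spins   # +1 entries are the putative differentially expressed genes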

  17. The sensitivity of conduit flow models to basic input parameters: there is no need for magma trolls!

    NASA Astrophysics Data System (ADS)

    Thomas, M. E.; Neuberg, J. W.

    2012-04-01

    Many conduit flow models now exist and some of these models are becoming extremely complicated, conducted in three dimensions and incorporating the physics of compressible three phase fluids (magmas), intricate conduit geometries and fragmentation processes, to name but a few examples. These highly specialised models are being used to explain observations of the natural system, and there is a danger that possible explanations may be getting needlessly complex. It is coherent, for instance, to propose the involvement of sub-surface dwelling magma trolls as an explanation for the change in a volcano's eruptive style, but assuming the simplest explanation would prevent such additions, unless they were absolutely necessary. While the understanding of individual, often small scale conduit processes is increasing rapidly, is this level of detail necessary? How sensitive are these models to small changes in the most basic of governing parameters? Can these changes be used to explain observed behaviour? Here we will examine the sensitivity of conduit flow models to changes in the melt viscosity, one of the fundamental inputs to any such model. However, even addressing this elementary issue is not straightforward. There are several viscosity models in existence, how do they differ? Can models that use different viscosity models be realistically compared? Each of these viscosity models is also heavily dependent on the magma composition and/or temperature, and how well are these variables constrained? Magma temperatures and water contents are often assumed as "ball-park" figures, and are very rarely exactly known for the periods of observation the models are attempting to explain, yet they exhibit a strong controlling factor on the melt viscosity. The role of both these variables will be discussed. For example, using one of the available viscosity models a 20 K decrease in temperature of the melt results in a greater than 100% increase in the melt viscosity. With changes of
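
    As a back-of-the-envelope companion to the 20 K example above, the sketch below evaluates a generic Vogel-Fulcher-Tammann (VFT) viscosity law; the coefficients are placeholders, not those of any specific published viscosity model or magma composition.

      # Generic VFT viscosity law log10(eta) = A + B / (T - C); coefficients below are
      # placeholders chosen only to illustrate the temperature sensitivity.
      A, B, C = -4.55, 6000.0, 500.0                 # placeholder coefficients (T in K)

      def log10_viscosity(T_kelvin):
          return A + B / (T_kelvin - C)

      T1, T2 = 1120.0, 1100.0                        # a 20 K cooling of the melt
      ratio = 10 ** (log10_viscosity(T2) - log10_viscosity(T1))
      print(f"viscosity increases by a factor of {ratio:.2f}")
      # Roughly a factor of 2, i.e. a greater-than-100% increase, consistent with the
      # example quoted in the abstract.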

  18. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
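
    A minimal sketch of the one-factor-at-a-time screening step described above (the Sobol' follow-up would use a variance-based estimator instead); the model interface and the stand-in emissions function are assumptions, not the authors' wastewater treatment plant model.

      # One-factor-at-a-time screening: perturb each parameter about a baseline and
      # record its individual effect on a performance indicator.
      def oat_screen(model, baseline, rel_step=0.1):
          """Return the absolute relative change in model output per parameter."""
          y0 = model(baseline)
          effects = {}
          for name, value in baseline.items():
              perturbed = dict(baseline)
              perturbed[name] = value * (1 + rel_step)
              effects[name] = abs(model(perturbed) - y0) / abs(y0)
          return dict(sorted(effects.items(), key=lambda kv: -kv[1]))

      # Hypothetical usage with a stand-in emissions function:
      ghg = lambda p: p["k_nitrif"] ** 2 + 0.3 * p["k_denit"] + 0.01 * p["kla"]
      print(oat_screen(ghg, {"k_nitrif": 1.0, "k_denit": 2.0, "kla": 180.0}))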

  19. Sensitivity of finite helical axis parameters to temporally varying realistic motion utilizing an idealized knee model.

    PubMed

    Johnson, T S; Andriacchi, T P; Erdman, A G

    2004-01-01

    Various uses of the screw or helical axis have previously been reported in the literature in an attempt to quantify the complex displacements and coupled rotations of in vivo human knee kinematics. Multiple methods have been used by previous authors to calculate the axis parameters, and it has been theorized that the mathematical stability and accuracy of the finite helical axis (FHA) is highly dependent on experimental variability and rotation increment spacing between axis calculations. Previous research has not addressed the sensitivity of the FHA for true in vivo data collection, as required for gait laboratory analysis. This research presents a controlled series of experiments simulating continuous data collection as utilized in gait analysis to investigate the sensitivity of the three-dimensional finite screw axis parameters of rotation, displacement, orientation and location with regard to time step increment spacing, utilizing two different methods for spatial location. Six-degree-of-freedom motion parameters are measured for an idealized rigid body knee model that is constrained to a planar motion profile for the purposes of error analysis. The kinematic data are collected using a multicamera optoelectronic system combined with an error minimization algorithm known as the point cluster method. Rotation about the screw axis is seen to be repeatable, accurate and time step increment insensitive. Displacement along the axis is highly dependent on time step increment sizing, with smaller rotation angles between calculations producing more accuracy. Orientation of the axis in space is accurate with only a slight filtering effect noticed during motion reversal. Locating the screw axis by a projected point onto the screw axis from the mid-point of the finite displacement is found to be less sensitive to motion reversal than finding the intersection of the axis with a reference plane. A filtering effect of the spatial location parameters was noted for larger time
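
    For reference, the sketch below extracts finite helical (screw) axis parameters from the relative rigid-body transform between two sampled poses using textbook screw-axis algebra; it is not the point cluster pipeline itself, and it shows where the small-increment ill-conditioning discussed above enters (the division by the sine of the rotation angle).

      # Finite helical axis from a relative rigid-body transform (rotation R, translation t).
      import numpy as np

      def finite_helical_axis(R, t):
          theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))   # rotation about the axis
          # Axis direction from the skew-symmetric part of R; ill-conditioned for very
          # small rotation increments, which is the time-step sensitivity examined above.
          w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
          n = w / (2.0 * np.sin(theta))
          d = float(n @ t)                                                   # translation along the axis
          # A point p on the axis satisfies (I - R) p = t - d*n; the system is singular
          # along the axis, so a least-squares solve returns one valid point.
          p, *_ = np.linalg.lstsq(np.eye(3) - R, t - d * n, rcond=None)
          return theta, n, d, p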

  20. Identifying Cognitive Remediation Change Through Computational Modelling—Effects on Reinforcement Learning in Schizophrenia

    PubMed Central

    Cella, Matteo; Bishara, Anthony J.; Medin, Evelina; Swan, Sarah; Reeder, Clare; Wykes, Til

    2014-01-01

    Objective: Converging research suggests that individuals with schizophrenia show a marked impairment in reinforcement learning, particularly in tasks requiring flexibility and adaptation. The problem has been associated with dopamine reward systems. This study explores, for the first time, the characteristics of this impairment and how it is affected by a behavioral intervention—cognitive remediation. Method: Using computational modelling, 3 reinforcement learning parameters based on the Wisconsin Card Sorting Test (WCST) trial-by-trial performance were estimated: R (reward sensitivity), P (punishment sensitivity), and D (choice consistency). In Study 1 the parameters were compared between a group of individuals with schizophrenia (n = 100) and a healthy control group (n = 50). In Study 2 the effect of cognitive remediation therapy (CRT) on these parameters was assessed in 2 groups of individuals with schizophrenia, one receiving CRT (n = 37) and the other receiving treatment as usual (TAU, n = 34). Results: In Study 1 individuals with schizophrenia showed impairment in the R and P parameters compared with healthy controls. Study 2 demonstrated that sensitivity to negative feedback (P) and reward (R) improved in the CRT group after therapy compared with the TAU group. R and P parameter change correlated with WCST outputs. Improvements in R and P after CRT were associated with working memory gains and reduction of negative symptoms, respectively. Conclusion: Schizophrenia reinforcement learning difficulties negatively influence performance in shift learning tasks. CRT can improve sensitivity to reward and punishment. Identifying parameters that show change may be useful in experimental medicine studies to identify cognitive domains susceptible to improvement. PMID:24214932
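
    The sketch below shows, in generic form, how reward-sensitivity, punishment-sensitivity and consistency parameters can be fitted to trial-by-trial choices by maximum likelihood; the value-update and choice rules here are a simplification for illustration, not the specific WCST model fitted in the paper.

      # Maximum-likelihood fit of (R, P, D) to trial-by-trial choices and outcomes,
      # using a generic value-update rule and a softmax choice rule.
      import numpy as np
      from scipy.optimize import minimize

      def neg_log_lik(params, choices, outcomes, n_options=4):
          R, P, D = params
          V = np.zeros(n_options)
          nll = 0.0
          for choice, outcome in zip(choices, outcomes):
              z = D * V
              z -= z.max()                                   # guard against overflow
              probs = np.exp(z) / np.sum(np.exp(z))          # softmax choice rule
              nll -= np.log(probs[choice] + 1e-12)
              gain = R * outcome if outcome > 0 else P * outcome
              V[choice] += gain - 0.1 * V[choice]            # simple decaying update
          return nll

      # Hypothetical usage on synthetic data:
      rng = np.random.default_rng(1)
      choices = rng.integers(0, 4, size=128)
      outcomes = rng.choice([-1.0, 1.0], size=128)
      fit = minimize(neg_log_lik, x0=[0.5, 0.5, 1.0], args=(choices, outcomes),
                     bounds=[(0, 5), (0, 5), (0.01, 10)], method="L-BFGS-B")
      print(fit.x)   # estimated (R, P, D)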

  1. Origin of the sensitivity in modeling the glide behaviour of dislocations

    DOE PAGES

    Pei, Zongrui; Stocks, George Malcolm

    2018-03-26

    The sensitivity in predicting glide behaviour of dislocations has been a long-standing problem in the framework of the Peierls-Nabarro model. The predictions of both the model itself and the analytic formulas based on it are too sensitive to the input parameters. In order to reveal the origin of this important problem in materials science, a new empirical-parameter-free formulation is proposed in the same framework. Unlike previous formulations, it includes only a limited small set of parameters all of which can be determined by convergence tests. Under special conditions the new formulation is reduced to its classic counterpart. In the light of this formulation, new relationships between Peierls stresses and the input parameters are identified, where the sensitivity is greatly reduced or even removed.

  2. Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.

    2014-01-01

    This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed on October 31, 2012.

  3. Delineating parameter unidentifiabilities in complex models

    NASA Astrophysics Data System (ADS)

    Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis

    2017-03-01

    Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
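
    As a contrast to the multiscale approach above, the following sketch shows the local baseline the abstract refers to: a Fisher-information-style matrix built from finite-difference output sensitivities, whose near-zero eigenvalues flag locally unidentifiable (sloppy) parameter combinations; the toy model has a deliberate redundancy and is unrelated to the NF-κB model.

      # Local-identifiability baseline: approximate J^T J from finite-difference output
      # sensitivities; near-zero eigenvalues indicate unidentifiable directions.
      import numpy as np

      def model(theta, t):
          a, b, c = theta
          return a * b * np.exp(-c * t)          # a and b are not separately identifiable

      def fisher_information(theta, t, eps=1e-6):
          y0 = model(theta, t)
          J = np.empty((t.size, len(theta)))
          for j in range(len(theta)):
              th = theta.copy()
              th[j] += eps
              J[:, j] = (model(th, t) - y0) / eps
          return J.T @ J

      t = np.linspace(0, 5, 50)
      eigvals = np.linalg.eigvalsh(fisher_information(np.array([2.0, 3.0, 0.5]), t))
      print(eigvals)    # smallest eigenvalue is ~0: a sloppy/unidentifiable direction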

  4. Using citizen science data to identify the sensitivity of species to human land use.

    PubMed

    Todd, Brian D; Rose, Jonathan P; Price, Steven J; Dorcas, Michael E

    2016-12-01

    Conservation practitioners must contend with an increasing array of threats that affect biodiversity. Citizen scientists can provide timely and expansive information for addressing these threats across large scales, but their data may contain sampling biases. We used randomization procedures to account for possible sampling biases in opportunistically reported citizen science data to identify species' sensitivities to human land use. We analyzed 21,044 records of 143 native reptile and amphibian species reported to the Carolina Herp Atlas from North Carolina and South Carolina between 1 January 1990 and 12 July 2014. Sensitive species significantly associated with natural landscapes were 3.4 times more likely to be legally protected or treated as of conservation concern by state resource agencies than less sensitive species significantly associated with human-dominated landscapes. Many of the species significantly associated with natural landscapes occurred primarily in habitats that had been nearly eradicated or otherwise altered in the Carolinas, including isolated wetlands, longleaf pine savannas, and Appalachian forests. Rare species with few reports were more likely to be associated with natural landscapes and 3.2 times more likely to be legally protected or treated as of conservation concern than species with at least 20 reported occurrences. Our results suggest that opportunistically reported citizen science data can be used to identify sensitive species and that species currently restricted primarily to natural landscapes are likely at greatest risk of decline from future losses of natural habitat. Our approach demonstrates the usefulness of citizen science data in prioritizing conservation and in helping practitioners address species declines and extinctions at large extents. © 2016 Society for Conservation Biology.
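
    A minimal sketch of the randomization idea described above (assumed data layout, not the authors' exact null model): compare the observed proportion of a species' records on natural land cover with proportions obtained by repeatedly resampling the same number of records from all reported locations, so the atlas's sampling bias is carried into the null distribution.

      # Randomization test for association with natural land cover.
      import numpy as np

      def natural_association_p(species_natural_flags, all_natural_flags,
                                n_permutations=10000, seed=0):
          """Flags are 0/1 per record; all_natural_flags covers every atlas record."""
          rng = np.random.default_rng(seed)
          n = len(species_natural_flags)
          observed = np.mean(species_natural_flags)
          null = np.array([np.mean(rng.choice(all_natural_flags, size=n, replace=False))
                           for _ in range(n_permutations)])
          return observed, np.mean(null >= observed)   # one-sided p-value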

  5. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) couldmore » be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.« less

  6. Large-eddy simulations of surface roughness parameter sensitivity to canopy-structure characteristics

    NASA Astrophysics Data System (ADS)

    Maurer, K. D.; Bohrer, G.; Kenny, W. T.; Ivanov, V. Y.

    2015-04-01

    Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site

  7. Large-eddy simulations of surface roughness parameter sensitivity to canopy-structure characteristics

    DOE PAGES

    Maurer, K. D.; Bohrer, G.; Kenny, W. T.; ...

    2015-04-30

    Surface roughness parameters, namely the roughness length and displacement height, are an integral input used to model surface fluxes. However, most models assume these parameters to be a fixed property of plant functional type and disregard the governing structural heterogeneity and dynamics. In this study, we use large-eddy simulations to explore, in silico, the effects of canopy-structure characteristics on surface roughness parameters. We performed a virtual experiment to test the sensitivity of resolved surface roughness to four axes of canopy structure: (1) leaf area index, (2) the vertical profile of leaf density, (3) canopy height, and (4) canopy gap fraction. We found roughness parameters to be highly variable, but uncovered positive relationships between displacement height and maximum canopy height, aerodynamic canopy height and maximum canopy height and leaf area index, and eddy-penetration depth and gap fraction. We also found negative relationships between aerodynamic canopy height and gap fraction, as well as between eddy-penetration depth and maximum canopy height and leaf area index. We generalized our model results into a virtual "biometric" parameterization that relates roughness length and displacement height to canopy height, leaf area index, and gap fraction. Using a decade of wind and canopy-structure observations in a site in Michigan, we tested the effectiveness of our model-driven biometric parameterization approach in predicting the friction velocity over heterogeneous and disturbed canopies. We compared the accuracy of these predictions with the friction-velocity predictions obtained from the common simple approximation related to canopy height, the values calculated with large-eddy simulations of the explicit canopy structure as measured by airborne and ground-based lidar, two other parameterization approaches that utilize varying canopy-structure inputs, and the annual and decadal means of the surface roughness parameters at the site.

  8. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about the model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or input as the most influential parameters or input and 2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show the coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
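
    One simple way to quantify the ranking coherence discussed above is the Spearman rank correlation between the first-order indices the two methods assign to the same parameters, as sketched below; the index values are placeholders, not SAC-SMA results.

      # Rank agreement between two GSA methods via Spearman correlation of their
      # first-order indices; the numbers below are placeholders for illustration.
      from scipy.stats import spearmanr

      params = ["UZTWM", "UZFWM", "LZTWM", "LZFPM", "LZFSM", "PCTIM"]
      s1_sobol = [0.31, 0.05, 0.22, 0.10, 0.02, 0.01]
      s1_fast  = [0.28, 0.07, 0.25, 0.08, 0.03, 0.02]

      rho, p = spearmanr(s1_sobol, s1_fast)
      print(f"rank agreement rho = {rho:.2f}")   # values near 1 indicate coherent rankings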

  9. Distillation tray structural parameter study: Phase 1

    NASA Technical Reports Server (NTRS)

    Winter, J. Ronald

    1991-01-01

    The purpose here is to identify the structural parameters (plate thickness, liquid level, beam size, number of beams, tray diameter, etc.) that affect the structural integrity of distillation trays in distillation columns. Once the sensitivity of the trays' dynamic response to these parameters has been established, the designer will be able to use this information to prepare more accurate specifications for the construction of new trays. Information is given on both static and dynamic analysis, modal response, and tray failure details.

  10. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo; Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of the cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between the parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, out of six input parameters, four are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.

  11. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is little guidance available for these two steps in environmental modelling though. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
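
    The sketch below illustrates the bootstrap-style convergence check described above: resample the model runs with replacement, recompute a sensitivity measure, and track the width of the resulting interval; a simple squared-correlation measure stands in here for the full Morris, RSA and variance-based estimators used in the study.

      # Bootstrap convergence check for sampling-based sensitivity indices.
      import numpy as np

      def bootstrap_index_widths(X, Y, n_boot=500, seed=0):
          """X: (n_runs, n_params) inputs; Y: (n_runs,) outputs.
          Returns the 95% bootstrap interval width of a simple sensitivity measure."""
          rng = np.random.default_rng(seed)
          n, k = X.shape
          estimates = np.empty((n_boot, k))
          for b in range(n_boot):
              idx = rng.integers(0, n, size=n)
              Xb, Yb = X[idx], Y[idx]
              for j in range(k):
                  estimates[b, j] = np.corrcoef(Xb[:, j], Yb)[0, 1] ** 2
          lo, hi = np.percentile(estimates, [2.5, 97.5], axis=0)
          return hi - lo        # narrow widths suggest converged index values

      # If the widths stop shrinking as the number of runs grows, the index values have
      # converged; ranking and screening typically converge at smaller sample sizes,
      # as the abstract notes.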

  12. Sensitivity of tumor motion simulation accuracy to lung biomechanical modeling approaches and parameters.

    PubMed

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu; Wang, Jing

    2015-11-21

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right, anterior-posterior, and superior-inferior directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation.

  13. Uncertainty Quantification and Global Sensitivity Analysis of Subsurface Flow Parameters to Gravimetric Variations During Pumping Tests in Unconfined Aquifers

    NASA Astrophysics Data System (ADS)

    Maina, Fadji Zaouna; Guadagnini, Alberto

    2018-01-01

    We study the contribution of typically uncertain subsurface flow parameters to gravity changes that can be recorded during pumping tests in unconfined aquifers. We do so in the framework of a Global Sensitivity Analysis and quantify the effects of uncertainty of such parameters on the first four statistical moments of the probability distribution of gravimetric variations induced by the operation of the well. System parameters are grouped into two main categories, respectively, governing groundwater flow in the unsaturated and saturated portions of the domain. We ground our work on the three-dimensional analytical model proposed by Mishra and Neuman (2011), which fully takes into account the richness of the physical process taking place across the unsaturated and saturated zones and storage effects in a finite radius pumping well. The relative influence of model parameter uncertainties on drawdown, moisture content, and gravity changes are quantified through (a) the Sobol' indices, derived from a classical decomposition of variance and (b) recently developed indices quantifying the relative contribution of each uncertain model parameter to the (ensemble) mean, skewness, and kurtosis of the model output. Our results document (i) the importance of the effects of the parameters governing the unsaturated flow dynamics on the mean and variance of local drawdown and gravity changes; (ii) the marked sensitivity (as expressed in terms of the statistical moments analyzed) of gravity changes to the employed water retention curve model parameter, specific yield, and storage, and (iii) the influential role of hydraulic conductivity of the unsaturated and saturated zones to the skewness and kurtosis of gravimetric variation distributions. The observed temporal dynamics of the strength of the relative contribution of system parameters to gravimetric variations suggest that gravity data have a clear potential to provide useful information for estimating the key hydraulic

  14. Histogram analysis derived from apparent diffusion coefficient (ADC) is more sensitive to reflect serological parameters in myositis than conventional ADC analysis.

    PubMed

    Meyer, Hans Jonas; Emmer, Alexander; Kornhuber, Malte; Surov, Alexey

    2018-05-01

    Diffusion-weighted imaging (DWI) has the potential of being able to reflect histopathology architecture. A novel imaging approach, namely histogram analysis, is used to further characterize tissues on MRI. The aim of this study was to correlate histogram parameters derived from apparent diffusion coefficient (ADC) maps with serological parameters in myositis. 16 patients with autoimmune myositis were included in this retrospective study. DWI was obtained on a 1.5 T scanner using b-values of 0 and 1000 s/mm². Histogram analysis was performed as a whole muscle measurement by using a custom-made Matlab-based application. The following ADC histogram parameters were estimated: ADCmean, ADCmax, ADCmin, ADCmedian, ADCmode, and the following percentiles ADCp10, ADCp25, ADCp75, ADCp90, as well as the histogram parameters kurtosis, skewness, and entropy. In all patients, the blood sample was acquired within 3 days of the MRI. The following serological parameters were estimated: alanine aminotransferase, aspartate aminotransferase, creatine kinase, lactate dehydrogenase, C-reactive protein (CRP) and myoglobin. All patients were screened for Jo1-autoantibodies. Kurtosis correlated inversely with CRP (ρ = -0.55, p = 0.03). Furthermore, ADCp10 and ADCp90 values tended to correlate with creatine kinase (ρ = -0.43, p = 0.11 and ρ = -0.42, p = 0.12, respectively). In addition, ADCmean, p10, p25, median, mode, and entropy were different between Jo1-positive and Jo1-negative patients. ADC histogram parameters are sensitive for detection of muscle alterations in myositis patients. Advances in knowledge: This study identified that kurtosis derived from ADC maps is associated with CRP in myositis patients. Furthermore, several ADC histogram parameters are statistically different between Jo1-positive and Jo1-negative patients.
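
    For concreteness, the sketch below computes the whole-muscle histogram parameters listed above from a masked ADC map with numpy/scipy; it mirrors the kind of measurement described but is not the authors' Matlab application.

      # Whole-muscle ADC histogram parameters from a masked ADC map.
      import numpy as np
      from scipy import stats

      def adc_histogram_parameters(adc_map, mask, bins=64):
          vals = adc_map[mask > 0].astype(float)
          counts, edges = np.histogram(vals, bins=bins)
          centers = 0.5 * (edges[:-1] + edges[1:])
          p = counts[counts > 0] / counts.sum()
          return {
              "ADCmean": vals.mean(), "ADCmin": vals.min(), "ADCmax": vals.max(),
              "ADCmedian": np.median(vals),
              "ADCmode": centers[np.argmax(counts)],        # most frequent histogram bin
              "p10": np.percentile(vals, 10), "p25": np.percentile(vals, 25),
              "p75": np.percentile(vals, 75), "p90": np.percentile(vals, 90),
              "skewness": stats.skew(vals), "kurtosis": stats.kurtosis(vals),
              "entropy": float(-(p * np.log2(p)).sum()),    # Shannon entropy in bits
          }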

  15. Identification of sensitive parameters of a tropical forest in Southern Mexico to improve the understanding of C-band radar images

    NASA Astrophysics Data System (ADS)

    Monsivais-Huertero, A.; Jimenez-Escalona, J. C.; Ramos, J.; Zempoaltecatl-Ramirez, E.

    2013-05-01

    validation of the models for heterogeneous forests with a high density of trees, such as Calakmul. This paper presents a methodology, based on a physical model, for identifying the sensitive parameters governing the backscatter of vegetated scenes in the Calakmul Biosphere Reserve.


  16. Detection of Independent Associations of Plasma Lipidomic Parameters with Insulin Sensitivity Indices Using Data Mining Methodology.

    PubMed

    Kopprasch, Steffi; Dheban, Srirangan; Schuhmann, Kai; Xu, Aimin; Schulte, Klaus-Martin; Simeonovic, Charmaine J; Schwarz, Peter E H; Bornstein, Stefan R; Shevchenko, Andrej; Graessler, Juergen

    2016-01-01

    Glucolipotoxicity is a major pathophysiological mechanism in the development of insulin resistance and type 2 diabetes mellitus (T2D). We aimed to detect subtle changes in the circulating lipid profile by shotgun lipidomics analyses and to associate them with four different insulin sensitivity indices. The cross-sectional study comprised 90 men with a broad range of insulin sensitivity including normal glucose tolerance (NGT, n = 33), impaired glucose tolerance (IGT, n = 32) and newly detected T2D (n = 25). Prior to oral glucose challenge plasma was obtained and quantitatively analyzed for 198 lipid molecular species from 13 different lipid classes including triacylglycerols (TAGs), phosphatidylcholine plasmalogen/ether (PC O-s), sphingomyelins (SMs), and lysophosphatidylcholines (LPCs). To identify a lipidomic signature of individual insulin sensitivity we applied three data mining approaches, namely least absolute shrinkage and selection operator (LASSO), Support Vector Regression (SVR) and Random Forests (RF) for the following insulin sensitivity indices: homeostasis model of insulin resistance (HOMA-IR), glucose insulin sensitivity index (GSI), insulin sensitivity index (ISI), and disposition index (DI). The LASSO procedure offers high prediction accuracy and easier interpretability than SVR and RF. After LASSO selection, the plasma lipidome explained 3% (DI) to a maximum of 53% (HOMA-IR) of the variability in the sensitivity indices. Among the lipid species with the highest positive LASSO regression coefficient were TAG 54:2 (HOMA-IR), PC O- 32:0 (GSI), and SM 40:3:1 (ISI). The highest negative regression coefficient was obtained for LPC 22:5 (HOMA-IR), TAG 51:1 (GSI), and TAG 58:6 (ISI). Although a substantial part of lipid molecular species showed a significant correlation with insulin sensitivity indices we were able to identify a limited number of lipid metabolites of particular importance based on the LASSO approach. These few selected lipids with the closest
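
    A minimal sketch of the LASSO selection step with scikit-learn: standardize the lipid features, regress an insulin sensitivity index on them with an L1 penalty, and keep the non-zero coefficients; the data and feature layout are placeholders, not the study's measurements.

      # L1-penalized regression of an insulin sensitivity index on lipid features.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(90, 198))               # 90 subjects x 198 lipid species (synthetic)
      y = X[:, 0] * 0.8 - X[:, 5] * 0.5 + rng.normal(scale=0.5, size=90)   # synthetic index

      Xs = StandardScaler().fit_transform(X)
      model = LassoCV(cv=5, max_iter=5000).fit(Xs, y)
      selected = np.flatnonzero(model.coef_)       # indices of retained lipid features
      print(f"selected {selected.size} lipid features, R^2 = {model.score(Xs, y):.2f}")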

  17. Global sensitivity analysis of DRAINMOD-FOREST, an integrated forest ecosystem model

    Treesearch

    Shiying Tian; Mohamed A. Youssef; Devendra M. Amatya; Eric D. Vance

    2014-01-01

    Global sensitivity analysis is a useful tool to understand process-based ecosystem models by identifying key parameters and processes controlling model predictions. This study reported a comprehensive global sensitivity analysis for DRAINMOD-FOREST, an integrated model for simulating water, carbon (C), and nitrogen (N) cycles and plant growth in lowland forests. The...

  18. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  19. Fine-tuning molecular acoustic models: sensitivity of the predicted attenuation to the Lennard-Jones parameters

    NASA Astrophysics Data System (ADS)

    Petculescu, Andi G.; Lueptow, Richard M.

    2005-01-01

    In a previous paper [Y. Dain and R. M. Lueptow, J. Acoust. Soc. Am. 109, 1955 (2001)], a model of acoustic attenuation due to vibration-translation and vibration-vibration relaxation in multiple polyatomic gas mixtures was developed. In this paper, the model is improved by treating binary molecular collisions via fully pairwise vibrational transition probabilities. The sensitivity of the model to small variations in the Lennard-Jones parameters, collision diameter (σ) and potential depth (ɛ), is investigated for nitrogen-water-methane mixtures. For a N2(98.97%)-H2O(338 ppm)-CH4(1%) test mixture, the transition probabilities and acoustic absorption curves are much more sensitive to σ than they are to ɛ. Additionally, when the 1% methane is replaced by nitrogen, the resulting mixture [N2(99.97%)-H2O(338 ppm)] becomes considerably more sensitive to changes of σ for water. The current model minimizes the underprediction of the acoustic absorption peak magnitudes reported by S. G. Ejakov et al. [J. Acoust. Soc. Am. 113, 1871 (2003)].

  20. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported as well as a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first order calculations of system behavior is presented.
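
    In the spirit of the simple population model mentioned above, the sketch below computes normalized local sensitivity coefficients for a logistic growth model by finite differences; the model and parameter values are illustrative, not taken from the survey.

      # Normalized local sensitivity coefficients of a logistic population model.
      import numpy as np
      from scipy.integrate import solve_ivp

      def population(t_end, r, K, N0):
          sol = solve_ivp(lambda t, N: r * N * (1 - N / K), (0, t_end), [N0])
          return sol.y[0, -1]                       # population at the final time

      base = {"r": 0.3, "K": 1000.0, "N0": 10.0}
      y0 = population(20.0, **base)
      for name, value in base.items():
          pert = dict(base, **{name: value * 1.01})     # +1% perturbation of one parameter
          y1 = population(20.0, **pert)
          s = (y1 - y0) / y0 / 0.01                     # relative change per relative change
          print(f"S_{name} = {s:+.2f}")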

  1. Parameter sensitivity analysis and optimization for a satellite-based evapotranspiration model across multiple sites using Moderate Resolution Imaging Spectroradiometer and flux data

    NASA Astrophysics Data System (ADS)

    Zhang, Kun; Ma, Jinzhu; Zhu, Gaofeng; Ma, Ting; Han, Tuo; Feng, Li Li

    2017-01-01

    Global and regional estimates of daily evapotranspiration are essential to our understanding of the hydrologic cycle and climate change. In this study, we selected the radiation-based Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) model and assessed it at a daily time scale by using 44 flux towers. These towers are distributed across a wide range of ecological systems: croplands, deciduous broadleaf forest, evergreen broadleaf forest, evergreen needleleaf forest, grasslands, mixed forests, savannas, and shrublands. A regional land surface evapotranspiration model with a relatively simple structure, the PT-JPL model largely uses ecophysiologically-based formulation and parameters to relate potential evapotranspiration to actual evapotranspiration. The results using the original model indicate that the model always overestimates evapotranspiration in arid regions. This likely results from the misrepresentation of water limitation and energy partition in the model. By analyzing physiological processes and determining the sensitive parameters, we identified a series of parameter sets that can increase model performance. The model with optimized parameters showed better performance (R2 = 0.2-0.87; Nash-Sutcliffe efficiency (NSE) = 0.1-0.87) at each site than the original model (R2 = 0.19-0.87; NSE = -12.14 to 0.85). The results of the optimization indicated that the parameter β (water control of soil evaporation) was much lower in arid regions than in relatively humid regions. Furthermore, the optimized value of parameter m1 (plant control of canopy transpiration) was mostly between 1 and 1.3, slightly lower than the original value. Also, the optimized parameter Topt correlated well to the actual environmental temperature at each site. We suggest that using optimized parameters with the PT-JPL model could provide an efficient way to improve the model performance.
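
    The sketch below shows the evaluation-and-tuning loop implied above: compute the Nash-Sutcliffe efficiency (NSE) of simulated against observed evapotranspiration and adjust a parameter by minimizing 1 - NSE; a placeholder function stands in for PT-JPL, and the parameter name β follows the abstract but the functional form is an assumption.

      # NSE-based calibration of a single parameter against observed ET.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def nse(sim, obs):
          return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def pt_jpl_stub(beta, forcing):
          # Placeholder for the model's soil-evaporation water constraint (beta).
          return forcing["potential_et"] * np.exp(-beta * forcing["dryness"])

      forcing = {"potential_et": np.linspace(1, 6, 365),
                 "dryness": np.abs(np.sin(np.linspace(0, 6, 365)))}
      obs_et = pt_jpl_stub(0.7, forcing) + np.random.default_rng(2).normal(0, 0.1, 365)

      res = minimize_scalar(lambda b: 1 - nse(pt_jpl_stub(b, forcing), obs_et),
                            bounds=(0.01, 2.0), method="bounded")
      print(f"optimized beta = {res.x:.2f}, NSE = {1 - res.fun:.2f}")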

  2. Three-dimensional optimization and sensitivity analysis of dental implant thread parameters using finite element analysis.

    PubMed

    Geramizadeh, Maryam; Katoozian, Hamidreza; Amid, Reza; Kadkhodazadeh, Mahdi

    2018-04-01

    This study aimed to optimize the thread depth and pitch of a recently designed dental implant to provide uniform stress distribution by means of a response surface optimization method available in finite element (FE) software. The sensitivity of simulation to different mechanical parameters was also evaluated. A three-dimensional model of a tapered dental implant with micro-threads in the upper area and V-shaped threads in the rest of the body was modeled and analyzed using finite element analysis (FEA). An axial load of 100 N was applied to the top of the implants. The model was optimized for thread depth and pitch to determine the optimal stress distribution. In this analysis, micro-threads had 0.25 to 0.3 mm depth and 0.27 to 0.33 mm pitch, and V-shaped threads had 0.405 to 0.495 mm depth and 0.66 to 0.8 mm pitch. The optimized depth and pitch were 0.307 and 0.286 mm for micro-threads and 0.405 and 0.808 mm for V-shaped threads, respectively. In this design, the most effective parameters on stress distribution were the depth and pitch of the micro-threads based on sensitivity analysis results. Based on the results of this study, the optimal implant design has micro-threads with 0.307 and 0.286 mm depth and pitch, respectively, in the upper area and V-shaped threads with 0.405 and 0.808 mm depth and pitch in the rest of the body. These results indicate that micro-thread parameters have a greater effect on stress and strain values.

  3. Identifying cognitive remediation change through computational modelling--effects on reinforcement learning in schizophrenia.

    PubMed

    Cella, Matteo; Bishara, Anthony J; Medin, Evelina; Swan, Sarah; Reeder, Clare; Wykes, Til

    2014-11-01

    Converging research suggests that individuals with schizophrenia show a marked impairment in reinforcement learning, particularly in tasks requiring flexibility and adaptation. The problem has been associated with dopamine reward systems. This study explores, for the first time, the characteristics of this impairment and how it is affected by a behavioral intervention: cognitive remediation. Using computational modelling, three reinforcement learning parameters were estimated from Wisconsin Card Sorting Test (WCST) trial-by-trial performance: R (reward sensitivity), P (punishment sensitivity), and D (choice consistency). In Study 1 the parameters were compared between a group of individuals with schizophrenia (n = 100) and a healthy control group (n = 50). In Study 2 the effect of cognitive remediation therapy (CRT) on these parameters was assessed in 2 groups of individuals with schizophrenia, one receiving CRT (n = 37) and the other receiving treatment as usual (TAU, n = 34). In Study 1 individuals with schizophrenia showed impairment in the R and P parameters compared with healthy controls. Study 2 demonstrated that sensitivity to negative feedback (P) and reward (R) improved in the CRT group after therapy compared with the TAU group. R and P parameter changes correlated with WCST outputs. Improvements in R and P after CRT were associated with working memory gains and reduction of negative symptoms, respectively. Schizophrenia reinforcement learning difficulties negatively influence performance in shift learning tasks. CRT can improve sensitivity to reward and punishment. Identifying parameters that show change may be useful in experimental medicine studies to identify cognitive domains susceptible to improvement.
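
    A generic illustration of the computational-modelling step, not the specific sequential-learning WCST model used in the study: a simple learner with separate reward (R) and punishment (P) sensitivities and a choice-consistency (D) parameter is fitted to trial-by-trial choices and feedback by maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, choices, feedback, n_options=4):
    """Negative log-likelihood of trial-by-trial choices under a simple learner
    with reward sensitivity r, punishment sensitivity p and consistency d.
    Generic illustration only - not the authors' exact WCST model."""
    r, p, d = params
    q = np.zeros(n_options)
    nll = 0.0
    for choice, correct in zip(choices, feedback):
        probs = np.exp(d * q) / np.sum(np.exp(d * q))   # softmax choice rule
        nll -= np.log(probs[choice] + 1e-12)
        delta = (1.0 if correct else 0.0) - q[choice]   # prediction error
        q[choice] += (r if correct else p) * delta      # separate R/P updating
    return nll

def fit(choices, feedback):
    res = minimize(neg_log_lik, x0=[0.3, 0.3, 2.0], args=(choices, feedback),
                   method="L-BFGS-B",
                   bounds=[(0.0, 1.0), (0.0, 1.0), (0.01, 10.0)])
    return res.x  # estimated (R, P, D)
```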

  4. Sensitivity of Tumor Motion Simulation Accuracy to Lung Biomechanical Modeling Approaches and Parameters

    PubMed Central

    Tehrani, Joubin Nasehi; Yang, Yin; Werner, Rene; Lu, Wei; Low, Daniel; Guo, Xiaohu

    2015-01-01

    Finite element analysis (FEA)-based biomechanical modeling can be used to predict lung respiratory motion. In this technique, elastic models and biomechanical parameters are two important factors that determine modeling accuracy. We systematically evaluated the effects of lung and lung tumor biomechanical modeling approaches and related parameters to improve the accuracy of motion simulation of lung tumor center of mass (TCM) displacements. Experiments were conducted with four-dimensional computed tomography (4D-CT). A Quasi-Newton FEA was performed to simulate lung and related tumor displacements between end-expiration (phase 50%) and other respiration phases (0%, 10%, 20%, 30%, and 40%). Both linear isotropic and non-linear hyperelastic materials, including the Neo-Hookean compressible and uncoupled Mooney-Rivlin models, were used to create a finite element model (FEM) of lung and tumors. Lung surface displacement vector fields (SDVFs) were obtained by registering the 50% phase CT to other respiration phases, using the non-rigid demons registration algorithm. The obtained SDVFs were used as lung surface displacement boundary conditions in FEM. The sensitivity of TCM displacement to lung and tumor biomechanical parameters was assessed in eight patients for all three models. Patient-specific optimal parameters were estimated by minimizing the TCM motion simulation errors between phase 50% and phase 0%. The uncoupled Mooney-Rivlin material model showed the highest TCM motion simulation accuracy. The average TCM motion simulation absolute errors for the Mooney-Rivlin material model along left-right (LR), anterior-posterior (AP), and superior-inferior (SI) directions were 0.80 mm, 0.86 mm, and 1.51 mm, respectively. The proposed strategy provides a reliable method to estimate patient-specific biomechanical parameters in FEM for lung tumor motion simulation. PMID:26531324

  5. Sensitivity and specificity of eustachian tube function tests in adults.

    PubMed

    Doyle, William J; Swarts, J Douglas; Banks, Julianne; Casselbrant, Margaretha L; Mandel, Ellen M; Alper, Cuneyt M

    2013-07-01

    The study demonstrates the utility of eustachian tube (ET) function (ETF) test results for accurately assigning ears to disease state. To determine if ETF tests can identify ears with physician-diagnosed ET dysfunction (ETD) in a mixed population at high sensitivity and specificity and to define the interrelatedness of ETF test parameters. Through use of the forced-response, inflation-deflation, Valsalva, and sniffing tests, ETF was evaluated in 15 control ears of adult subjects after unilateral myringotomy (group 1) and in 23 ears of 19 adult subjects with ventilation tubes inserted for ETD (group 2). Data were analyzed using logistic regression including each parameter independently and then a step-down discriminant analysis including all ETF test parameters to predict group assignment. Factor analysis operating over all parameters was used to explore relatedness. ETF testing. ETF parameters for the forced response, inflation-deflation, Valsalva, and sniffing tests measured in 15 control ears of adult subjects after unilateral myringotomy (group 1) and in 23 ears of 19 adult subjects with ventilation tubes inserted for ETD (group 2). The discriminant analysis identified 4 ETF test parameters (Valsalva, ET opening pressure, dilatory efficiency, and percentage of positive pressure equilibrated) that together correctly assigned ears to group 2 at a sensitivity of 95% and a specificity of 83%. Individual parameters representing the efficiency of ET opening during swallowing showed moderately accurate assignments of ears to their respective groups. Three factors captured approximately 98% of the variance among parameters: the first had negative loadings of the ETF structural parameters; the second had positive loadings of the muscle-assisted ET opening parameters; and the third had negative loadings of the muscle-assisted ET opening parameters and positive loadings of the structural parameters. These results show that ETF tests can correctly assign individual ears to

  6. Corneal Sensitivity in Tear Dysfunction and its Correlation with Clinical Parameters and Blink Rate

    PubMed Central

    Rahman, Effie Z.; Lam, Peter K.; Chu, Chia-Kai; Moore, Quianta; Pflugfelder, Stephen C.

    2015-01-01

    Purpose: To compare corneal sensitivity in tear dysfunction due to a variety of causes using contact and non-contact esthesiometers and to evaluate correlations between corneal sensitivity, blink rate and clinical parameters. Design: Comparative observational case series. Methods: Ten normal and 33 subjects with tear dysfunction [meibomian gland disease (n = 11), aqueous tear deficiency (n = 10) - without (n = 7) and with (n = 3) Sjögren syndrome (SS) - and conjunctivochalasis (n = 12)] were evaluated. Corneal sensitivity was measured with Cochet-Bonnet and air jet esthesiometers and blink rate by electromyography. Eye irritation symptoms, tear meniscus height, tear break-up time (TBUT), and corneal and conjunctival dye staining were measured. Between-group means were compared and correlations calculated. Results: Compared with control (Cochet-Bonnet 5.45 mm, air esthesiometer 3.62 mg), mean sensory thresholds were significantly higher in aqueous tear deficiency using either Cochet-Bonnet (3.6 mm; P = 0.003) or air (11.7 mg; P = 0.046) esthesiometers, but were not significantly different in the other groups. Reduced corneal sensitivity significantly correlated with more rapid TBUT and blink rate, and greater irritation and ocular surface dye staining with one or both esthesiometers. Mean blink rates were significantly higher in both aqueous tear deficiency and conjunctivochalasis compared with control. Among all subjects, blink rate positively correlated with ocular surface staining and irritation and inversely correlated with TBUT. Conclusion: Among conditions causing tear dysfunction, reduced corneal sensitivity is associated with greater irritation, tear instability, ocular surface disease and blink rate. Rapid blinking is associated with worse ocular surface disease and reduced tear stability. PMID:26255576

  7. Modeling of 2D diffusion processes based on microscopy data: parameter estimation and practical identifiability analysis.

    PubMed

    Hock, Sabrina; Hasenauer, Jan; Theis, Fabian J

    2013-01-01

    Diffusion is a key component of many biological processes such as chemotaxis, developmental differentiation and tissue morphogenesis. Recently, it has become possible to assess the spatial gradients caused by diffusion in vitro and in vivo using microscopy-based imaging techniques. The resulting time series of two-dimensional, high-resolution images, in combination with mechanistic models, enable the quantitative analysis of the underlying mechanisms. However, such a model-based analysis is still challenging due to measurement noise and sparse observations, which result in uncertainties of the model parameters. We introduce a likelihood function for image-based measurements with log-normally distributed noise. Based on this likelihood function we formulate the maximum likelihood estimation problem, which is solved using PDE-constrained optimization methods. To assess the uncertainty and practical identifiability of the parameters we introduce profile likelihoods for diffusion processes. As proof of concept, we model certain aspects of the guidance of dendritic cells towards lymphatic vessels, an example of haptotaxis. Using a realistic set of artificial measurement data, we estimate the five kinetic parameters of this model and compute profile likelihoods. Our novel approach for the estimation of model parameters from image data, as well as the proposed identifiability analysis approach, is widely applicable to diffusion processes. The profile likelihood based method provides more rigorous uncertainty bounds than local approximation methods.
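
    A small sketch of the measurement model described above, assuming flattened, strictly positive image arrays: the negative log-likelihood under log-normally distributed noise that a PDE-constrained optimizer would minimize, plus a thin wrapper indicating how a profile likelihood is assembled (function and argument names are illustrative).

```python
import numpy as np

def negative_log_likelihood(observed, predicted, sigma):
    """Negative log-likelihood of image intensities under multiplicative
    (log-normally distributed) measurement noise: log(y) ~ N(log(m), sigma^2)."""
    y = observed.ravel()
    m = predicted.ravel()
    resid = np.log(y) - np.log(m)
    return np.sum(np.log(y * sigma * np.sqrt(2.0 * np.pi))
                  + resid**2 / (2.0 * sigma**2))

def profile_likelihood(nll_given_fixed, grid):
    """Profile one parameter: for each fixed value on `grid`, `nll_given_fixed`
    should re-optimize the remaining parameters and return the minimal NLL."""
    return np.array([nll_given_fixed(value) for value in grid])
```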

  8. Pre-study feasibility and identifying sensitivity analyses for protocol pre-specification in comparative effectiveness research.

    PubMed

    Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert

    2014-05-01

    The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may impact on the results; however, minimal if any sensitivity analyses are typically conducted, and those performed are post hoc. We propose pre-study steps for CER feasibility assessment and to identify sensitivity analyses that might be most important to pre-specify to help ensure that CER produces valid interpretable results.

  9. Comprehensive Genomic Profiling Identifies Frequent Drug-Sensitive EGFR Exon 19 Deletions in NSCLC not Identified by Prior Molecular Testing.

    PubMed

    Schrock, Alexa B; Frampton, Garrett M; Herndon, Dana; Greenbowe, Joel R; Wang, Kai; Lipson, Doron; Yelensky, Roman; Chalmers, Zachary R; Chmielecki, Juliann; Elvin, Julia A; Wollner, Mira; Dvir, Addie; Soussan-Gutman, Lior; Bordoni, Rodolfo; Peled, Nir; Braiteh, Fadi; Raez, Luis; Erlich, Rachel; Ou, Sai-Hong Ignatius; Mohamed, Mohamed; Ross, Jeffrey S; Stephens, Philip J; Ali, Siraj M; Miller, Vincent A

    2016-07-01

    Reliable detection of drug-sensitive activating EGFR mutations is critical in the care of advanced non-small cell lung cancer (NSCLC), but such testing is commonly performed using a wide variety of platforms, many of which lack rigorous analytic validation. A large pool of NSCLC cases was assayed with well-validated, hybrid capture-based comprehensive genomic profiling (CGP) at the request of the individual treating physicians in the course of clinical care for the purpose of making therapy decisions. From these, 400 cases harboring EGFR exon 19 deletions (Δex19) were identified, and available clinical history was reviewed. Pathology reports were available for 250 consecutive cases with classical EGFR Δex19 (amino acids 743-754) and were reviewed to assess previous non-hybrid capture-based EGFR testing. Twelve of 71 (17%) cases with EGFR testing results available were negative by previous testing, including 8 of 46 (17%) cases for which the same biopsy was analyzed. Independently, five of six (83%) cases harboring C-helical EGFR Δex19 were previously negative. In a subset of these patients with available clinical outcome information, robust benefit from treatment with EGFR inhibitors was observed. CGP identifies drug-sensitive EGFR Δex19 in NSCLC cases that have undergone prior EGFR testing and returned negative results. Given the proven benefit in progression-free survival conferred by EGFR tyrosine kinase inhibitors in patients with these alterations, CGP should be considered in the initial presentation of advanced NSCLC and when previous testing for EGFR mutations or other driver alterations is negative. Clin Cancer Res; 22(13); 3281-5. ©2016 AACR. ©2016 American Association for Cancer Research.

  10. Assimilation of seasonal chlorophyll and nutrient data into an adjoint three-dimensional ocean carbon cycle model: Sensitivity analysis and ecosystem parameter optimization

    NASA Astrophysics Data System (ADS)

    Tjiputra, Jerry F.; Polzin, Dierk; Winguth, Arne M. E.

    2007-03-01

    An adjoint method is applied to a three-dimensional global ocean biogeochemical cycle model to optimize the ecosystem parameters on the basis of SeaWiFS surface chlorophyll observations. We showed with identical twin experiments that the model-simulated chlorophyll concentration is sensitive to perturbation of the phytoplankton and zooplankton exudation, herbivore egestion as fecal pellets, zooplankton grazing, and assimilation efficiency parameters. The assimilation of SeaWiFS chlorophyll data significantly improved the prediction of chlorophyll concentration, especially in the high-latitude regions. Experiments that considered regional variations of parameters yielded a high seasonal variance of ecosystem parameters in the high latitudes, but a low variance in the tropical regions. These experiments indicate that the adjoint model is, despite the many uncertainties, generally capable of optimizing sensitive parameters and carbon fluxes in the euphotic zone. The best-fit regional parameters predict a global net primary production of 36 Pg C yr-1, which lies within the range suggested by Antoine et al. (1996). Additional constraints from World Ocean Atlas nutrient data further reduced the model-data misfit, showing that assimilation with extensive data sets is necessary.

  11. Kinematic sensitivity of robot manipulators

    NASA Technical Reports Server (NTRS)

    Vuskovic, Marko I.

    1989-01-01

    Kinematic sensitivity vectors and matrices for open-loop, n degrees-of-freedom manipulators are derived. First-order sensitivity vectors are defined as partial derivatives of the manipulator's position and orientation with respect to its geometrical parameters. The four-parameter kinematic model is considered, as well as the five-parameter model in the case of nominally parallel joint axes. Sensitivity vectors are expressed in terms of the coordinate axes of the manipulator frames. Second-order sensitivity vectors, the partial derivatives of the first-order sensitivity vectors, are also considered. It is shown that second-order sensitivity vectors can be expressed as vector products of the first-order sensitivity vectors.
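
    A toy numerical counterpart of the first-order sensitivity vectors described above, using a planar two-link arm (an assumption for illustration; the paper treats general n-DOF manipulators with four- and five-parameter models): partial derivatives of the end-effector position with respect to the link-length parameters, computed by central differences.

```python
import numpy as np

def forward_kinematics(lengths, joints):
    """End-effector position of a planar 2-link arm (toy example)."""
    l1, l2 = lengths
    t1, t2 = joints
    x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
    y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)
    return np.array([x, y])

def sensitivity_matrix(lengths, joints, h=1e-6):
    """First-order sensitivity vectors d(position)/d(geometric parameter),
    stacked column-wise, computed by central finite differences."""
    lengths = np.asarray(lengths, dtype=float)
    cols = []
    for i in range(lengths.size):
        dp = np.zeros_like(lengths)
        dp[i] = h
        cols.append((forward_kinematics(lengths + dp, joints)
                     - forward_kinematics(lengths - dp, joints)) / (2 * h))
    return np.column_stack(cols)

print(sensitivity_matrix([1.0, 0.5], [np.pi / 4, np.pi / 6]))
```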

  12. Measurement Sensitivity Of Liquid Droplet Parameters Using Optical Fibers

    NASA Astrophysics Data System (ADS)

    Das, Alok K.; Mandal, Anup K.

    1990-02-01

    A new clad probing technique is used to measure the size, number, refractive index and viscosity of liquid droplets sprayed from a pressure nozzle onto an uncoated core-clad fiber. The probe monitors the clad-mode power loss within the leaky-ray zone, represented as a three-region fiber. The liquid droplets measured are glycerine, commercial-grade turpentine, linseed oil and some oil mixtures. The measurement sensitivity depends on the probing conditions and clad diameter, as observed experimentally and verified analytically. Maximum sensitivity is obtained when the tapered probe-fiber diameter equals the clad thickness. A slowly tapered probe-fiber and a small end angle, as well as separation of the sensor-fiber and the probe-fiber, further improve the sensitivity. Under the best probing conditions, for 90% glycerine droplets of about 50 micron diameter and a 50/125 micron sensor fiber with a clad refractive index of 1.465 and 0.2 NA, the measured sensitivity per drop is 0.015 and 0.006 dB for (10-20) and (100-200) droplets, respectively. Sensitivities for different systems are shown. The sensitivity is optimized by choosing the proper fiber for known liquids.

  13. A novel diagnostic protocol to identify patients suitable for discharge after a single high-sensitivity troponin

    PubMed Central

    Carlton, Edward W; Cullen, Louise; Than, Martin; Gamble, James; Khattab, Ahmed; Greaves, Kim

    2015-01-01

    Objective: To establish whether a novel accelerated diagnostic protocol (ADP) for suspected acute coronary syndrome (ACS) could successfully identify low-risk patients suitable for discharge after a single high-sensitivity troponin T (hs-cTnT) taken at presentation to the emergency department. We also compared the diagnostic accuracy of this ADP with strategies using initial undetectable hs-cTnT. Methods: This prospective observational study evaluated the ability of the Triage Rule-out Using high-Sensitivity Troponin (TRUST) ADP to identify low-risk patients with suspected ACS. The ADP incorporated a single presentation hs-cTnT of <14 ng/L, a non-ischaemic ECG and a modified Goldman risk score. Diagnostic performance of the ADP was compared with the detection limit cut-offs of hs-cTnT (<5 ng/L and <3 ng/L). The primary end point was fatal/non-fatal acute myocardial infarction (AMI) within 30 days. Results: 960 participants were recruited, mean age 58.0 years, 80 (8.3%) had an AMI. The TRUST ADP classified 382 (39.8%) as low-risk with a sensitivity for identifying AMI of 98.8% (95% CI 92.5% to 99.9%). hs-cTnT detection limits (<5 ng/L and <3 ng/L) had a sensitivity of 100% (94.3 to 100) and 100% (94.4 to 100), respectively. The TRUST ADP identified more patients suitable for early discharge at 39.8% vs 29.3% (<5 ng/L) and 7.9% (<3 ng/L) (p<0.001) with a lower false-positive rate for AMI detection; specificity 43.3% (95% CI 42.7% to 43.4%) vs 32.0% (95% CI 31.5% to 32.0%) and 8.6% (95% CI 8.1% to 8.6%), respectively. Conclusions: The TRUST ADP, which incorporates structured risk-assessment and a single presentation hs-cTnT blood draw, has potential to allow early discharge in 40% of patients with suspected ACS and has greater clinical utility than undetectable hs-cTnT strategies. Trial registration number: ISRCTN No. 21109279. PMID:25691511

  14. Investigation, sensitivity analysis, and multi-objective optimization of effective parameters on temperature and force in robotic drilling cortical bone.

    PubMed

    Tahmasbi, Vahid; Ghoreishi, Majid; Zolfaghari, Mojtaba

    2017-11-01

    The bone drilling process is very prominent in orthopedic surgeries and in the repair of bone fractures. It is also very common in dentistry and bone sampling operations. Due to the complexity of bone and the sensitivity of the process, bone drilling is one of the most important and sensitive processes in biomedical engineering. Orthopedic surgeries can be improved using robotic systems and mechatronic tools. The most crucial problem during drilling is an unwanted increase in process temperature (higher than 47 °C), which causes thermal osteonecrosis (cell death) and local burning of the bone tissue. Moreover, imposing higher forces on the bone may lead to breaking or cracking and consequently cause serious damage. In this study, a mathematical second-order regression model as a function of tool rotational speed, feed rate, tool diameter, and their effective interactions is introduced to predict temperature and force during the bone drilling process. This model can determine the maximum speed of surgery that remains within an acceptable temperature range. Moreover, for the first time, the bone drilling process was modeled using designed experiments, and the drilling speed, feed rate, and tool diameter were optimized. Then, using response surface methodology and applying a multi-objective optimization, drilling force was minimized to sustain an acceptable temperature range without damaging the bone or the surrounding tissue. In addition, for the first time, Sobol statistical sensitivity analysis was used to ascertain the effect of the process input parameters on process temperature and force. The results show that, among all effective input parameters, tool rotational speed, feed rate, and tool diameter have the highest influence on process temperature and force. The behavior of each output parameter under variation of each input parameter is further investigated. Finally, a multi-objective optimization has been performed considering all the
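
    The Sobol screening step can be sketched with the SALib package; the placeholder surrogate below stands in for the study's fitted regression model, and the parameter ranges are illustrative.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["rotational_speed", "feed_rate", "tool_diameter"],
    "bounds": [[500, 3000], [20, 80], [2.0, 4.0]],   # illustrative ranges
}

def drilling_temperature(x):
    """Placeholder surrogate for process temperature (deg C); the study used a
    fitted second-order regression model instead of this made-up expression."""
    n, f, d = x
    return 30 + 0.006 * n + 0.15 * f + 3.0 * d + 1e-5 * n * f

X = saltelli.sample(problem, 1024)            # Saltelli sampling scheme
Y = np.apply_along_axis(drilling_temperature, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order Sobol indices
print(dict(zip(problem["names"], Si["ST"])))  # total-order Sobol indices
```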

  15. Spatiotemporal sensitivity analysis of vertical transport of pesticides in soil

    EPA Science Inventory

    Environmental fate and transport processes are influenced by many factors. Simulation models that mimic these processes often have complex implementations, which can lead to over-parameterization. Sensitivity analyses are subsequently used to identify critical parameters whose un...

  16. Practical limits for reverse engineering of dynamical systems: a statistical analysis of sensitivity and parameter inferability in systems biology models.

    PubMed

    Erguler, Kamil; Stumpf, Michael P H

    2011-05-01

    The size and complexity of cellular systems make building predictive models an extremely difficult task. In principle dynamical time-course data can be used to elucidate the structure of the underlying molecular mechanisms, but a central and recurring problem is that many and very different models can be fitted to experimental data, especially when the latter are limited and subject to noise. Even given a model, estimating its parameters remains challenging in real-world systems. Here we present a comprehensive analysis of 180 systems biology models, which allows us to classify the parameters with respect to their contribution to the overall dynamical behaviour of the different systems. Our results reveal candidate elements of control in biochemical pathways that differentially contribute to dynamics. We introduce sensitivity profiles that concisely characterize parameter sensitivity and demonstrate how this can be connected to variability in data. Systematically linking data and model sloppiness allows us to extract features of dynamical systems that determine how well parameters can be estimated from time-course measurements, and associates the extent of data required for parameter inference with the model structure, and also with the global dynamical state of the system. The comprehensive analysis of so many systems biology models reaffirms the inability to estimate precisely most model or kinetic parameters as a generic feature of dynamical systems, and provides safe guidelines for performing better inferences and model predictions in the context of reverse engineering of mathematical models for biological systems.
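
    A minimal sketch of a sensitivity profile for a dynamical model, assuming a generic two-state toy system rather than any of the 180 curated models: each parameter is perturbed slightly and its scaled effect on the full time course is summarized as a single number.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model(t, y, k):
    """Generic two-state biochemical toy: production, conversion, degradation."""
    s, p = y
    k_prod, k_conv, k_deg = k
    return [k_prod - k_conv * s, k_conv * s - k_deg * p]

def simulate(k, t_eval):
    sol = solve_ivp(model, (t_eval[0], t_eval[-1]), [0.0, 0.0],
                    args=(k,), t_eval=t_eval)
    return sol.y

def sensitivity_profile(k, t_eval, rel_step=0.01):
    """Relative sensitivity of the full time course to each parameter,
    summarized as the norm of the scaled finite-difference response."""
    base = simulate(k, t_eval)
    profile = []
    for i in range(len(k)):
        kp = np.array(k, dtype=float)
        kp[i] *= 1 + rel_step
        pert = simulate(kp, t_eval)
        profile.append(np.linalg.norm((pert - base) / (base + 1e-12)) / rel_step)
    return np.array(profile)

t = np.linspace(0, 10, 101)
print(sensitivity_profile([1.0, 0.5, 0.3], t))
```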

  17. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of the price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.

  18. Sensitivity Analysis of the USLE Soil Erodibility Factor to Its Determining Parameters

    NASA Astrophysics Data System (ADS)

    Mitova, Milena; Rousseva, Svetla

    2014-05-01

    Soil erosion is recognized as one of the most serious soil threats worldwide. Soil erosion prediction is the first step in soil conservation planning. The Universal Soil Loss Equation (USLE) is one of the most widely used models for soil erosion prediction. One of the five USLE predictors is the soil erodibility factor (K-factor), which evaluates the impact of soil characteristics on soil erosion rates. The soil erodibility nomograph defines the K-factor as a function of soil characteristics such as particle size distribution (fractions finer than 0.002 mm and from 0.1 to 0.002 mm), organic matter content, soil structure and soil profile water permeability. Identifying the soil characteristics that most strongly influence the K-factor would make it possible to control soil loss through erosion by managing the properties that reduce the K-factor value. The aim of the report is to present the results of an analysis of the relative weight of these soil characteristics in the K-factor values. The relative impact of the soil characteristics on the K-factor was studied through a series of statistical analyses of data from the geographic database for soil erosion risk assessments in Bulgaria. The degree of correlation between K-factor values and the parameters that determine it was studied by correlation analysis. The sensitivity of the K-factor was determined by varying each parameter between its minimum and maximum possible values while holding the other parameters at their average values. A normalizing transformation was applied to the data sets because the parameters differ in units and in their orders of variation. The results show that the content of particles finer than 0.002 mm has the most significant relative impact on soil erodibility, followed by the content of particles with sizes from 0.1 mm to 0.002 mm, the water permeability class of the soil profile, the content of organic matter and the aggregation class. The
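
    A one-at-a-time sweep of the kind described can be written directly against the widely used Wischmeier nomograph approximation of the K-factor (quoted here from general knowledge, not from the abstract); the input ranges below are illustrative.

```python
import numpy as np

def usle_k(silt_vfs, clay, om, structure, permeability):
    """Wischmeier nomograph approximation of the USLE K-factor (US units).
    silt_vfs: % silt + very fine sand; clay: % clay; om: % organic matter;
    structure: class 1-4; permeability: class 1-6."""
    m = silt_vfs * (100.0 - clay)
    return (2.1e-4 * m**1.14 * (12.0 - om)
            + 3.25 * (structure - 2) + 2.5 * (permeability - 3)) / 100.0

ranges = {                       # illustrative min-max ranges for each input
    "silt_vfs": (10.0, 70.0), "clay": (5.0, 60.0), "om": (0.5, 6.0),
    "structure": (1, 4), "permeability": (1, 6),
}
mid = {k: 0.5 * (lo + hi) for k, (lo, hi) in ranges.items()}

# Vary one characteristic across its range while holding the others at mid-range.
for name, (lo, hi) in ranges.items():
    k_lo = usle_k(**{**mid, name: lo})
    k_hi = usle_k(**{**mid, name: hi})
    print(f"{name:>12}: K ranges from {k_lo:.3f} to {k_hi:.3f}")
```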

  19. Experience of the JPL Exploratory Data Analysis Team at validating HIRS2/MSU cloud parameters

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph; Haskins, Robert D.; Granger-Gallegos, Stephanie; Pursch, Andrew; Delgenio, Anthony

    1992-01-01

    Validation of the HIRS2/MSU cloud parameters began with the cloud/climate feedback problem. The derived effective cloud amount is less sensitive to surface temperature for higher clouds. This occurs because, as the cloud elevation increases, the difference between surface temperature and cloud temperature increases, so only a small change in cloud amount is needed to effect a large change in radiance at the detector. Validating the cloud parameters here means 'developing a quantitative sense for the physical meaning of the measured parameters' by: (1) identifying the assumptions involved in deriving parameters from the measured radiances, (2) testing the input data and derived parameters for statistical error, sensitivity, and internal consistency, and (3) comparing with similar parameters obtained from other sources using other techniques.

  20. Burnout sensitivity of power MOSFETs operating in a switching converter

    NASA Astrophysics Data System (ADS)

    Tastet, P.; Garnier, J.; Constans, H.; Tizon, A. H.

    1994-06-01

    Heavy ion tests of a switching converter using power MOSFETs have allowed us to identify the main parameters which affect the burnout sensitivity of these components. The differences between static and dynamic conditions are clarified in this paper.

  1. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    PubMed Central

    Alves, Vinicius M.; Muratov, Eugene; Fourches, Denis; Strickland, Judy; Kleinstreuer, Nicole; Andrade, Carolina H.; Tropsha, Alexander

    2015-01-01

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use these data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the random forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. PMID:25560674
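
    A minimal sklearn sketch of the modeling step, assuming descriptor matrices have already been computed (SiRMS/Dragon descriptor generation is outside this sketch): a random-forest sensitizer classifier evaluated on an external set by correct classification rate (balanced accuracy).

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, confusion_matrix

def train_and_evaluate(X_train, y_train, X_external, y_external):
    """Fit a random-forest sensitizer/non-sensitizer classifier on descriptor
    matrices and report external-set performance; CCR here is the mean of
    sensitivity and specificity (balanced accuracy)."""
    model = RandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_train, y_train)
    y_pred = model.predict(X_external)
    tn, fp, fn, tp = confusion_matrix(y_external, y_pred).ravel()
    return {"CCR": balanced_accuracy_score(y_external, y_pred),
            "positive_predicted_rate": tp / (tp + fn),   # rate for sensitizers
            "negative_predicted_rate": tn / (tn + fp)}   # rate for non-sensitizers
```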

  2. Search Strategy to Identify Dental Survival Analysis Articles Indexed in MEDLINE.

    PubMed

    Layton, Danielle M; Clarke, Michael

    2016-01-01

    Articles reporting survival outcomes (time-to-event outcomes) in patients over time are challenging to identify in the literature. Research shows that the words authors use to describe their dental survival analyses vary, and that the allocation of medical subject headings by MEDLINE indexers is inconsistent. Together, this undermines accurate article identification. The present study aims to develop and validate a search strategy to identify dental survival analyses indexed in MEDLINE (Ovid). A gold standard cohort of articles was identified to derive the search terms, and an independent gold standard cohort of articles was identified to test and validate the proposed search strategies. The first cohort included all 6,955 articles published in the 50 dental journals with the highest impact factors in 2008, of which 95 articles were dental survival articles. The second cohort included all 6,514 articles published in the 50 dental journals with the highest impact factors for 2012, of which 148 were dental survival articles. Each cohort was identified by a systematic hand search. Performance parameters of sensitivity, precision, and number needed to read (NNR) were calculated for the search strategies. Sensitive, precise, and optimized search strategies were developed and validated. The performance of the search strategy maximizing sensitivity was 92% sensitivity, 14% precision, and 7.11 NNR; the performance of the strategy maximizing precision was 93% precision, 10% sensitivity, and 1.07 NNR; and the performance of the strategy optimizing the balance between sensitivity and precision was 83% sensitivity, 24% precision, and 4.13 NNR. The methods used to identify search terms were objective, not subjective. The search strategies were validated in an independent group of articles that included different journals and different publication years. Across the three search strategies, dental survival articles can be identified with sensitivity up to 92%, precision up to 93
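
    The reported performance parameters follow directly from the retrieval counts; a small helper, with the hand-search results taken as the gold standard (the example numbers are illustrative).

```python
def search_performance(retrieved, relevant_retrieved, relevant_total):
    """Sensitivity, precision and number needed to read (NNR) for a search
    strategy, measured against a hand-searched gold standard."""
    sensitivity = relevant_retrieved / relevant_total
    precision = relevant_retrieved / retrieved
    return {"sensitivity": sensitivity, "precision": precision,
            "NNR": 1.0 / precision}

# e.g. a strategy retrieving 620 records, 87 of which are among the 95
# gold-standard dental survival articles (numbers are illustrative):
print(search_performance(retrieved=620, relevant_retrieved=87, relevant_total=95))
```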

  3. Critical features of acute stress-induced cross-sensitization identified through the hypothalamic-pituitary-adrenal axis output.

    PubMed

    Belda, Xavier; Nadal, Roser; Armario, Antonio

    2016-08-11

    Stress-induced sensitization represents a process whereby prior exposure to severe stressors leaves animals or humans in a hyper-responsive state to further stressors. Indeed, this phenomenon is assumed to be the basis of certain stress-associated pathologies, including post-traumatic stress disorder and psychosis. One biological system particularly prone to sensitization is the hypothalamic-pituitary-adrenal (HPA) axis, the prototypic stress system. It is well established that under certain conditions, prior exposure of animals to acute and chronic (triggering) stressors enhances HPA responses to novel (heterotypic) stressors on subsequent days (e.g. raised plasma ACTH and corticosterone levels). However, such changes remain somewhat controversial and thus, the present study aimed to identify the critical characteristics of the triggering and challenging stressors that affect acute stress-induced HPA cross-sensitization in adult rats. We found that HPA cross-sensitization is markedly influenced by the intensity of the triggering stressor, whereas the length of exposure mainly affects its persistence. Importantly, HPA sensitization is more evident with mild than strong challenging stressors, and it may remain unnoticed if exposure to the challenging stressor is prolonged beyond 15 min. We speculate that heterotypic HPA sensitization might have developed to optimize biologically adaptive responses to further brief stressors.

  4. Critical features of acute stress-induced cross-sensitization identified through the hypothalamic-pituitary-adrenal axis output

    PubMed Central

    Belda, Xavier; Nadal, Roser; Armario, Antonio

    2016-01-01

    Stress-induced sensitization represents a process whereby prior exposure to severe stressors leaves animals or humans in a hyper-responsive state to further stressors. Indeed, this phenomenon is assumed to be the basis of certain stress-associated pathologies, including post-traumatic stress disorder and psychosis. One biological system particularly prone to sensitization is the hypothalamic-pituitary-adrenal (HPA) axis, the prototypic stress system. It is well established that under certain conditions, prior exposure of animals to acute and chronic (triggering) stressors enhances HPA responses to novel (heterotypic) stressors on subsequent days (e.g. raised plasma ACTH and corticosterone levels). However, such changes remain somewhat controversial and thus, the present study aimed to identify the critical characteristics of the triggering and challenging stressors that affect acute stress-induced HPA cross-sensitization in adult rats. We found that HPA cross-sensitization is markedly influenced by the intensity of the triggering stressor, whereas the length of exposure mainly affects its persistence. Importantly, HPA sensitization is more evident with mild than strong challenging stressors, and it may remain unnoticed if exposure to the challenging stressor is prolonged beyond 15 min. We speculate that heterotypic HPA sensitization might have developed to optimize biologically adaptive responses to further brief stressors. PMID:27511270

  5. Single-particle strength from nucleon transfer in oxygen isotopes: Sensitivity to model parameters

    NASA Astrophysics Data System (ADS)

    Flavigny, F.; Keeley, N.; Gillibert, A.; Obertelli, A.

    2018-03-01

    In the analysis of transfer reaction data to extract nuclear structure information the choice of input parameters to the reaction model such as distorting potentials and overlap functions has a significant impact. In this paper we consider a set of data for the (d,t) and (d,3He) reactions on 14,16,18O as a well-delimited subject for a study of the sensitivity of such analyses to different choices of distorting potentials and overlap functions with particular reference to a previous investigation of the variation of valence nucleon correlations as a function of the difference in nucleon separation energy ΔS = |Sp - Sn| [Phys. Rev. Lett. 110, 122503 (2013), 10.1103/PhysRevLett.110.122503].

  6. An Investigation on the Sensitivity of the Parameters of Urban Flood Model

    NASA Astrophysics Data System (ADS)

    M, A. B.; Lohani, B.; Jain, A.

    2015-12-01

    Global climatic change has triggered weather patterns that lead to heavy and sudden rainfall in different parts of the world. The impact of heavy rainfall is especially severe on urban areas in the form of urban flooding. In order to understand the effect of heavy rainfall induced flooding, it is necessary to model the entire flooding scenario accurately, which is now becoming possible with the availability of high-resolution airborne LiDAR data and other real-time observations. However, there is not much understanding of the optimal use of these data or of the effect of other parameters on the performance of the flood model. This study aims at developing understanding on these issues. In view of the above discussion, the aim of this study is to (i) understand how the use of high-resolution LiDAR data improves the performance of an urban flood model, and (ii) understand the sensitivity of various hydrological parameters in urban flood modelling. In this study, modelling of flooding in urban areas due to heavy rainfall is carried out considering the Indian Institute of Technology (IIT) Kanpur, India as the study site. The existing model MIKE FLOOD, which is accepted by the Federal Emergency Management Agency (FEMA), is used along with the high-resolution airborne LiDAR data. Once the model is set up, it is run while changing parameters such as the resolution of the Digital Surface Model (DSM), Manning's roughness, initial losses, catchment description, concentration time, and the runoff reduction factor. In order to realize this, the results obtained from the model are compared with the field observations. The parametric study carried out in this work demonstrates that the selection of catchment description plays a very important role in urban flood modelling. Results also show the significant impact of the resolution of the DSM, initial losses and concentration time on the urban flood model. This study will help in understanding the effect of various parameters that should be part of a

  7. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  8. Robust design of configurations and parameters of adaptable products

    NASA Astrophysics Data System (ADS)

    Zhang, Jian; Chen, Yongliang; Xue, Deyi; Gu, Peihua

    2014-03-01

    An adaptable product can satisfy different customer requirements by changing its configuration and parameter values during the operation stage. Design of adaptable products aims at reducing the environmental impact by replacing multiple different products with a single adaptable one. Due to the complex architecture, multiple functional requirements, and changes of product configurations and parameter values in operation, the impact of uncertainties on the functional performance measures needs to be considered in the design of adaptable products. In this paper, a robust design approach is introduced to identify the optimal design configuration and parameters of an adaptable product whose functional performance measures are the least sensitive to uncertainties. An adaptable product in this paper is modeled by both configurations and parameters. At the configuration level, methods to model different product configuration candidates in design and different product configuration states in operation to satisfy design requirements are introduced. At the parameter level, four types of product/operating parameters and the relations among these parameters are discussed. A two-level optimization approach is developed to identify the optimal design configuration of the adaptable product and its parameter values. A case study is implemented to illustrate the effectiveness of the newly developed robust adaptable design method.

  9. Predicting chemically-induced skin reactions. Part I: QSAR models of skin sensitization and their application to identify potentially hazardous compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alves, Vinicius M.; Laboratory for Molecular Modeling, Division of Chemical Biology and Medicinal Chemistry, Eshelman School of Pharmacy, University of North Carolina, Chapel Hill, NC 27599; Muratov, Eugene

    Repetitive exposure to a chemical agent can induce an immune reaction in inherently susceptible individuals that leads to skin sensitization. Although many chemicals have been reported as skin sensitizers, there have been very few rigorously validated QSAR models with defined applicability domains (AD) that were developed using a large group of chemically diverse compounds. In this study, we have aimed to compile, curate, and integrate the largest publicly available dataset related to chemically-induced skin sensitization, use these data to generate rigorously validated QSAR models for skin sensitization, and employ these models as a virtual screening tool for identifying putative sensitizers among environmental chemicals. We followed best practices for model building and validation implemented with our predictive QSAR workflow using the Random Forest modeling technique in combination with SiRMS and Dragon descriptors. The Correct Classification Rate (CCR) for QSAR models discriminating sensitizers from non-sensitizers was 71–88% when evaluated on several external validation sets, within a broad AD, with positive (for sensitizers) and negative (for non-sensitizers) predicted rates of 85% and 79% respectively. When compared to the skin sensitization module included in the OECD QSAR Toolbox as well as to the skin sensitization model in publicly available VEGA software, our models showed a significantly higher prediction accuracy for the same sets of external compounds as evaluated by Positive Predicted Rate, Negative Predicted Rate, and CCR. These models were applied to identify putative chemical hazards in the Scorecard database of possible skin or sense organ toxicants as primary candidates for experimental validation. - Highlights: • The largest publicly available skin sensitization dataset was compiled. • Predictive QSAR models were developed for skin sensitization. • The developed models show higher prediction accuracy than the OECD QSAR Toolbox.

  10. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
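
    The Morris screening described can be reproduced with the SALib package; the scalar placeholder below stands in for the full bioheat/cell-death simulation, and the parameter ranges are illustrative.

```python
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    "num_vars": 3,
    "names": ["blood_perfusion", "electrical_conductivity", "cell_death_rate"],
    "bounds": [[0.5e-3, 20e-3], [0.1, 0.6], [1e-4, 1e-2]],   # illustrative ranges
}

def ablation_zone_diameter(x):
    """Placeholder for the RFA simulation output (mm); a real study would run
    the full bioheat/cell-death model here."""
    perfusion, sigma, k_death = x
    return 30.0 - 800.0 * perfusion + 15.0 * sigma + 200.0 * k_death

X = morris_sample.sample(problem, N=100, num_levels=4)       # Morris trajectories
Y = np.apply_along_axis(ablation_zone_diameter, 1, X)
Si = morris_analyze.analyze(problem, X, Y, num_levels=4)
print(dict(zip(problem["names"], Si["mu_star"])))            # mean |elementary effect|
```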

  11. Selection of noisy measurement locations for error reduction in static parameter identification

    NASA Astrophysics Data System (ADS)

    Sanayei, Masoud; Onipede, Oladipo; Babu, Suresh R.

    1992-09-01

    An incomplete set of noisy static force and displacement measurements is used for parameter identification of structures at the element level. Measurement location and the level of accuracy in the measured data can drastically affect the accuracy of the identified parameters. A heuristic method is presented to select a limited number of degrees of freedom (DOF) to perform a successful parameter identification and to reduce the impact of measurement errors on the identified parameters. This pretest simulation uses an error sensitivity analysis to determine the effect of measurement errors on the parameter estimates. The selected DOF can be used for nondestructive testing and health monitoring of structures. Two numerical examples, one for a truss and one for a frame, are presented to demonstrate that using the measurements at the selected subset of DOF can limit the error in the parameter estimates.

  12. Sensitivity and Nonlinearity of Thermoacoustic Oscillations

    NASA Astrophysics Data System (ADS)

    Juniper, Matthew P.; Sujith, R. I.

    2018-01-01

    Nine decades of rocket engine and gas turbine development have shown that thermoacoustic oscillations are difficult to predict but can usually be eliminated with relatively small ad hoc design changes. These changes can, however, be ruinously expensive to devise. This review explains why linear and nonlinear thermoacoustic behavior is so sensitive to parameters such as operating point, fuel composition, and injector geometry. It shows how nonperiodic behavior arises in experiments and simulations and discusses how fluctuations in thermoacoustic systems with turbulent reacting flow, which are usually filtered or averaged out as noise, can reveal useful information. Finally, it proposes tools to exploit this sensitivity in the future: adjoint-based sensitivity analysis to optimize passive control designs and complex systems theory to warn of impending thermoacoustic oscillations and to identify the most sensitive elements of a thermoacoustic system.

  13. How Sensitive Are Transdermal Transport Predictions by Microscopic Stratum Corneum Models to Geometric and Transport Parameter Input?

    PubMed

    Wen, Jessica; Koo, Soh Myoung; Lape, Nancy

    2018-02-01

    While predictive models of transdermal transport have the potential to reduce human and animal testing, microscopic stratum corneum (SC) model output is highly dependent on idealized SC geometry, transport pathway (transcellular vs. intercellular), and penetrant transport parameters (e.g., compound diffusivity in lipids). Most microscopic models are limited to a simple rectangular brick-and-mortar SC geometry and do not account for variability across delivery sites, hydration levels, and populations. In addition, these models rely on transport parameters obtained from pure theory, parameter fitting to match in vivo experiments, and time-intensive diffusion experiments for each compound. In this work, we develop a microscopic finite element model that allows us to probe model sensitivity to variations in geometry, transport pathway, and hydration level. Given the dearth of experimentally-validated transport data and the wide range in theoretically-predicted transport parameters, we examine the model's response to a variety of transport parameters reported in the literature. Results show that model predictions are strongly dependent on all aforementioned variations, resulting in order-of-magnitude differences in lag times and permeabilities for distinct structure, hydration, and parameter combinations. This work demonstrates that universally predictive models cannot fully succeed without employing experimentally verified transport parameters and individualized SC structures.

  14. A computational framework for testing arrhythmia marker sensitivities to model parameters in functionally calibrated populations of atrial cells

    NASA Astrophysics Data System (ADS)

    Vagos, Márcia R.; Arevalo, Hermenegild; de Oliveira, Bernardo Lino; Sundnes, Joakim; Maleckar, Mary M.

    2017-09-01

    Models of cardiac cell electrophysiology are complex non-linear systems which can be used to gain insight into mechanisms of cardiac dynamics in both healthy and pathological conditions. However, the complexity of cardiac models can make mechanistic insight difficult. Moreover, these are typically fitted to averaged experimental data which do not incorporate the variability in observations. Recently, building populations of models to incorporate inter- and intra-subject variability in simulations has been combined with sensitivity analysis (SA) to uncover novel ionic mechanisms and potentially clarify arrhythmogenic behaviors. We used the Koivumäki human atrial cell model to create two populations, representing normal Sinus Rhythm (nSR) and chronic Atrial Fibrillation (cAF), by varying 22 key model parameters. In each population, 14 biomarkers related to the action potential and dynamic restitution were extracted. Populations were calibrated based on distributions of biomarkers to obtain reasonable physiological behavior, and subjected to SA to quantify correlations between model parameters and pro-arrhythmia markers. The two populations showed distinct behaviors under steady state and dynamic pacing. The nSR population revealed greater variability, and more unstable dynamic restitution, as compared to the cAF population, suggesting that simulated cAF remodeling rendered cells more stable to parameter variation and rate adaptation. SA revealed that the biomarkers depended mainly on five ionic currents, with noted differences in sensitivities to these between nSR and cAF. Also, parameters could be selected to produce a model variant with no alternans and unaltered action potential morphology, highlighting that unstable dynamical behavior may be driven by specific cell parameter settings. These results ultimately suggest that arrhythmia maintenance in cAF may not be due to instability in cell membrane excitability, but rather due to tissue-level effects which
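
    A compact sketch of the population-of-models workflow, with a placeholder in place of the Koivumäki cell model: random parameter scalings are sampled, models are calibrated against biomarker ranges, and sensitivities are taken as parameter-biomarker correlations across the calibrated population.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models, n_params = 300, 22

# Random scaling factors applied to the baseline ionic parameters (+/- 30%).
scalings = rng.uniform(0.7, 1.3, size=(n_models, n_params))

def biomarkers(scaling):
    """Placeholder for running the atrial cell model at one parameter scaling
    and extracting biomarkers (e.g. APD90, resting potential); a real study
    would run the Koivumäki model here."""
    apd90 = 250.0 * scaling[0] / scaling[1] + rng.normal(0, 5)
    v_rest = -78.0 - 4.0 * (scaling[2] - 1.0) + rng.normal(0, 0.5)
    return np.array([apd90, v_rest])

B = np.array([biomarkers(s) for s in scalings])

# Calibration: keep only models with physiologically plausible biomarkers.
keep = (B[:, 0] > 150) & (B[:, 0] < 400) & (B[:, 1] < -70)
S, B = scalings[keep], B[keep]

# Sensitivity as parameter-biomarker correlation across the calibrated population.
corr = np.array([[np.corrcoef(S[:, j], B[:, k])[0, 1] for j in range(n_params)]
                 for k in range(B.shape[1])])
print(corr.shape)   # (n_biomarkers, n_params)
```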

  15. An investigation on die crack detection using Temperature Sensitive Parameter for high speed LED mass production

    NASA Astrophysics Data System (ADS)

    Annaniah, Luruthudass; Devarajan, Mutharasu; San, Teoh Kok

    To ensure the highest quality and long-term reliability of LED components, it is necessary to identify LED dice that have sustained mechanical damage during the manufacturing process. This paper demonstrates that detection of die cracks in mass-manufactured LEDs can be achieved by measuring Temperature Sensitive Parameters (TSPs) during final testing. A newly designed apparatus and microcontroller were used for this investigation in order to achieve the millisecond switching time needed for detecting thermal transient effects while meeting the throughput expected in mass manufacturing. Evaluations conducted at lab scale show that the thermal transient behaviour of a cracked die is significantly different from that of an undamaged die. Having established test limits to differentiate cracked dice, large-volume tests in a production environment were used to confirm the effectiveness of this test method. Failure Bin Analysis (FBA) of this high-volume experiment confirmed that all the cracked-die LEDs were detected and the undamaged LEDs passed the test without over-rejection. The work verifies that tests based on TSPs are effective in identifying die cracks, and it is believed that the method could be extended to other types of rejects that have thermal transient signatures, such as die delamination.

  16. Sensitivity of frozen section histology for identifying Propionibacterium acnes infections in revision shoulder arthroplasty.

    PubMed

    Grosso, Matthew J; Frangiamore, Salvatore J; Ricchetti, Eric T; Bauer, Thomas W; Iannotti, Joseph P

    2014-03-19

    Propionibacterium acnes is a clinically relevant pathogen in total shoulder arthroplasty. The purpose of this study was to determine the sensitivity of frozen section histology in identifying patients with Propionibacterium acnes infection during revision total shoulder arthroplasty and to investigate various diagnostic thresholds of acute inflammation that may improve frozen section performance. We reviewed the results of forty-five patients who underwent revision total shoulder arthroplasty. Patients were divided into the non-infection group (n = 15), the Propionibacterium acnes infection group (n = 18), and the other infection group (n = 12). Routine preoperative testing was performed, and intraoperative tissue culture and frozen section histology were collected for each patient. The histologic diagnosis was determined by one pathologist for each of four different thresholds. The absolute maximum polymorphonuclear leukocyte concentration was used to construct a receiver operating characteristic (ROC) curve to determine a new potential optimal threshold. Using the current thresholds for grading frozen section histology, the sensitivity was lower for the Propionibacterium acnes infection group (50%) than for the other infection group (67%). The specificity of frozen section was 100%. Using the ROC curve, an optimized threshold was found at a total of ten polymorphonuclear leukocytes in five high-power fields (400×). Using this threshold, the sensitivity of frozen section for Propionibacterium acnes increased to 72%, and the specificity remained at 100%. Using current histopathology grading systems, frozen sections were specific but showed low sensitivity with respect to Propionibacterium acnes infection. A new threshold value of a total of ten or more polymorphonuclear leukocytes in five high-power fields may increase the sensitivity of frozen section, with minimal impact on specificity.
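
    The threshold search over polymorphonuclear-leukocyte counts can be expressed with standard ROC utilities; the counts and labels below are illustrative, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(pmn_counts, infected):
    """Pick the PMN-count cut-off (per five high-power fields) maximizing
    Youden's J = sensitivity + specificity - 1 on a labelled revision cohort."""
    fpr, tpr, thresholds = roc_curve(infected, pmn_counts)
    j = tpr - fpr
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1.0 - fpr[best]

# Illustrative data: maximum PMN counts and culture-confirmed infection labels.
counts = np.array([0, 2, 3, 5, 8, 10, 12, 15, 20, 30])
labels = np.array([0, 0, 0, 0, 1, 1, 0, 1, 1, 1])
threshold, sens, spec = best_threshold(counts, labels)
print(f"threshold >= {threshold:g}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```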

  17. Colorado River basin sensitivity to disturbance impacts

    NASA Astrophysics Data System (ADS)

    Bennett, K. E.; Urrego-Blanco, J. R.; Jonko, A. K.; Vano, J. A.; Newman, A. J.; Bohn, T. J.; Middleton, R. S.

    2017-12-01

    The Colorado River basin is an important river for the food-energy-water nexus in the United States and is projected to change under future scenarios of increased CO2 emissions and warming. Streamflow estimates that consider the impacts of this warming are often produced with modeling tools that rely on uncertain inputs; to fully understand impacts on streamflow, sensitivity analysis can help determine how models respond to changing disturbances such as climate and vegetation. In this study, we conduct a global sensitivity analysis with space-filling Latin hypercube sampling of the model parameter space and statistical emulation of the Variable Infiltration Capacity (VIC) hydrologic model to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in VIC. Additionally, we examine sensitivities of basin-wide model simulations using an approach that incorporates changes in temperature, precipitation and vegetation to consider impact responses for snow-dominated headwater catchments, low-elevation arid basins, and the upper and lower river basins. We find that for the Colorado River basin, snow-dominated regions are more sensitive to uncertainties. Newly identified parameter sensitivities include runoff and evapotranspiration sensitivity to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI). Basin-wide streamflow sensitivities to precipitation, temperature and vegetation vary seasonally and between sub-basins, with the largest sensitivities for smaller, snow-driven headwater systems where forests are dense. For a major headwater basin, the impact of 1 ºC of warming was comparable to that of a 30% loss of forest cover, while a 10% precipitation loss was comparable to a 90% decline in forest cover. Scenarios combining multiple disturbances led to unexpected results in which changes could either magnify or diminish extremes, such as low and peak flows and streamflow timing

  18. Integrated analysis of rice transcriptomic and metabolomic responses to elevated night temperatures identifies sensitivity- and tolerance-related profiles.

    PubMed

    Glaubitz, Ulrike; Li, Xia; Schaedel, Sandra; Erban, Alexander; Sulpice, Ronan; Kopka, Joachim; Hincha, Dirk K; Zuther, Ellen

    2017-01-01

    Transcript and metabolite profiling were performed on leaves from six rice cultivars under high night temperature (HNT) conditions. Six genes were identified as central for the HNT response, encoding proteins involved in transcription regulation, signal transduction, protein-protein interactions, jasmonate response and the biosynthesis of secondary metabolites. Sensitive cultivars showed specific changes in transcript abundance, including abiotic stress responses and changes in cell wall-related genes, ABA signaling and secondary metabolism. Additionally, metabolite profiles revealed a highly activated TCA cycle under HNT and concomitantly increased levels in branching pathways, which could be corroborated by enzyme activity measurements. Integrated data analysis using clustering based on one-dimensional self-organizing maps identified two profiles highly correlated with HNT sensitivity. The sensitivity profile included genes of the functional bins abiotic stress, hormone metabolism, cell wall, signaling, redox state, transcription factors, secondary metabolites and defence genes. In the tolerance profile, similar bins were affected, with slight differences in hormone metabolism and transcription factor responses. Metabolites of the two profiles revealed the involvement of GABA signaling, providing a link to the TCA cycle status in sensitive cultivars, and of myo-inositol as a precursor for inositol phosphates, linking jasmonate signaling to the HNT response specifically in tolerant cultivars. © 2016 John Wiley & Sons Ltd.

  19. Design and operational parameters of a rooftop rainwater harvesting system: definition, sensitivity and verification.

    PubMed

    Mun, J S; Han, M Y

    2012-01-01

    The appropriate design and evaluation of a rainwater harvesting (RWH) system is necessary to improve system performance and the stability of the water supply. The main design parameters (DPs) of an RWH system are rainfall, catchment area, collection efficiency, tank volume and water demand. Its operational parameters (OPs) include rainwater use efficiency (RUE), water saving efficiency (WSE) and cycle number (CN). A sensitivity analysis of a rooftop RWH system's DPs with respect to its OPs reveals that the ratio of tank volume to catchment area (V/A) for an RWH system in Seoul, South Korea is recommended to be between 0.03 and 0.08 in terms of the rate of change in RUE. The appropriate design value of V/A varies with D/A. Extra tank volume up to a V/A of 0.15∼0.2 can also be used if more water needs to be secured. Accordingly, a suitable value or range of the DPs should be determined based on sensitivity analysis to optimize the design of an RWH system or improve its operational efficiency. The operational data employed in this study, which were used to validate the design and evaluation method of an RWH system, were obtained from the system in use at a dormitory complex at Seoul National University (SNU) in Korea. The results from these operational data are in good agreement with those used in the initial simulation. The proposed method and the results of this research will be useful in evaluating and comparing the performance of RWH systems. It is found that RUE can be increased by expanding the variety of rainwater uses, particularly in the high rainfall season. Copyright © 2011 Elsevier Ltd. All rights reserved.
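    As an illustration of how the design and operational parameters interact, the following Python sketch runs a daily tank water balance and sweeps the tank-volume-to-catchment-area ratio V/A; the definitions of RUE, WSE and CN used here and all input values are assumptions for illustration and may differ from the paper's formulations.

        # Daily tank water balance for a rooftop RWH system. Definitions assumed here:
        # RUE = supplied / collected roof runoff, WSE = supplied / demand,
        # CN = tank turnover approximated by collected runoff / tank volume.
        import numpy as np

        def simulate_rwh(rain_mm, area_m2, runoff_coeff, tank_m3, demand_m3_per_day):
            storage, supplied, collected = 0.0, 0.0, 0.0
            for r in rain_mm:
                runoff = r / 1000.0 * area_m2 * runoff_coeff   # roof runoff, m3
                collected += runoff
                storage = min(storage + runoff, tank_m3)       # overflow is lost
                use = min(demand_m3_per_day, storage)
                supplied += use
                storage -= use
            rue = supplied / collected if collected > 0 else 0.0
            wse = supplied / (demand_m3_per_day * len(rain_mm))
            cn = collected / tank_m3
            return rue, wse, cn

        # Sweep the tank-volume-to-catchment-area ratio V/A for synthetic daily rainfall.
        rng = np.random.default_rng(0)
        rain = rng.gamma(shape=0.3, scale=12.0, size=365)      # mm/day, illustrative only
        for va in (0.01, 0.03, 0.08, 0.15):
            rue, wse, cn = simulate_rwh(rain, area_m2=200.0, runoff_coeff=0.9,
                                        tank_m3=va * 200.0, demand_m3_per_day=0.8)
            print(f"V/A={va:.2f}  RUE={rue:.2f}  WSE={wse:.2f}  CN={cn:.1f}")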

  20. Estimation and Identifiability of Model Parameters in Human Nociceptive Processing Using Yes-No Detection Responses to Electrocutaneous Stimulation.

    PubMed

    Yang, Huan; Meijer, Hil G E; Buitenweg, Jan R; van Gils, Stephan A

    2016-01-01

    Healthy or pathological states of nociceptive subsystems determine different stimulus-response relations measured by quantitative sensory testing. In turn, stimulus-response measurements may be used to assess these states. In a recently developed computational model, six model parameters characterize activation of nerve endings and spinal neurons. However, both model nonlinearity and the limited information in yes-no detection responses to electrocutaneous stimuli make it challenging to estimate the model parameters. Here, we address whether and how one can overcome these difficulties for reliable parameter estimation. First, we fit the computational model to experimental stimulus-response pairs by maximizing the likelihood. To evaluate the balance between model fit and complexity, i.e., the number of model parameters, we evaluate the Bayesian Information Criterion. We find that the computational model strikes a better balance than a conventional logistic model. Second, our theoretical analysis suggests that varying the pulse width among applied stimuli is a necessary condition to prevent structural non-identifiability. In addition, a numerically implemented profile likelihood approach reveals both structural and practical non-identifiability. Our model-based approach, which integrates psychophysical measurements, can be useful for a reliable assessment of states of the nociceptive system.
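    The model-comparison step can be illustrated with a minimal sketch: fit a conventional logistic detection model to synthetic yes-no responses by maximum likelihood and score it with the Bayesian Information Criterion; the six-parameter nociceptive model itself is not reproduced here, and all data and parameter values are invented for illustration.

        # Fit a two-parameter logistic detection model to synthetic yes-no responses by
        # maximum likelihood and score it with the Bayesian Information Criterion (BIC).
        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit

        def neg_log_likelihood(params, amplitude, detected):
            a, b = params                                   # intercept and slope of the logistic model
            p = np.clip(expit(a + b * amplitude), 1e-9, 1 - 1e-9)
            return -np.sum(detected * np.log(p) + (1 - detected) * np.log(1 - p))

        def bic(nll, n_params, n_obs):
            return 2.0 * nll + n_params * np.log(n_obs)

        rng = np.random.default_rng(1)
        amp = rng.uniform(0.1, 2.0, size=200)                              # mA, synthetic stimuli
        detected = (rng.random(200) < expit(-3.0 + 4.0 * amp)).astype(float)

        fit = minimize(neg_log_likelihood, x0=[0.0, 1.0], args=(amp, detected))
        print("logistic BIC:", bic(fit.fun, n_params=2, n_obs=len(amp)))
        # The six-parameter computational model would be scored the same way; the lower
        # BIC indicates the better balance between fit and complexity.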

  1. Volcano deformation source parameters estimated from InSAR: Sensitivities to uncertainties in seismic tomography

    USGS Publications Warehouse

    Masterlark, Timothy; Donovan, Theodore; Feigl, Kurt L.; Haney, Matt; Thurber, Clifford H.; Tung, Sui

    2016-01-01

    The eruption cycle of a volcano is controlled in part by the upward migration of magma. The characteristics of the magma flux produce a deformation signature at the Earth's surface. Inverse analyses use geodetic data to estimate strategic controlling parameters that describe the position and pressurization of a magma chamber at depth. The specific distribution of material properties controls how observed surface deformation translates to source parameter estimates. Seismic tomography models describe the spatial distributions of material properties that are necessary for accurate models of volcano deformation. This study investigates how uncertainties in seismic tomography models propagate into variations in the estimates of volcano deformation source parameters inverted from geodetic data. We conduct finite element model-based nonlinear inverse analyses of interferometric synthetic aperture radar (InSAR) data for Okmok volcano, Alaska, as an example. We then analyze the estimated parameters and their uncertainties to characterize the magma chamber. Analyses are performed separately for models simulating a pressurized chamber embedded in a homogeneous domain as well as for a domain having a heterogeneous distribution of material properties according to seismic tomography. The estimated depth of the source is sensitive to the distribution of material properties. The estimated depths for the homogeneous and heterogeneous domains are 2666 ± 42 and 3527 ± 56 m below mean sea level, respectively (99% confidence). A Monte Carlo analysis indicates that uncertainties of the seismic tomography cannot account for this discrepancy at the 99% confidence level. Accounting for the spatial distribution of elastic properties according to seismic tomography significantly improves the fit of the deformation model predictions and significantly influences estimates for parameters that describe the location of a pressurized magma chamber.

  2. Assessing uncertainty and sensitivity of model parameterizations and parameters in WRF affecting simulated surface fluxes and land-atmosphere coupling over the Amazon region

    NASA Astrophysics Data System (ADS)

    Qian, Y.; Wang, C.; Huang, M.; Berg, L. K.; Duan, Q.; Feng, Z.; Shrivastava, M. B.; Shin, H. H.; Hong, S. Y.

    2016-12-01

    This study aims to quantify the relative importance and uncertainties of different physical processes and parameters in affecting simulated surface fluxes and land-atmosphere coupling strength over the Amazon region. We used two-legged coupling metrics, which include both terrestrial (soil moisture to surface fluxes) and atmospheric (surface fluxes to atmospheric state or precipitation) legs, to diagnose the land-atmosphere interaction and coupling strength. Observations made using the Department of Energy's Atmospheric Radiation Measurement (ARM) Mobile Facility during the GoAmazon field campaign, together with satellite and reanalysis data, are used to evaluate model performance. To quantify the uncertainty in physical parameterizations, we performed a 120-member ensemble of simulations with the WRF model using a stratified experimental design including 6 cloud microphysics, 3 convection, 6 PBL and surface layer, and 3 land surface schemes. A multiple-way analysis of variance approach is used to quantitatively analyze the inter- and intra-group (scheme) means and variances. To quantify parameter sensitivity, we conducted an additional 256 WRF simulations in which an efficient sampling algorithm is used to explore the multiple-dimensional parameter space. Three uncertainty quantification approaches are applied for sensitivity analysis (SA) of multiple variables of interest to 20 selected parameters in the YSU PBL and MM5 surface layer schemes. Results show consistent parameter sensitivity across the different SA methods. We found that 5 out of the 20 parameters contribute more than 90% of the total variance, and that first-order effects dominate compared with the interaction effects. Results of this uncertainty quantification study serve as guidance for better understanding the roles of different physical processes in land-atmosphere interactions, quantifying model uncertainties from various sources such as physical processes, parameters and structural errors, and providing insights for
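    A minimal sketch of the variance-partitioning idea follows: a one-way, sum-of-squares decomposition applied per group of schemes to a synthetic ensemble. For simplicity the sketch enumerates a full factorial of scheme combinations rather than the study's stratified 120-member design, and the flux values are placeholders.

        # One-way sum-of-squares decomposition per scheme group on a synthetic ensemble.
        # A full factorial of scheme combinations is enumerated for simplicity; the study
        # used a stratified 120-member subset. Flux values are placeholders.
        import itertools
        import numpy as np

        rng = np.random.default_rng(6)
        groups = {"microphysics": 6, "convection": 3, "pbl_sfclay": 6, "land_surface": 3}
        members = list(itertools.product(*[range(n) for n in groups.values()]))

        # Synthetic latent-heat response: PBL choice matters most, land surface least.
        flux = np.array([300 + 4 * m[0] + 2 * m[1] + 12 * m[2] + 1 * m[3] + rng.normal(0, 3)
                         for m in members])

        total_ss = np.sum((flux - flux.mean()) ** 2)
        for gi, (name, n_opt) in enumerate(groups.items()):
            between_ss = 0.0
            for k in range(n_opt):
                mask = np.array([m[gi] == k for m in members])
                between_ss += mask.sum() * (flux[mask].mean() - flux.mean()) ** 2
            print(f"{name:13s} explains {100 * between_ss / total_ss:5.1f}% of ensemble variance")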

  3. Histogram analysis parameters identify multiple associations between DWI and DCE MRI in head and neck squamous cell carcinoma.

    PubMed

    Meyer, Hans Jonas; Leifels, Leonard; Schob, Stefan; Garnov, Nikita; Surov, Alexey

    2018-01-01

    Nowadays, multiparametric investigations of head and neck squamous cell carcinoma (HNSCC) are established. These approaches can better characterize tumor biology and behavior. Diffusion-weighted imaging (DWI) can, by means of the apparent diffusion coefficient (ADC), quantitatively characterize different tissue compartments. Dynamic contrast-enhanced magnetic resonance imaging (DCE MRI) reflects perfusion and vascularization of tissues. Recently, histogram analysis of parameter maps has emerged as a diagnostic approach that can provide more information about tissue heterogeneity. The purpose of this study was to analyze possible associations between DWI and DCE parameters derived from histogram analysis in patients with HNSCC. Overall, 34 patients, 9 women and 25 men, mean age 56.7±10.2 years, with different HNSCC were involved in the study. DWI was obtained using an axial echo-planar imaging sequence with b-values of 0 and 800 s/mm2. A dynamic T1w DCE sequence after intravenous application of contrast medium was performed for estimation of the following perfusion parameters: volume transfer constant (Ktrans), volume of the extravascular extracellular leakage space (Ve), and diffusion of contrast medium from the extravascular extracellular leakage space back to the plasma (Kep). Both ADC and perfusion parameter maps were processed offline in DICOM format with a custom-made Matlab-based application. Thereafter, polygonal ROIs were manually drawn on the transferred maps on each slice. For every parameter, mean, maximal, minimal, and median values, the 10th, 25th, 75th and 90th percentiles, as well as kurtosis, skewness, and entropy were estimated. Correlation analysis identified multiple statistically significant correlations between the investigated parameters. Ve-related parameters correlated well with different ADC values. In particular, the 10th and 75th percentiles, mode, and median values showed stronger correlations in comparison to other
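    The histogram-analysis step can be sketched as follows: the first-order statistics named in the abstract are computed from the voxel values inside an ROI, and two parameter maps are compared with Spearman's rank correlation; the arrays are synthetic placeholders rather than real ADC or DCE maps.

        # First-order histogram metrics of the voxel values inside an ROI, plus a
        # Spearman correlation between two parameter maps. Arrays are synthetic.
        import numpy as np
        from scipy import stats

        def histogram_features(values):
            hist, _ = np.histogram(values, bins=64)
            p = hist / hist.sum()
            p = p[p > 0]
            return {
                "mean": np.mean(values), "min": np.min(values), "max": np.max(values),
                "median": np.median(values),
                "p10": np.percentile(values, 10), "p25": np.percentile(values, 25),
                "p75": np.percentile(values, 75), "p90": np.percentile(values, 90),
                "skewness": stats.skew(values), "kurtosis": stats.kurtosis(values),
                "entropy": -np.sum(p * np.log2(p)),
            }

        rng = np.random.default_rng(0)
        adc_roi = rng.normal(1.1e-3, 0.2e-3, size=500)                      # synthetic ADC values, mm2/s
        ve_roi = 0.4 * adc_roi / adc_roi.mean() + rng.normal(0, 0.05, 500)  # synthetic Ve values
        adc_feat, ve_feat = histogram_features(adc_roi), histogram_features(ve_roi)
        rho, pval = stats.spearmanr(adc_roi, ve_roi)
        print(adc_feat["p10"], ve_feat["p10"], rho, pval)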

  4. Tracer SWIW tests in propped and un-propped fractures: parameter sensitivity issues, revisited

    NASA Astrophysics Data System (ADS)

    Ghergut, Julia; Behrens, Horst; Sauter, Martin

    2017-04-01

    -scale diffusion; (iii) attempt to determine both advective and non-advective transport parameters from one and the same conservative-tracer signal (relying on 'third-party' knowledge), or from twin signals of a so-called 'dual' tracer pair, e.g., using tracers with contrasting reactivity and partitioning behavior to determine residual saturation in depleted oilfields (Tomich et al. 1973) or to determine advective parameters (Ghergut et al. 2014); using early-time signals of conservative and sorptive tracers for propped-fracture characterization (Karmakar et al. 2015); using mid-time signals of conservative tracers for reservoir-borne inflow profiling in multi-frac systems (Ghergut et al. 2016), etc. The poster describes new uses of type-(iii) techniques for the specific purposes of shale-gas reservoir characterization, productivity monitoring, and diagnostics and engineering of 're-frac' treatments, based on parameter sensitivity findings from the German BMWi research project "TRENDS" (Federal Ministry for Economic Affairs and Energy, FKZ 0325515) and from the EU H2020 project "FracRisk" (grant no. 640979).

  5. ON IDENTIFIABILITY OF NONLINEAR ODE MODELS AND APPLICATIONS IN VIRAL DYNAMICS

    PubMed Central

    MIAO, HONGYU; XIA, XIAOHUA; PERELSON, ALAN S.; WU, HULIN

    2011-01-01

    Ordinary differential equations (ODE) are a powerful tool for modeling dynamic processes with wide applications in a variety of scientific fields. Over the last 2 decades, ODEs have also emerged as a prevailing tool in various biomedical research fields, especially in infectious disease modeling. In practice, it is important and necessary to determine unknown parameters in ODE models based on experimental data. Identifiability analysis is the first step in determining unknown parameters in ODE models, and such analysis techniques for nonlinear ODE models are still under development. In this article, we review identifiability analysis methodologies for nonlinear ODE models developed in the past one to two decades, including structural identifiability analysis, practical identifiability analysis and sensitivity-based identifiability analysis. Some advanced topics and ongoing research are also briefly reviewed. Finally, some examples from modeling viral dynamics of HIV, influenza and hepatitis viruses are given to illustrate how to apply these identifiability analysis methods in practice. PMID:21785515
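    A minimal example of sensitivity-based identifiability analysis is sketched below for a basic target-cell-limited viral dynamics model (chosen for brevity; it is not one of the specific models reviewed). Finite-difference output sensitivities are assembled into a Fisher information matrix whose conditioning and sensitivity correlations flag poorly identifiable parameters; all parameter values and sampling times are illustrative.

        # Sensitivity-based identifiability sketch for a basic target-cell-limited
        # viral dynamics model. All parameter values and sampling times are illustrative.
        import numpy as np
        from scipy.integrate import solve_ivp

        names = ["beta", "delta", "p", "c"]
        theta0 = np.array([1e-5, 0.5, 100.0, 3.0])
        t_obs = np.linspace(1, 14, 14)                         # days, synthetic sampling grid

        def viral_load(theta):
            beta, delta, p, c = theta
            def rhs(t, y):
                T, I, V = y
                return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]
            sol = solve_ivp(rhs, (0, t_obs[-1]), [1e6, 0.0, 1.0], t_eval=t_obs, rtol=1e-8)
            return np.log10(np.maximum(sol.y[2], 1e-12))       # observed output: log10 viral load

        # Finite-difference sensitivities of the output with respect to each parameter.
        base = viral_load(theta0)
        S = np.empty((len(t_obs), len(theta0)))
        for j, th in enumerate(theta0):
            pert = theta0.copy()
            pert[j] = th * 1.01
            S[:, j] = (viral_load(pert) - base) / 0.01         # relative (log-parameter) sensitivity

        fim = S.T @ S                                          # Fisher information matrix (unit noise)
        eig = np.linalg.eigvalsh(fim)
        print("FIM condition number:", eig[-1] / max(eig[0], 1e-30))
        print("sensitivity correlations (near +/-1 flags non-identifiable pairs):")
        print(np.round(np.corrcoef(S.T), 2))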

  6. Comparison of surrogate indices for insulin sensitivity with parameters of the intravenous glucose tolerance test in early lactation dairy cattle.

    PubMed

    Alves-Nores, V; Castillo, C; Hernandez, J; Abuelo, A

    2017-10-01

    The aim of this study was to investigate the correlation between different surrogate indices and parameters of the intravenous glucose tolerance test (IVGTT) in dairy cows at the start of their lactation. Ten dairy cows underwent an IVGTT on Days 3 to 7 after calving. Areas under the curve during the 90 min after infusion, peak and nadir concentrations, elimination rates, and times to reach half-maximal and basal concentrations for glucose, insulin, nonesterified fatty acids, and β-hydroxybutyrate were calculated. Surrogate indices were computed using the average of the IVGTT basal samples, and their correlation with the IVGTT parameters was studied using Spearman's rank test. No statistically significant or strong correlation coefficients (P > 0.05; |ρ| < 0.50) were observed between the insulin sensitivity measures derived from the IVGTT and any of the surrogate indices. Therefore, these results support the view that the assessment of insulin sensitivity in early lactation cattle cannot rely on surrogate indices calculated from a single blood sample, and that the more laborious tests (i.e., the hyperinsulinemic-euglycemic clamp test or IVGTT) should be employed to accurately predict the sensitivity of the peripheral tissues to insulin. Copyright © 2017 Elsevier Inc. All rights reserved.
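    The correlation step can be illustrated with a short sketch: surrogate indices computed from basal glucose and insulin (HOMA-IR and QUICKI, using their common literature formulas) are compared with an IVGTT-derived sensitivity measure by Spearman's rank test; the values are synthetic and the index set differs from the one used in the study.

        # Surrogate indices from basal samples compared with an IVGTT-derived measure by
        # Spearman's rank test. HOMA-IR and QUICKI use their common literature formulas;
        # all values are synthetic, not data from this study.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(2)
        glucose = rng.normal(3.2, 0.4, 10)     # mmol/L, basal
        insulin = rng.normal(12.0, 3.0, 10)    # uU/mL, basal
        ivgtt_si = rng.normal(2.5, 0.8, 10)    # IVGTT-derived insulin sensitivity (arbitrary units)

        homa_ir = glucose * insulin / 22.5
        quicki = 1.0 / (np.log10(glucose * 18.0) + np.log10(insulin))   # glucose converted to mg/dL

        for name, index in [("HOMA-IR", homa_ir), ("QUICKI", quicki)]:
            rho, p = spearmanr(index, ivgtt_si)
            print(f"{name}: rho={rho:+.2f}, p={p:.2f}")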

  7. Linear-quadratic-Gaussian synthesis with reduced parameter sensitivity

    NASA Technical Reports Server (NTRS)

    Lin, J. Y.; Mingori, D. L.

    1992-01-01

    We present a method for improving the tolerance of a conventional LQG controller to parameter errors in the plant model. The improvement is achieved by introducing additional terms reflecting the structure of the parameter errors into the LQR cost function, and also the process and measurement noise models. Adjusting the sizes of these additional terms permits a trade-off between robustness and nominal performance. Manipulation of some of the additional terms leads to high gain controllers while other terms lead to low gain controllers. Conditions are developed under which the high-gain approach asymptotically recovers the robustness of the corresponding full-state feedback design, and the low-gain approach makes the closed-loop poles asymptotically insensitive to parameter errors.

  8. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
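    The probabilistic multivariate idea can be illustrated with a toy two-state, two-action MDP: uncertain parameters are sampled repeatedly, the MDP is re-solved by value iteration, and the fraction of samples in which the base-case optimal policy remains optimal is reported as the confidence in that policy. The MDP structure, rewards and distributions below are invented for illustration.

        # Toy 2-state, 2-action MDP: sample uncertain parameters, re-solve by value
        # iteration, and report how often the base-case optimal policy stays optimal.
        import numpy as np

        GAMMA = 0.97

        def solve_mdp(P, R, n_iter=500):
            """Value iteration; P[a] is the transition matrix and R[a] the reward vector."""
            V = np.zeros(P.shape[1])
            for _ in range(n_iter):
                Q = np.array([R[a] + GAMMA * P[a] @ V for a in range(P.shape[0])])
                V = Q.max(axis=0)
            return Q.argmax(axis=0)                            # optimal action per state

        def build(p_progress, treat_benefit):
            # action 0 = wait, action 1 = treat; state 0 = mild, state 1 = severe
            P = np.array([[[1 - p_progress, p_progress], [0.0, 1.0]],
                          [[0.95, 0.05], [0.3, 0.7]]])
            R = np.array([[1.0, 0.2], [0.9, 0.2 + treat_benefit]])
            return P, R

        base_policy = solve_mdp(*build(p_progress=0.15, treat_benefit=0.3))

        rng = np.random.default_rng(3)
        n_samples, agree = 2000, 0
        for _ in range(n_samples):
            policy = solve_mdp(*build(rng.beta(15, 85), rng.normal(0.3, 0.1)))
            agree += np.array_equal(policy, base_policy)
        print("confidence in base-case policy:", agree / n_samples)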

  9. High-resolution linkage analyses to identify genes that influence Varroa sensitive hygiene behavior in honey bees.

    PubMed

    Tsuruda, Jennifer M; Harris, Jeffrey W; Bourgeois, Lanie; Danka, Robert G; Hunt, Greg J

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene 'no receptor potential A' and a dopamine receptor. 'No receptor potential A' is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection.

  10. Sensitivity of combustion and ignition characteristics of the solid-fuel charge of the microelectromechanical system of a microthruster to macrokinetic and design parameters

    NASA Astrophysics Data System (ADS)

    Futko, S. I.; Ermolaeva, E. M.; Dobrego, K. V.; Bondarenko, V. P.; Dolgii, L. N.

    2012-07-01

    We have developed a sensitivity analysis permitting effective estimation of the change in the impulse responses of a microthruster and in the ignition characteristics of the solid-fuel charge caused by variation of the basic macrokinetic parameters of the mixed fuel and the design parameters of the microthruster's combustion chamber. On the basis of the proposed sensitivity analysis, we have estimated the spread of the propulsive force and impulse, as well as of the induction period and self-ignition temperature, as functions of the macrokinetic combustion parameters (pre-exponential factor, activation energy, density, and heat content) of the solid-fuel charge of the microthruster. The obtained results can be used for rapid and effective estimation of the spread of goal functions to provide stable physicochemical characteristics and impulse responses of solid-fuel mixtures in making and using microthrusters.

  11. Monitoring Tumor Response to Carbogen Breathing by Oxygen-Sensitive Magnetic Resonance Parameters to Predict the Outcome of Radiation Therapy: A Preclinical Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao-Pham, Thanh-Trang; Tran, Ly-Binh-An; Colliez, Florence

    Purpose: In an effort to develop noninvasive in vivo methods for mapping tumor oxygenation, magnetic resonance (MR)-derived parameters are being considered, including global R1, water R1, lipids R1, and R2*. R1 is sensitive to dissolved molecular oxygen, whereas R2* is sensitive to blood oxygenation, detecting changes in dHb. This work compares global R1, water R1, lipids R1, and R2* with pO2 assessed by electron paramagnetic resonance (EPR) oximetry, as potential markers of the outcome of radiation therapy (RT). Methods and Materials: R1, R2*, and EPR measurements were performed on rhabdomyosarcoma and 9L-glioma tumor models, under air and carbogen breathing conditions (95% O2, 5% CO2). Because the models demonstrated different radiosensitivity properties toward carbogen, a growth delay (GD) assay was performed on the rhabdomyosarcoma model and a tumor control dose 50% (TCD50) assay was performed on the 9L-glioma model. Results: Magnetic resonance imaging oxygen-sensitive parameters detected the positive changes in oxygenation induced by carbogen within tumors. No consistent correlation was seen throughout the study between MR parameters and pO2. Global and lipids R1 were found to be correlated to pO2 in the rhabdomyosarcoma model, whereas R2* was found to be inversely correlated to pO2 in the 9L-glioma model (P=.05 and .03). Carbogen increased the TCD50 of 9L-glioma but did not increase the GD of rhabdomyosarcoma. Only R2* was predictive (P<.05) for the curability of 9L-glioma at 40 Gy, a dose that showed a difference in response to RT between carbogen and air-breathing groups. 18F-FAZA positron emission tomography imaging has been shown to be a predictive marker under the same conditions. Conclusion: This work illustrates the sensitivity of oxygen-sensitive R1 and R2* parameters to changes in tumor oxygenation. However, R1

  12. Sensitivity Analysis of Methane Hydrate Reservoirs: Effects of Reservoir Parameters on Gas Productivity and Economics

    NASA Astrophysics Data System (ADS)

    Anderson, B. J.; Gaddipati, M.; Nyayapathi, L.

    2008-12-01

    This paper presents a parametric study of production rates of natural gas from gas hydrates by the method of depressurization, using CMG STARS. Seven factors/parameters were considered as perturbations from a base-case hydrate reservoir description based on Problem 7 of the International Methane Hydrate Reservoir Simulator Code Comparison Study led by the Department of Energy and the USGS. This reservoir is modeled after the inferred properties of the hydrate deposit at the Prudhoe Bay L-106 site. The sensitivity variables included were hydrate saturation, pressure (depth), temperature, bottom-hole pressure of the production well, free water saturation, intrinsic rock permeability, and porosity. A two-level (L=2) Plackett-Burman experimental design was used to study the relative effects of these factors. The measured variable was the discounted cumulative gas production. The discount rate chosen was 15%, resulting in the gas contribution to the net present value of a reservoir. Eight different designs were developed for conducting the sensitivity analysis, and the effects of the parameters on the real and discounted production rates are discussed. The break-even price in various cases and its dependence on the production parameters are given in the paper. As expected, initial reservoir temperature has the strongest positive effect on the productivity of a hydrate deposit, and the bottom-hole pressure in the production well has the strongest negative dependence. Also resulting in positive correlations are the intrinsic permeability and the initial free water saturation of the formation. Negative effects were found for initial hydrate saturation (at saturations greater than 50% of the pore space) and the reservoir porosity. These negative effects are related to the available sensible heat of the reservoir, with decreasing productivity due to decreasing available sensible heat. Finally, we conclude that for the base case reservoir, the break-even price (BEP
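    A saturated two-level screening design of the kind used in the study can be constructed and analyzed in a few lines; the sketch below builds an eight-run, seven-factor design (a resolution-III fractional factorial equivalent to an eight-run Plackett-Burman array) and estimates main effects, with placeholder response values standing in for the reservoir-simulation output.

        # Eight-run, seven-factor saturated two-level design (resolution-III fractional
        # factorial, equivalent to an eight-run Plackett-Burman array) with main-effect
        # estimation. Response values are placeholders, not reservoir-simulation output.
        import itertools
        import numpy as np

        factors = ["hydrate_sat", "pressure", "temperature", "bhp",
                   "water_sat", "permeability", "porosity"]

        # Base factors A, B, C form a full 2^3 factorial; D=AB, E=AC, F=BC, G=ABC.
        base = np.array(list(itertools.product([-1, 1], repeat=3)))
        A, B, C = base[:, 0], base[:, 1], base[:, 2]
        design = np.column_stack([A, B, C, A * B, A * C, B * C, A * B * C])

        # Hypothetical discounted cumulative gas production for each of the 8 runs.
        response = np.array([1.8, 2.9, 1.2, 2.4, 2.6, 3.8, 1.9, 3.1])

        # Main effect = mean(response at +1) - mean(response at -1) for each factor.
        for name, col in zip(factors, design.T):
            effect = response[col == 1].mean() - response[col == -1].mean()
            print(f"{name:13s} effect = {effect:+.2f}")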

  13. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

    The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three system components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify key model parameters that influence the estimates of the surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on the SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). This shows slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates; the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory. Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system
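    The way component sensitivities combine into an overall surveillance system sensitivity can be sketched stochastically: branch probabilities are sampled from uncertainty distributions, multiplied along the slaughter-surveillance pathway, and combined across components. All distributions, branch probabilities and the number of infected animals below are illustrative assumptions, not the values estimated for Canada.

        # Stochastic sketch of combining component sensitivities into an overall
        # surveillance system sensitivity (SSSe). All distributions and branch
        # probabilities are illustrative, not the values estimated for Canada.
        import numpy as np

        rng = np.random.default_rng(4)
        n_draws = 10_000

        # Slaughter-surveillance branch: detection of one infected animal requires it to
        # be slaughtered and inspected, a granuloma to be present and detected, the
        # sample to be submitted, and the laboratory test to be positive.
        p_slaughtered  = rng.beta(40, 60, n_draws)
        p_granuloma    = rng.beta(70, 30, n_draws)
        p_detected     = rng.beta(50, 50, n_draws)
        p_submitted    = rng.beta(80, 20, n_draws)
        p_lab_positive = rng.uniform(0.90, 0.99, n_draws)
        n_infected = 25                                  # assumed number of infected animals present

        p_one_animal = p_slaughtered * p_granuloma * p_detected * p_submitted * p_lab_positive
        cse_slaughter = 1.0 - (1.0 - p_one_animal) ** n_infected

        cse_export = rng.beta(2, 98, n_draws)            # export testing: low-sensitivity component
        cse_invest = rng.beta(1, 150, n_draws)           # disease investigation component

        ssse = 1.0 - (1.0 - cse_slaughter) * (1.0 - cse_export) * (1.0 - cse_invest)
        print("median SSSe:", np.median(ssse), "95% PI:", np.percentile(ssse, [2.5, 97.5]))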

  14. Sensitivity of turbine-height wind speeds to parameters in planetary boundary-layer and surface-layer schemes in the weather research and forecasting model

    DOE PAGES

    Yang, Ben; Qian, Yun; Berg, Larry K.; ...

    2016-07-21

    We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.

  15. Sensitivity of turbine-height wind speeds to parameters in planetary boundary-layer and surface-layer schemes in the weather research and forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Qian, Yun; Berg, Larry K.

    We evaluate the sensitivity of simulated turbine-height wind speeds to 26 parameters within the Mellor–Yamada–Nakanishi–Niino (MYNN) planetary boundary-layer scheme and MM5 surface-layer scheme of the Weather Research and Forecasting model over an area of complex terrain. An efficient sampling algorithm and generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of simulated turbine-height wind speeds. The results indicate that most of the variability in the ensemble simulations is due to parameters related to the dissipation of turbulent kinetic energy (TKE), Prandtl number, turbulent length scales, surface roughness, and the von Kármán constant. The parameter associated with the TKE dissipation rate is found to be most important, and a larger dissipation rate produces larger hub-height wind speeds. A larger Prandtl number results in smaller nighttime wind speeds. Increasing surface roughness reduces the frequencies of both extremely weak and strong airflows, implying a reduction in the variability of wind speed. All of the above parameters significantly affect the vertical profiles of wind speed and the magnitude of wind shear. Lastly, the relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability.

  16. Stability assessment and operating parameter optimization on experimental results in very small plasma focus, using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Jafari, Hossein; Habibi, Morteza

    2018-04-01

    Given the importance of stability in small-scale plasma focus devices for producing repeatable and strong pinching, a sensitivity analysis approach has been used to optimize the design parameters of a very low energy device (84 nF, 48 nH, 8-9.5 kV, ∼2.7-3.7 J). To optimize the device's functional specification, four different coaxial electrode configurations have been studied, scanning an argon gas pressure range from 0.6 to 1.5 mbar and a charging voltage range from 8.3 to 9.3 kV. The strongest and most efficient pinching was observed for the tapered anode configuration, over an expanded operating pressure range of 0.6 to 1.5 mbar. The analysis results showed that the highest sensitivity of the pinch voltage was associated with an argon gas pressure of 0.88 ± 0.8 mbar and a charging voltage of 8.3-8.5 kV, which were identified as the optimum operating parameters. From the viewpoint of the stability assessment of the device, the least variation in stable operation was observed for a charging voltage range of 8.3 to 8.7 kV and an operating pressure range from 0.6 to 1.1 mbar.

  17. Automated Optimization of Potential Parameters

    PubMed Central

    Michele, Di Pierro; Ron, Elber

    2013-01-01

    An algorithm and software to refine parameters of empirical energy functions according to condensed phase experimental measurements are discussed. The algorithm is based on sensitivity analysis and local minimization of the differences between experiment and simulation as a function of potential parameters. It is illustrated for a toy problem of alanine dipeptide and is applied to folding of the peptide WAAAH. The helix fraction is highly sensitive to the potential parameters while the slope of the melting curve is not. The sensitivity variations make it difficult to satisfy both observations simultaneously. We conjecture that there is no set of parameters that reproduces experimental melting curves of short peptides that are modeled with the usual functional form of a force field. PMID:24015115

  18. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters, however, are fixed numbers in the computer code, which hinders model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, as well as their component fluxes, were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indices above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were
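    A minimal Sobol' first-order estimator (Saltelli-style pick-and-freeze) is sketched below for a generic function; in the study the model evaluations would be Noah-MP runs and the inputs the standard plus hard-coded parameters. The test function and sample sizes are synthetic.

        # Minimal Saltelli-style estimator of Sobol' first-order indices for a generic
        # function f(x) on the unit hypercube; the test function is synthetic.
        import numpy as np

        def first_order_sobol(f, n_params, n_samples=4096, seed=0):
            rng = np.random.default_rng(seed)
            A = rng.random((n_samples, n_params))
            B = rng.random((n_samples, n_params))
            fA, fB = f(A), f(B)
            var_y = np.var(np.concatenate([fA, fB]), ddof=1)
            S1 = np.empty(n_params)
            for i in range(n_params):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                          # swap only the i-th column
                S1[i] = np.mean(fB * (f(ABi) - fA)) / var_y  # Saltelli (2010) estimator
            return S1

        # Synthetic "flux" with one dominant, one moderate and two negligible inputs.
        def toy_flux(x):
            return 5.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.05 * x[:, 2] + 0.01 * x[:, 3]

        print(np.round(first_order_sobol(toy_flux, n_params=4), 3))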

  19. Molecular dissection of colorectal cancer in pre-clinical models identifies biomarkers predicting sensitivity to EGFR inhibitors

    PubMed Central

    Schütte, Moritz; Risch, Thomas; Abdavi-Azar, Nilofar; Boehnke, Karsten; Schumacher, Dirk; Keil, Marlen; Yildiriman, Reha; Jandrasits, Christine; Borodina, Tatiana; Amstislavskiy, Vyacheslav; Worth, Catherine L.; Schweiger, Caroline; Liebs, Sandra; Lange, Martin; Warnatz, Hans- Jörg; Butcher, Lee M.; Barrett, James E.; Sultan, Marc; Wierling, Christoph; Golob-Schwarzl, Nicole; Lax, Sigurd; Uranitsch, Stefan; Becker, Michael; Welte, Yvonne; Regan, Joseph Lewis; Silvestrov, Maxine; Kehler, Inge; Fusi, Alberto; Kessler, Thomas; Herwig, Ralf; Landegren, Ulf; Wienke, Dirk; Nilsson, Mats; Velasco, Juan A.; Garin-Chesa, Pilar; Reinhard, Christoph; Beck, Stephan; Schäfer, Reinhold; Regenbrecht, Christian R. A.; Henderson, David; Lange, Bodo; Haybaeck, Johannes; Keilholz, Ulrich; Hoffmann, Jens; Lehrach, Hans; Yaspo, Marie-Laure

    2017-01-01

    Colorectal carcinoma represents a heterogeneous entity, with only a fraction of the tumours responding to available therapies, requiring a better molecular understanding of the disease for precision oncology. To address this challenge, the OncoTrack consortium recruited 106 CRC patients (stages I–IV) and developed a pre-clinical platform generating a compendium of drug sensitivity data totalling >4,000 assays testing 16 clinical drugs on patient-derived in vivo and in vitro models. This large biobank of 106 tumours, 35 organoids and 59 xenografts, with extensive omics data comparing donor tumours and derived models, provides a resource for advancing our understanding of CRC. The models recapitulate many of the genetic and transcriptomic features of the donors but define less complex molecular sub-groups because of the loss of human stroma. Linking molecular profiles with drug sensitivity patterns identifies novel biomarkers, including a signature outperforming RAS/RAF mutations in predicting sensitivity to the EGFR inhibitor cetuximab. PMID:28186126

  20. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  1. Integrative Approach to Pain Genetics Identifies Pain Sensitivity Loci across Diseases

    PubMed Central

    Ruau, David; Dudley, Joel T.; Chen, Rong; Phillips, Nicholas G.; Swan, Gary E.; Lazzeroni, Laura C.; Clark, J. David

    2012-01-01

    Identifying human genes relevant for the processing of pain requires difficult-to-conduct and expensive large-scale clinical trials. Here, we examine a novel integrative paradigm for data-driven discovery of pain gene candidates, taking advantage of the vast amount of existing disease-related clinical literature and gene expression microarray data stored in large international repositories. First, thousands of diseases were ranked according to a disease-specific pain index (DSPI), derived from Medical Subject Heading (MESH) annotations in MEDLINE. Second, gene expression profiles of 121 of these human diseases were obtained from public sources. Third, genes with expression variation significantly correlated with DSPI across diseases were selected as candidate pain genes. Finally, selected candidate pain genes were genotyped in an independent human cohort and prospectively evaluated for significant association between variants and measures of pain sensitivity. The strongest signal was with rs4512126 (5q32, ABLIM3, P = 1.3×10−10) for the sensitivity to cold pressor pain in males, but not in females. Significant associations were also observed with rs12548828, rs7826700 and rs1075791 on 8q22.2 within NCALD (P = 1.7×10−4, 1.8×10−4, and 2.2×10−4 respectively). Our results demonstrate the utility of a novel paradigm that integrates publicly available disease-specific gene expression data with clinical data curated from MEDLINE to facilitate the discovery of pain-relevant genes. This data-derived list of pain gene candidates enables additional focused and efficient biological studies validating additional candidates. PMID:22685391

  2. Sensitivity of corneal biomechanical and optical behavior to material parameters using design of experiments method.

    PubMed

    Xu, Mengchen; Lerner, Amy L; Funkenbusch, Paul D; Richhariya, Ashutosh; Yoon, Geunyoung

    2018-02-01

    The optical performance of the human cornea under intraocular pressure (IOP) is the result of complex material properties and their interactions. The measurement of the numerous material parameters that define this material behavior may be key in the refinement of patient-specific models. The goal of this study was to investigate the relative contribution of these parameters to the biomechanical and optical responses of human cornea predicted by a widely accepted anisotropic hyperelastic finite element model, with regional variations in the alignment of fibers. Design of experiments methods were used to quantify the relative importance of material properties including matrix stiffness, fiber stiffness, fiber nonlinearity and fiber dispersion under physiological IOP. Our sensitivity results showed that corneal apical displacement was influenced nearly evenly by matrix stiffness, fiber stiffness and nonlinearity. However, the variations in corneal optical aberrations (refractive power and spherical aberration) were primarily dependent on the value of the matrix stiffness. The optical aberrations predicted by variations in this material parameter were sufficiently large to predict clinically important changes in retinal image quality. Therefore, well-characterized individual variations in matrix stiffness could be critical in cornea modeling in order to reliably predict optical behavior under different IOPs or after corneal surgery.

  3. A Computational Framework for Identifiability and Ill-Conditioning Analysis of Lithium-Ion Battery Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    López C, Diana C.; Wozny, Günter; Flores-Tlacuahuac, Antonio

    2016-03-23

    The lack of informative experimental data and the complexity of first-principles battery models make the recovery of kinetic, transport, and thermodynamic parameters complicated. We present a computational framework that combines sensitivity, singular value, and Monte Carlo analysis to explore how different sources of experimental data affect parameter structural ill conditioning and identifiability. Our study is conducted on a modified version of the Doyle-Fuller-Newman model. We demonstrate that the use of voltage discharge curves only enables the identification of a small parameter subset, regardless of the number of experiments considered. Furthermore, we show that the inclusion of a single electrolyte concentration measurement significantly aids identifiability and mitigates ill-conditioning.

  4. Sensitivity and Specificity of Cetuximab-IRDye800CW to Identify Regional Metastatic Disease in Head and Neck Cancer.

    PubMed

    Rosenthal, Eben L; Moore, Lindsay S; Tipirneni, Kiranya; de Boer, Esther; Stevens, Todd M; Hartman, Yolanda E; Carroll, William R; Zinn, Kurt R; Warram, Jason M

    2017-08-15

    Purpose: Comprehensive cervical lymphadenectomy can be associated with significant morbidity and poor quality of life. This study evaluated the sensitivity and specificity of cetuximab-IRDye800CW to identify metastatic disease in patients with head and neck cancer. Experimental Design: Consenting patients scheduled for curative resection were enrolled in a clinical trial to evaluate the safety and specificity of cetuximab-IRDye800CW. Patients (n = 12) received escalating doses of the study drug. Where indicated, cervical lymphadenectomy accompanied primary tumor resection, which occurred 3 to 7 days following intravenous infusion of cetuximab-IRDye800CW. All 471 dissected lymph nodes were imaged with a closed-field, near-infrared imaging device during gross processing of the fresh specimens. Intraoperative imaging of exposed neck levels was performed with an open-field fluorescence imaging device. Blinded assessments of the fluorescence data were compared to histopathology to calculate sensitivity, specificity, negative predictive value (NPV), and positive predictive value (PPV). Results: Of the 35 nodes diagnosed pathologically positive, 34 were correctly identified with fluorescence imaging, yielding a sensitivity of 97.2%. Of the 435 pathologically negative nodes, 401 were correctly assessed using fluorescence imaging, yielding a specificity of 92.7%. The NPV was determined to be 99.7%, and the PPV was 50.7%. When 37 fluorescently false-positive nodes were sectioned deeper (1 mm) into their respective blocks, metastatic cancer was found in 8.1% of the recut nodal specimens, which altered staging in two of those cases. Conclusions: Fluorescence imaging of lymph nodes after systemic cetuximab-IRDye800CW administration demonstrated high sensitivity and was capable of identifying additional positive nodes on deep sectioning. Clin Cancer Res; 23(16); 4744-52. ©2017 American Association for Cancer Research.

  5. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers about the current or future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding when generating robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.

  6. High-Resolution Linkage Analyses to Identify Genes That Influence Varroa Sensitive Hygiene Behavior in Honey Bees

    PubMed Central

    Tsuruda, Jennifer M.; Harris, Jeffrey W.; Bourgeois, Lanie; Danka, Robert G.; Hunt, Greg J.

    2012-01-01

    Varroa mites (V. destructor) are a major threat to honey bees (Apis mellifera) and beekeeping worldwide and likely lead to colony decline if colonies are not treated. Most treatments involve chemical control of the mites; however, Varroa has evolved resistance to many of these miticides, leaving beekeepers with a limited number of alternatives. A non-chemical control method is highly desirable for numerous reasons including lack of chemical residues and decreased likelihood of resistance. Varroa sensitive hygiene behavior is one of two behaviors identified that are most important for controlling the growth of Varroa populations in bee hives. To identify genes influencing this trait, a study was conducted to map quantitative trait loci (QTL). Individual workers of a backcross family were observed and evaluated for their VSH behavior in a mite-infested observation hive. Bees that uncapped or removed pupae were identified. The genotypes for 1,340 informative single nucleotide polymorphisms were used to construct a high-resolution genetic map and interval mapping was used to analyze the association of the genotypes with the performance of Varroa sensitive hygiene. We identified one major QTL on chromosome 9 (LOD score = 3.21) and a suggestive QTL on chromosome 1 (LOD = 1.95). The QTL confidence interval on chromosome 9 contains the gene ‘no receptor potential A’ and a dopamine receptor. ‘No receptor potential A’ is involved in vision and olfaction in Drosophila, and dopamine signaling has been previously shown to be required for aversive olfactory learning in honey bees, which is probably necessary for identifying mites within brood cells. Further studies on these candidate genes may allow for breeding bees with this trait using marker-assisted selection. PMID:23133626

  7. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
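    The variogram idea underlying VARS can be illustrated simply: along each factor's direction, half the mean squared difference of the response at points separated by a lag h gives a directional variogram whose growth with h reflects that factor's sensitivity. The sketch below is a simplified illustration on a synthetic response surface, not the STAR-VARS algorithm itself.

        # Directional variogram of a synthetic response surface: gamma(h) = half the mean
        # squared response difference at points separated by lag h along one factor.
        # Faster growth of gamma with h indicates a more sensitive factor.
        import numpy as np

        def directional_variogram(f, n_dims, dim, lags, n_base=200, seed=0):
            rng = np.random.default_rng(seed)
            bases = rng.random((n_base, n_dims)) * (1.0 - lags.max())   # keep lagged points in [0, 1]
            gamma = []
            for h in lags:
                shifted = bases.copy()
                shifted[:, dim] += h
                gamma.append(0.5 * np.mean((f(shifted) - f(bases)) ** 2))
            return np.array(gamma)

        def toy_response(x):                                 # synthetic response surface on [0, 1]^3
            return np.sin(6 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.05 * x[:, 2]

        lags = np.array([0.05, 0.1, 0.2, 0.3])
        for d in range(3):
            print("factor", d, np.round(directional_variogram(toy_response, 3, d, lags), 3))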

  8. Quantitative Trait Loci for Light Sensitivity, Body Weight, Body Size, and Morphological Eye Parameters in the Bumblebee, Bombus terrestris.

    PubMed

    Maebe, Kevin; Meeus, Ivan; De Riek, Jan; Smagghe, Guy

    2015-01-01

    Bumblebees such as Bombus terrestris are essential pollinators in natural and managed ecosystems. In addition, this species is intensively used in agriculture for its pollination services, for instance in tomato and pepper greenhouses. Here we performed a quantitative trait loci (QTL) analysis on B. terrestris using 136 microsatellite DNA markers to identify genes linked with 20 traits including light sensitivity, body size and mass, and eye and hind leg measures. By composite interval mapping (IM), we found 83 and 34 suggestive QTLs for 19 of the 20 traits at linkage-group-wide significance levels of p = 0.05 and 0.01, respectively. Furthermore, we also found five significant QTLs at the genome-wide significance level of p = 0.05. Individual QTLs accounted for 7.5-53.3% of the phenotypic variation. For 15 traits, at least one QTL was confirmed with multiple QTL model mapping. Multivariate principal components analysis confirmed 11 univariate suggestive QTLs but revealed three suggestive QTLs not identified by the individual traits. We also identified several candidate genes linked with light sensitivity; in particular, the Phosrestin-1-like gene is a primary candidate because of its phototransduction function. In conclusion, we believe that the suggestive and significant QTLs and markers identified here can be of use in marker-assisted breeding to improve selection towards light-sensitive bumblebees, and thus also the pollination service of bumblebees.

  9. Genome-Wide Association Study of the Modified Stumvoll Insulin Sensitivity Index Identifies BCL2 and FAM19A2 as Novel Insulin Sensitivity Loci

    PubMed Central

    Gustafsson, Stefan; Rybin, Denis; Stančáková, Alena; Chen, Han; Liu, Ching-Ti; Hong, Jaeyoung; Jensen, Richard A.; Rice, Ken; Morris, Andrew P.; Mägi, Reedik; Tönjes, Anke; Prokopenko, Inga; Kleber, Marcus E.; Delgado, Graciela; Silbernagel, Günther; Jackson, Anne U.; Appel, Emil V.; Grarup, Niels; Lewis, Joshua P.; Montasser, May E.; Landenvall, Claes; Staiger, Harald; Luan, Jian’an; Frayling, Timothy M.; Weedon, Michael N.; Xie, Weijia; Morcillo, Sonsoles; Martínez-Larrad, María Teresa; Biggs, Mary L.; Chen, Yii-Der Ida; Corbaton-Anchuelo, Arturo; Færch, Kristine; Gómez-Zumaquero, Juan Miguel; Goodarzi, Mark O.; Kizer, Jorge R.; Koistinen, Heikki A.; Leong, Aaron; Lind, Lars; Lindgren, Cecilia; Machicao, Fausto; Manning, Alisa K.; Martín-Núñez, Gracia María; Rojo-Martínez, Gemma; Rotter, Jerome I.; Siscovick, David S.; Zmuda, Joseph M.; Zhang, Zhongyang; Serrano-Rios, Manuel; Smith, Ulf; Soriguer, Federico; Hansen, Torben; Jørgensen, Torben J.; Linnenberg, Allan; Pedersen, Oluf; Walker, Mark; Langenberg, Claudia; Scott, Robert A.; Wareham, Nicholas J.; Fritsche, Andreas; Häring, Hans-Ulrich; Stefan, Norbert; Groop, Leif; O’Connell, Jeff R.; Boehnke, Michael; Bergman, Richard N.; Collins, Francis S.; Mohlke, Karen L.; Tuomilehto, Jaakko; März, Winfried; Kovacs, Peter; Stumvoll, Michael; Psaty, Bruce M.; Kuusisto, Johanna; Laakso, Markku; Meigs, James B.; Dupuis, Josée; Ingelsson, Erik; Florez, Jose C.

    2016-01-01

    Genome-wide association studies (GWAS) have found few common variants that influence fasting measures of insulin sensitivity. We hypothesized that a GWAS of an integrated assessment of fasting and dynamic measures of insulin sensitivity would detect novel common variants. We performed a GWAS of the modified Stumvoll Insulin Sensitivity Index (ISI) within the Meta-Analyses of Glucose and Insulin-Related Traits Consortium. Discovery for genetic association was performed in 16,753 individuals, and replication was attempted for the 23 most significant novel loci in 13,354 independent individuals. Association with ISI was tested in models adjusted for age, sex, and BMI and in a model analyzing the combined influence of the genotype effect adjusted for BMI and the interaction effect between the genotype and BMI on ISI (model 3). In model 3, three variants reached genome-wide significance: rs13422522 (NYAP2; P = 8.87 × 10⁻¹¹), rs12454712 (BCL2; P = 2.7 × 10⁻⁸), and rs10506418 (FAM19A2; P = 1.9 × 10⁻⁸). The association at NYAP2 was eliminated by conditioning on the known IRS1 insulin sensitivity locus; the BCL2 and FAM19A2 associations were independent of known cardiometabolic loci. In conclusion, we identified two novel loci and replicated known variants associated with insulin sensitivity. Further studies are needed to clarify the causal variant and function at the BCL2 and FAM19A2 loci. PMID:27416945

  10. Iterative integral parameter identification of a respiratory mechanics model.

    PubMed

    Schranz, Christoph; Docherty, Paul D; Chiew, Yeong Shiong; Möller, Knut; Chase, J Geoffrey

    2012-07-18

    Patient-specific respiratory mechanics models can support the evaluation of optimal lung protective ventilator settings during ventilation therapy. Clinical application requires that the individual's model parameter values must be identified with information available at the bedside. Multiple linear regression or gradient-based parameter identification methods are highly sensitive to noise and initial parameter estimates. Thus, they are difficult to apply at the bedside to support therapeutic decisions. An iterative integral parameter identification method is applied to a second order respiratory mechanics model. The method is compared to the commonly used regression methods and error-mapping approaches using simulated and clinical data. The clinical potential of the method was evaluated on data from 13 Acute Respiratory Distress Syndrome (ARDS) patients. The iterative integral method converged to error minima 350 times faster than the Simplex Search Method using simulation data sets and 50 times faster using clinical data sets. Established regression methods reported erroneous results due to sensitivity to noise. In contrast, the iterative integral method was effective independent of initial parameter estimations, and converged successfully in each case tested. These investigations reveal that the iterative integral method is beneficial with respect to computing time, operator independence and robustness, and thus applicable at the bedside for this clinical application.
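
    The study applies the iterative integral method to a second-order respiratory mechanics model; the sketch below only illustrates the underlying idea on the simpler single-compartment equation of motion, Paw(t) = R·Q(t) + E·V(t) + P0. Integrating the equation once turns identification into a linear least-squares problem in (R, E, P0) and avoids differentiating noisy signals. All signal names and the synthetic "true" values are assumptions for illustration, not the paper's data or its iterative refinement step.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def fit_first_order_model(t, paw, flow):
    """Fit Paw(t) = R*Q(t) + E*V(t) + P0 by integrating once:
       int(Paw) = R*(V(t)-V(0)) + E*int(V) + P0*t, which is linear in (R, E, P0)."""
    vol = cumulative_trapezoid(flow, t, initial=0.0)       # V(t) - V(0)
    int_paw = cumulative_trapezoid(paw, t, initial=0.0)
    int_vol = cumulative_trapezoid(vol, t, initial=0.0)
    A = np.column_stack([vol, int_vol, t])
    coeffs, *_ = np.linalg.lstsq(A, int_paw, rcond=None)
    return coeffs                                          # (R, E, P0)

# Synthetic check with assumed "true" values R = 5 cmH2O.s/L, E = 25 cmH2O/L, P0 = 5 cmH2O
t = np.linspace(0.0, 2.0, 400)
flow = 0.5 * np.sin(np.pi * t)                             # L/s
vol = cumulative_trapezoid(flow, t, initial=0.0)           # L
paw = 5.0 * flow + 25.0 * vol + 5.0 + 0.2 * np.random.default_rng(0).standard_normal(t.size)
print(fit_first_order_model(t, paw, flow))                 # approximately (5, 25, 5)
```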

  11. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the

  12. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  13. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  14. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of the geographical location on the Net Present Value calculated over a 20-year lifespan (NPV20) of each technology and its robustness towards typical process fluctuations and operational upsets were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is considered as relevant as the overall costs (NPV20) in the economic evaluation, the hybrid technology moves up alongside biotrickling filtration as one of the preferred technologies. Copyright © 2012 Elsevier Inc. All rights reserved.
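
    For reference, a generic form of the 20-year net present value used as the comparison metric is sketched below; the sign convention, the discount rate r, and the split into an initial investment C_0 and annual net cash flows CF_t are illustrative assumptions, not values from the study.

```latex
\mathrm{NPV}_{20} \;=\; -\,C_0 \;+\; \sum_{t=1}^{20} \frac{CF_t}{(1+r)^{t}}
```

For odour abatement, the CF_t are dominated by (negative) operating costs, so technologies with lower and less uncertain operating costs, such as biofiltration and biotrickling filtration, tend to dominate the NPV20 ranking.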

  15. Effect of cinnamon on glucose control and lipid parameters.

    PubMed

    Baker, William L; Gutierrez-Williams, Gabriela; White, C Michael; Kluger, Jeffrey; Coleman, Craig I

    2008-01-01

    We performed a meta-analysis of randomized controlled trials of cinnamon to better characterize its impact on glucose and plasma lipids. A systematic literature search through July 2007 was conducted to identify randomized placebo-controlled trials of cinnamon that reported data on A1C, fasting blood glucose (FBG), or lipid parameters. The mean change in each study end point from baseline was treated as a continuous variable, and the weighted mean difference was calculated as the difference between the mean value in the treatment and control groups. A random-effects model was used. Five prospective randomized controlled trials (n = 282) were identified. Upon meta-analysis, the use of cinnamon did not significantly alter A1C, FBG, or lipid parameters. Subgroup and sensitivity analyses did not significantly change the results. Cinnamon does not appear to improve A1C, FBG, or lipid parameters in patients with type 1 or type 2 diabetes.

  16. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.

  17. Simultaneous Estimation of Microphysical Parameters and Atmospheric State Variables With Radar Data and Ensemble Square-root Kalman Filter

    NASA Astrophysics Data System (ADS)

    Tong, M.; Xue, M.

    2006-12-01

    An important source of model error for convective-scale data assimilation and prediction is microphysical parameterization. This study investigates the possibility of estimating up to five fundamental microphysical parameters, which are closely involved in the definition of drop size distribution of microphysical species in a commonly used single-moment ice microphysics scheme, using radar observations and the ensemble Kalman filter method. The five parameters include the intercept parameters for rain, snow and hail/graupel, and the bulk densities of hail/graupel and snow. Parameter sensitivity and identifiability are first examined. The ensemble square-root Kalman filter (EnSRF) is employed for simultaneous state and parameter estimation. OSS experiments are performed for a model-simulated supercell storm, in which the five microphysical parameters are estimated individually or in different combinations starting from different initial guesses. When error exists in only one of the microphysical parameters, the parameter can be successfully estimated without exception. The estimation of multiple parameters is found to be less robust, with end results of estimation being sensitive to the realization of the initial parameter perturbation. This is believed to be because of the reduced parameter identifiability and the existence of non-unique solutions. The results of state estimation are, however, always improved when simultaneous parameter estimation is performed, even when the estimated parameter values are not accurate.
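
    A minimal sketch of the state-augmentation idea used for simultaneous state and parameter estimation; this is a generic perturbed-observation EnKF on a scalar toy model, not the EnSRF or the microphysics scheme of the study. The uncertain parameter is appended to the state vector, so the same analysis step that assimilates observations also updates the parameter through its ensemble correlation with the observed state.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ens, n_steps = 100, 60
a_true, obs_sd, model_sd = 0.9, 0.5, 0.1

# Augmented ensemble: column 0 = model state x, column 1 = uncertain parameter a
ens = np.column_stack([rng.normal(0.0, 1.0, n_ens),       # initial state guesses
                       rng.normal(0.5, 0.3, n_ens)])      # initial parameter guesses

x_truth = 2.0
for k in range(n_steps):
    # Truth and observation (toy dynamics: x_{k+1} = a*x_k + forcing + noise)
    x_truth = a_true * x_truth + rng.normal(0.0, model_sd) + 1.0
    y_obs = x_truth + rng.normal(0.0, obs_sd)

    # Forecast: propagate each member with its own parameter (parameter persists)
    ens[:, 0] = ens[:, 1] * ens[:, 0] + rng.normal(0.0, model_sd, n_ens) + 1.0

    # Analysis: perturbed-observation EnKF update of the augmented state [x, a]
    H = np.array([[1.0, 0.0]])                       # we observe x only
    P = np.cov(ens.T)                                # 2x2 ensemble covariance
    K = P @ H.T / (H @ P @ H.T + obs_sd**2)          # Kalman gain (2x1)
    y_pert = y_obs + rng.normal(0.0, obs_sd, n_ens)  # perturbed observations
    ens += (y_pert - ens[:, 0])[:, None] * K.T       # update both x and a

print("estimated parameter:", ens[:, 1].mean(), "(truth:", a_true, ")")
```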

  18. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and to accurately estimate the sensitivities of the remaining potentially sensitive parameters. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over

  19. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
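
    A minimal sketch of the nondeterministic approach described above: fixed parameters are replaced by probability distributions and propagated through the model by Monte Carlo sampling to obtain an uncertainty band on the predicted spectrum. The model, the parameter names (c1, c2), and their distributions are placeholders, not the broadband shock-associated noise prediction model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical model: spectral level L(f) depends on two uncertain parameters (c1, c2)
freqs = np.linspace(0.1, 4.0, 50)
def model(c1, c2, f):
    return 10.0 * np.log10(c1 / (1.0 + (f - 1.0) ** 2) + c2)

# Replace fixed parameters by (assumed) probability distributions and propagate
c1 = rng.lognormal(mean=0.0, sigma=0.2, size=n)
c2 = rng.uniform(0.05, 0.15, size=n)
samples = np.array([model(a, b, freqs) for a, b in zip(c1, c2)])

median = np.percentile(samples, 50, axis=0)
lo, hi = np.percentile(samples, [2.5, 97.5], axis=0)   # 95% uncertainty band on the prediction
print(np.round(median[:5], 2), np.round(lo[:5], 2), np.round(hi[:5], 2))
```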

  20. Investigation, development and application of optimal output feedback theory. Vol. 4: Measures of eigenvalue/eigenvector sensitivity to system parameters and unmodeled dynamics

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1987-01-01

    Some measures of eigenvalue and eigenvector sensitivity applicable to both continuous and discrete linear systems are developed and investigated. An infinite series representation is developed for the eigenvalues and eigenvectors of a system. The coefficients of the series are coupled, but can be obtained recursively using a nonlinear coupled vector difference equation. A new sensitivity measure is developed by considering the effects of unmodeled dynamics. It is shown that the sensitivity is high when any unmodeled eigenvalue is near a modeled eigenvalue. Using a simple example where the sensor dynamics have been neglected, it is shown that high feedback gains produce high eigenvalue/eigenvector sensitivity. The smallest singular value of the return difference is shown not to reflect eigenvalue sensitivity since it increases with the feedback gains. Using an upper bound obtained from the infinite series, a procedure to evaluate whether the sensitivity to parameter variations is within given acceptable bounds is developed and demonstrated by an example.
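
    The standard first-order perturbation result (notation mine, consistent with the leading term of such a series representation) makes the reported behavior explicit: for a simple eigenvalue λ_i of A(p) with right eigenvector v_i and left eigenvectors w_j,

```latex
\frac{\partial \lambda_i}{\partial p}
  \;=\; \frac{w_i^{H}\,\frac{\partial A}{\partial p}\,v_i}{w_i^{H} v_i},
\qquad
\frac{\partial v_i}{\partial p}
  \;=\; \sum_{j \neq i}
        \frac{w_j^{H}\,\frac{\partial A}{\partial p}\,v_i}
             {(\lambda_i - \lambda_j)\,w_j^{H} v_j}\; v_j
  \;+\; \bigl(\text{normalization-dependent component along } v_i\bigr).
```

The denominators λ_i − λ_j show directly why sensitivity becomes large when an unmodeled eigenvalue lies close to a modeled one, and why feedback gains that drive eigenvalues toward one another increase eigenvalue/eigenvector sensitivity.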

  1. A Process-based, Climate-Sensitive Model to Derive Methane Emissions from Natural Wetlands: Application to 5 Wetland Sites, Sensitivity to Model Parameters and Climate

    NASA Technical Reports Server (NTRS)

    Walter, Bernadette P.; Heimann, Martin

    1999-01-01

    Methane emissions from natural wetlands constitute the largest methane source at present and depend strongly on climate. In order to investigate the response of methane emissions from natural wetlands to climate variations, a 1-dimensional process-based climate-sensitive model to derive methane emissions from natural wetlands is developed. In the model the processes leading to methane emission are simulated within a 1-dimensional soil column and the three different transport mechanisms (diffusion, plant-mediated transport and ebullition) are modeled explicitly. The model forcing consists of daily values of soil temperature, water table and Net Primary Productivity, and at permafrost sites the thaw depth is included. The methane model is tested using observational data obtained at 5 wetland sites located in North America, Europe and Central America, representing a large variety of environmental conditions. It can be shown that in most cases seasonal variations in methane emissions can be explained by the combined effect of changes in soil temperature and the position of the water table. Our results also show that a process-based approach is needed, because there is no simple relationship between these controlling factors and methane emissions that applies to a variety of wetland sites. The sensitivity of the model to the choice of key model parameters is tested and further sensitivity tests are performed to demonstrate how methane emissions from wetlands respond to climate variations.

  2. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles on the way to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest or the output variable. It also provides ways to calculate explicit measures of importance of input variables (first order and total effect sensitivity indices) in regard to influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total effect sensitivity indices. The results of the present
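
    A minimal sketch of the first-order and total-effect index estimators referred to above (Saltelli-style sampling with the Jansen total-effect estimator); the three-input additive test function is an illustrative stand-in for the wind energy assessment model.

```python
import numpy as np

def sobol_indices(f, dim, n=50_000, rng=None):
    """Saltelli-style Monte Carlo estimates of first-order (S) and total-effect (ST)
    Sobol' indices for independent U(0,1) inputs."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(size=(n, dim))
    B = rng.uniform(size=(n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]), ddof=1)
    S, ST = np.empty(dim), np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                             # column i taken from B
        fABi = f(ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var          # Saltelli et al. (2010) first-order estimator
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var   # Jansen total-effect estimator
    return S, ST

# Illustrative additive model (an assumption, not the wind-energy model): y = x1 + 2*x2 + 0.5*x3
f = lambda X: X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]
S, ST = sobol_indices(f, dim=3, rng=0)
print(np.round(S, 3), np.round(ST, 3))   # both near (0.19, 0.76, 0.05) for this additive model
```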

  3. Geriatric-specific triage criteria are more sensitive than standard adult criteria in identifying need for trauma center care in injured older adults.

    PubMed

    Ichwan, Brian; Darbha, Subrahmanyam; Shah, Manish N; Thompson, Laura; Evans, David C; Boulger, Creagh T; Caterino, Jeffrey M

    2015-01-01

    We evaluate the sensitivity of Ohio's 2009 emergency medical services (EMS) geriatric trauma triage criteria compared with the previous adult triage criteria in identifying need for trauma center care among older adults. We studied a retrospective cohort of injured patients aged 16 years or older in the 2006 to 2011 Ohio Trauma Registry. Patients aged 70 years or older were considered geriatric. We identified whether each patient met the geriatric and the adult triage criteria. The outcome measure was need for trauma center care, defined by surrogate markers: Injury Severity Score greater than 15, operating room in fewer than 48 hours, any ICU stay, and inhospital mortality. We calculated sensitivity and specificity of both triage criteria for both age groups. We included 101,577 patients; 33,379 (33%) were geriatric. Overall, 57% of patients met adult criteria and 68% met geriatric criteria. Using Injury Severity Score, for older adults geriatric criteria were more sensitive for need for trauma center care (93%; 95% confidence interval [CI] 92% to 93%) than adult criteria (61%; 95% CI 60% to 62%). Geriatric criteria decreased specificity in older adults from 61% (95% CI 61% to 62%) to 49% (95% CI 48% to 49%). Geriatric criteria in older adults (93% sensitivity, 49% specificity) performed similarly to the adult criteria in younger adults (sensitivity 87% and specificity 44%). Similar patterns were observed for other outcomes. Standard adult EMS triage guidelines provide poor sensitivity in older adults. Ohio's geriatric trauma triage guidelines significantly improve sensitivity in identifying Injury Severity Score and other surrogate markers of the need for trauma center care, with modest decreases in specificity for older adults. Copyright © 2014 American College of Emergency Physicians. Published by Elsevier Inc. All rights reserved.

  4. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
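
    For the simplest single-species case (notation mine, not the paper's multistate formulation), with per-interval colonization probability γ and extinction probability ε, the equilibrium occupancy and its sensitivities have closed forms that illustrate the kind of quantities such an analysis targets:

```latex
\psi^{*} \;=\; \frac{\gamma}{\gamma + \varepsilon},
\qquad
\frac{\partial \psi^{*}}{\partial \gamma} \;=\; \frac{\varepsilon}{(\gamma + \varepsilon)^{2}},
\qquad
\frac{\partial \psi^{*}}{\partial \varepsilon} \;=\; -\,\frac{\gamma}{(\gamma + \varepsilon)^{2}} .
```

Multistate and environmentally varying versions replace this simple ratio with the stationary distribution of the full transition matrix, which is what the general methods above operate on.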

  5. Sensitivity of Turbine-Height Wind Speeds to Parameters in Planetary Boundary-Layer and Surface-Layer Schemes in the Weather Research and Forecasting Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Ben; Qian, Yun; Berg, Larry K.

    We evaluate the sensitivity of simulated turbine-height winds to 26 parameters applied in a planetary boundary layer (PBL) scheme and a surface layer scheme of the Weather Research and Forecasting (WRF) model over an area of complex terrain during the Columbia Basin Wind Energy Study. An efficient sampling algorithm and a generalized linear model are used to explore the multiple-dimensional parameter space and quantify the parametric sensitivity of modeled turbine-height winds. The results indicate that most of the variability in the ensemble simulations is contributed by parameters related to the dissipation of the turbulence kinetic energy (TKE), Prandtl number, turbulence length scales, surface roughness, and the von Kármán constant. The relative contributions of individual parameters are found to be dependent on both the terrain slope and atmospheric stability. The parameter associated with the TKE dissipation rate is found to be the most important one, and a larger dissipation rate can produce larger hub-height winds. A larger Prandtl number results in weaker nighttime winds. Increasing surface roughness reduces the frequencies of both extremely weak and strong winds, implying a reduction in the variability of the wind speed. All of the above parameters can significantly affect the vertical profiles of wind speed, the altitude of the low-level jet and the magnitude of the wind shear strength. The wind direction is found to be modulated by the same subset of influential parameters. Remainder of abstract is in attachment.

  6. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may also become computationally expensive in the case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to two widely used global SA methods: the screening method known as Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Sobol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. The results show that the new frugal method is able to test the convergence and therefore the reliability of SA results in an
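
    A minimal radial one-at-a-time sketch of the Morris / Elementary Effects screening referenced above (the MVA convergence test itself and trajectory-based designs are not reproduced here); the test function and the step size Δ are illustrative assumptions.

```python
import numpy as np

def elementary_effects(f, dim, n_base=50, delta=0.1, rng=None):
    """Radial one-at-a-time estimate of the Morris screening measures:
    mu* (mean absolute elementary effect) and sigma (std of elementary effects)."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(0.0, 1.0 - delta, size=(n_base, dim))
    fX = f(X)
    EE = np.empty((n_base, dim))
    for i in range(dim):
        Xi = X.copy()
        Xi[:, i] += delta                      # perturb one factor at a time
        EE[:, i] = (f(Xi) - fX) / delta
    return np.mean(np.abs(EE), axis=0), np.std(EE, axis=0, ddof=1)

# Illustrative test function (assumed): strong x0 effect, x1-x2 interaction, inert x3
f = lambda X: 5.0 * X[:, 0] + X[:, 1] * X[:, 2] + 0.0 * X[:, 3]
mu_star, sigma = elementary_effects(f, dim=4, rng=3)
print("mu*:", np.round(mu_star, 3))    # large for x0, moderate for x1/x2, ~0 for the inert x3
print("sigma:", np.round(sigma, 3))    # nonzero for the interacting factors x1 and x2
```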

  7. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.

  8. Predicted Infiltration for Sodic/Saline Soils from Reclaimed Coastal Areas: Sensitivity to Model Parameters

    PubMed Central

    She, Dongli; Yu, Shuang'en; Shao, Guangcheng

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data were collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model simulated the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity since the latter was also affected by the physical chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils. PMID:25197699

  9. Predicted infiltration for sodic/saline soils from reclaimed coastal areas: sensitivity to model parameters.

    PubMed

    Liu, Dongdong; She, Dongli; Yu, Shuang'en; Shao, Guangcheng; Chen, Dan

    2014-01-01

    This study was conducted to assess the influences of soil surface conditions and initial soil water content on water movement in unsaturated sodic soils of reclaimed coastal areas. Data were collected from column experiments in which two soils from a Chinese coastal area reclaimed in 2007 (Soil A, saline) and 1960 (Soil B, nonsaline) were used, with bulk densities of 1.4 or 1.5 g/cm³. A 1D-infiltration model was created using a finite difference method and its sensitivity to hydraulic-related parameters was tested. The model simulated the measured data well. The results revealed that soil compaction notably affected the water retention of both soils. Model simulations showed that increasing the ponded water depth had little effect on the infiltration process, since the increases in cumulative infiltration and wetting front advancement rate were small. However, the wetting front advancement rate increased and the cumulative infiltration decreased to a greater extent when θ₀ was increased. Soil physical quality was described better by the S parameter than by the saturated hydraulic conductivity since the latter was also affected by the physical chemical effects on clay swelling occurring in the presence of different levels of electrolytes in the soil solutions of the two soils.

  10. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, may also become computationally expensive in the case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence testing method is independent of the SA method, we applied it to three widely used global SA methods: the screening method known as Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different

  11. Quantitative analysis of iris parameters in keratoconus patients using optical coherence tomography.

    PubMed

    Bonfadini, Gustavo; Arora, Karun; Vianna, Lucas M; Campos, Mauro; Friedman, David; Muñoz, Beatriz; Jun, Albert S

    2015-01-01

    To investigate the relationship between quantitative iris parameters and the presence of keratoconus. Cross-sectional observational study that included 15 affected eyes of 15 patients with keratoconus and 26 eyes of 26 normal age- and sex-matched controls. Iris parameters (area, thickness, and pupil diameter) of affected and unaffected eyes were measured under standardized light and dark conditions using anterior segment optical coherence tomography (AS-OCT). To identify optimal iris thickness cutoff points to maximize the sensitivity and specificity when discriminating keratoconus eyes from normal eyes, the analysis included the use of receiver operating characteristic (ROC) curves. Iris thickness and area were lower in keratoconus eyes than in normal eyes. The mean thickness at the pupillary margin under both light and dark conditions was found to be the best parameter for discriminating normal patients from keratoconus patients. Diagnostic performance was assessed by the area under the ROC curve (AROC), which had a value of 0.8256 with 80.0% sensitivity and 84.6% specificity, using a cutoff of 0.4125 mm. The sensitivity increased to 86.7% when a cutoff of 0.4700 mm was used. In our sample, iris thickness was lower in keratoconus eyes than in normal eyes. These results suggest that tomographic parameters may provide novel adjunct approaches for keratoconus screening.

  12. Carbon and water flux responses to physiology by environment interactions: a sensitivity analysis of variation in climate on photosynthetic and stomatal parameters

    NASA Astrophysics Data System (ADS)

    Bauerle, William L.; Daniels, Alex B.; Barnard, David M.

    2014-05-01

    Sensitivity of carbon uptake and water use estimates to changes in physiology was determined with a coupled photosynthesis and stomatal conductance (gs) model, linked to canopy microclimate with a spatially explicit scheme (MAESTRA). The sensitivity analyses were conducted over the range of intraspecific physiology parameter variation observed for Acer rubrum L. and temperate hardwood C3 (C3) vegetation across the following climate conditions: carbon dioxide concentration 200-700 ppm, photosynthetically active radiation 50-2,000 μmol m⁻² s⁻¹, air temperature 5-40 °C, relative humidity 5-95 %, and wind speed at the top of the canopy 1-10 m s⁻¹. Five key physiological inputs [quantum yield of electron transport (α), minimum stomatal conductance (g0), stomatal sensitivity to the marginal water cost of carbon gain (g1), maximum rate of electron transport (Jmax), and maximum carboxylation rate of Rubisco (Vcmax)] changed carbon and water flux estimates ≥15 % in response to climate gradients; variation in α, Jmax, and Vcmax input resulted in up to ~50 and 82 % intraspecific and C3 photosynthesis estimate output differences respectively. Transpiration estimates were affected up to ~46 and 147 % by differences in intraspecific and C3 g1 and g0 values, two parameters previously overlooked in modeling land-atmosphere carbon and water exchange. We show that a variable environment, within a canopy or along a climate gradient, changes the spatial parameter effects of g0, g1, α, Jmax, and Vcmax in photosynthesis-gs models. Since variation in physiology parameter input effects is dependent on climate, this approach can be used to assess the geographical importance of key physiology model inputs when estimating large scale carbon and water exchange.

  13. Sensitivity analysis of coupled processes and parameters on the performance of enhanced geothermal systems.

    PubMed

    Pandey, S N; Vishal, Vikram

    2017-12-06

    3-D modeling of coupled thermo-hydro-mechanical (THM) processes in enhanced geothermal systems was performed using a control volume finite element code. For the first time, a comparative analysis of the effects of coupled processes, operational parameters and reservoir parameters on heat extraction was conducted. We found that a significant temperature drop and fluid overpressure occurred inside the reservoir/fracture, which affected the transport behavior of the fracture. The spatio-temporal variations of fracture aperture greatly impacted the thermal drawdown and consequently the net energy output. The results showed that maximum aperture evolution occurred near the injection zone instead of the production zone. Opening of the fracture reduced the injection pressure required to circulate a fixed mass of water. The thermal breakthrough and heat extraction strongly depend on the injection mass flow rate, well distances, reservoir permeability and geothermal gradients. High permeability caused higher water loss, leading to reduced heat extraction. From the results of TH vs THM process simulations, we conclude that appropriate coupling is vital and can impact the estimates of net heat extraction. This study can help in identifying the critical operational parameters and in process optimization for enhanced energy extraction from a geothermal system.

  14. Probabilistic parameter estimation of activated sludge processes using Markov Chain Monte Carlo.

    PubMed

    Sharifi, Soroosh; Murthy, Sudhir; Takács, Imre; Massoudieh, Arash

    2014-03-01

    One of the most important challenges in making activated sludge models (ASMs) applicable to design problems is identifying the values of their many stoichiometric and kinetic parameters. When wastewater characteristics data from full-scale biological treatment systems are used for parameter estimation, several sources of uncertainty, including uncertainty in measured data, external forcing (e.g. influent characteristics), and model structural errors, influence the value of the estimated parameters. This paper presents a Bayesian hierarchical modeling framework for the probabilistic estimation of activated sludge process parameters. The method provides the joint probability density functions (JPDFs) of stoichiometric and kinetic parameters by updating prior information regarding the parameters obtained from expert knowledge and literature. The method also provides the posterior correlations between the parameters, as well as a measure of sensitivity of the different constituents with respect to the parameters. This information can be used to design experiments to provide higher information content regarding certain parameters. The method is illustrated using the ASM1 model to describe synthetically generated data from a hypothetical biological treatment system. The results indicate that data from full-scale systems can narrow down the ranges of some parameters substantially whereas the amount of information they provide regarding other parameters is small, due to either large correlations between some of the parameters or a lack of sensitivity with respect to the parameters. Copyright © 2013 Elsevier Ltd. All rights reserved.
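
    A minimal random-walk Metropolis sketch of the kind of probabilistic parameter estimation described above, applied to a deliberately simple first-order decay model rather than ASM1; the data, flat priors, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: substrate decays as S(t) = S0 * exp(-k * t), observed with Gaussian noise
t = np.linspace(0.0, 10.0, 30)
k_true, S0_true, noise_sd = 0.4, 20.0, 0.5
y = S0_true * np.exp(-k_true * t) + rng.normal(0.0, noise_sd, t.size)

def log_post(theta):
    k, S0 = theta
    if k <= 0.0 or S0 <= 0.0:                       # flat priors on (0, inf); reject otherwise
        return -np.inf
    resid = y - S0 * np.exp(-k * t)
    return -0.5 * np.sum(resid ** 2) / noise_sd ** 2

# Random-walk Metropolis sampler
n_iter, step = 20_000, np.array([0.02, 0.3])
chain = np.empty((n_iter, 2))
theta, lp = np.array([1.0, 10.0]), log_post([1.0, 10.0])
for it in range(n_iter):
    prop = theta + step * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:        # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[it] = theta

burn = chain[5_000:]
print("posterior mean (k, S0):", burn.mean(axis=0))   # near (0.4, 20.0)
print("posterior correlation:", np.corrcoef(burn.T)[0, 1])
```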

  15. Developing a methodology for the inverse estimation of root architectural parameters from field based sampling schemes

    NASA Astrophysics Data System (ADS)

    Morandage, Shehan; Schnepf, Andrea; Vanderborght, Jan; Javaux, Mathieu; Leitner, Daniel; Laloy, Eric; Vereecken, Harry

    2017-04-01

    highly nonlinear effect to the model output. The most sensitive parameters will be subject to inverse estimation from the virtual field sampling data using DREAMzs algorithm. The estimated parameters can then be compared with the ground truth in order to determine the suitability of the sampling schemes to identify specific traits or parameters of the root growth model.

  16. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    The aim was to identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from patients presenting with bronchopulmonary cancer who were occupationally exposed to carcinogens were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei, via image cytometry. A method was developed for analyzing distribution quantiles, compared with simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of parameters enabled us to distinguish 13 of 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish two population types, with both sensitivity and specificity > 70%. Three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher sensitivity and specificity values were observed for at least one of the corresponding quantiles. Analysis of modification in morphologic parameters via distribution analysis proved promising for screening bronchopulmonary cancer from sputum.

  17. Validation and Parameter Sensitivity Tests for Reconstructing Swell Field Based on an Ensemble Kalman Filter

    PubMed Central

    Wang, Xuan; Tandeo, Pierre; Fablet, Ronan; Husson, Romain; Guan, Lei; Chen, Ge

    2016-01-01

    The swell propagation model built on geometric optics is known to work well when simulating swells radiated from a distant storm. Based on this simple approximation, satellites have acquired a large number of samples of basin-traversing swells induced by fierce storms in mid-latitudes. How to routinely reconstruct swell fields from these irregularly sampled observations via known swell propagation principles requires further examination. In this study, we apply 3-h interval pseudo SAR observations in the ensemble Kalman filter (EnKF) to reconstruct a swell field in an ocean basin, and compare it with buoy swell partitions and polynomial regression results. As validated against in situ measurements, EnKF works well in terms of spatial–temporal consistency in far-field swell propagation scenarios. Using this framework, we further address the influence of EnKF parameters, and perform a sensitivity analysis to evaluate estimations made under different sets of parameters. Such analysis is of key interest with respect to future multiple-source routinely recorded swell field data. Satellite-derived swell data can serve as a valuable complementary dataset to in situ or wave re-analysis datasets. PMID:27898005

  18. Spectral properties of identified polarized-light sensitive interneurons in the brain of the desert locust Schistocerca gregaria.

    PubMed

    Kinoshita, Michiyo; Pfeiffer, Keram; Homberg, Uwe

    2007-04-01

    Many migrating animals employ a celestial compass mechanism for spatial navigation. Behavioral experiments in bees and ants have shown that sun compass navigation may rely on the spectral gradient in the sky as well as on the pattern of sky polarization. While polarized-light sensitive interneurons (POL neurons) have been identified in the brain of several insect species, there are at present no data on the neural basis of coding the spectral gradient of the sky. In the present study we have analyzed the chromatic properties of two identified POL neurons in the brain of the desert locust. Both neurons, termed TuTu1 and LoTu1, arborize in the anterior optic tubercle and respond to unpolarized light as well as to polarized light. We show here that the polarized-light response of both types of neuron relies on blue-sensitive photoreceptors. Responses to unpolarized light depended on stimulus position and wavelength. Dorsal unpolarized blue light inhibited the neurons, while stimulation from the ipsilateral side resulted in opponent responses to UV light and green light. While LoTu1 was inhibited by UV light and was excited by green light, one subtype of TuTu1 was excited by UV and inhibited by green light. In LoTu1 the sensitivity to polarized light was at least 2 log units higher than the response to unpolarized light stimuli. Taken together, the spatial and chromatic properties of the neurons may be suited to signal azimuthal directions based on a combination of the spectral gradient and the polarization pattern of the sky.

  19. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2015-08-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  20. Computationally inexpensive identification of noninformative model parameters by sequential screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis

    2016-04-01

    Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.

  1. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
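
    The standard matrix-model quantities behind this discussion (Caswell's formulas, in my notation): with w and v the right and left eigenvectors of the projection matrix A associated with λ, the sensitivity and elasticity of λ to the demographic rate a_ij are

```latex
\lambda\,\mathbf{w} = A\,\mathbf{w}, \qquad \mathbf{v}^{\mathsf T} A = \lambda\,\mathbf{v}^{\mathsf T},
\qquad
s_{ij} \;=\; \frac{\partial \lambda}{\partial a_{ij}} \;=\; \frac{v_i\, w_j}{\langle \mathbf{v}, \mathbf{w}\rangle},
\qquad
e_{ij} \;=\; \frac{a_{ij}}{\lambda}\,\frac{\partial \lambda}{\partial a_{ij}} .
```

Sensitivities measure the response of λ to additive changes in a_ij and elasticities to proportional changes, which is exactly the choice of scale the paper examines.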

  2. Surgeon Reported Outcome Measure for Spine Trauma: An International Expert Survey Identifying Parameters Relevant for the Outcome of Subaxial Cervical Spine Injuries.

    PubMed

    Sadiqi, Said; Verlaan, Jorrit-Jan; Lehr, A Mechteld; Dvorak, Marcel F; Kandziora, Frank; Rajasekaran, S; Schnake, Klaus J; Vaccaro, Alexander R; Oner, F Cumhur

    2016-12-15

    International web-based survey. To identify clinical and radiological parameters that spine surgeons consider most relevant when evaluating clinical and functional outcomes of subaxial cervical spine trauma patients. Although an outcome instrument that reflects the patients' perspective is imperative, there is also a need for a surgeon reported outcome measure to reflect the clinicians' perspective adequately. A cross-sectional online survey was conducted among a selected number of spine surgeons from all five AOSpine International world regions. They were asked to indicate the relevance of a compilation of 21 parameters, both for the short term (3 mo-2 yr) and long term (≥2 yr), on a five-point scale. The responses were analyzed using descriptive statistics, frequency analysis, and Kruskal-Wallis test. Of the 279 AOSpine International and International Spinal Cord Society members who received the survey, 108 (38.7%) participated in the study. Ten parameters were identified as relevant both for short term and long term by at least 70% of the participants. Neurological status, implant failure within 3 months, and patient satisfaction were most relevant. Bony fusion was the only parameter for the long term, whereas five parameters were identified for the short term. The remaining six parameters were not deemed relevant. Minor differences were observed when analyzing the responses according to each world region, or spine surgeons' degree of experience. The perspective of an international sample of highly experienced spine surgeons was explored on the most relevant parameters to evaluate and predict outcomes of subaxial cervical spine trauma patients. These results form the basis for the development of a disease-specific surgeon reported outcome measure, which will be a helpful tool in research and clinical practice.

  3. Pain Sensitivity Risk Factors for Chronic TMD: Descriptive Data and Empirically Identified Domains from the OPPERA Case Control Study

    PubMed Central

    Greenspan, Joel D.; Slade, Gary D.; Bair, Eric; Dubner, Ronald; Fillingim, Roger B.; Ohrbach, Richard; Knott, Charlie; Mulkey, Flora; Rothwell, Rebecca; Maixner, William

    2011-01-01

    Many studies report that people with temporomandibular disorders (TMD) are more sensitive to experimental pain stimuli than TMD-free controls. Such differences in sensitivity are observed in remote body sites as well as in the orofacial region, suggesting a generalized upregulation of nociceptive processing in TMD cases. This large case-control study of 185 adults with TMD and 1,633 TMD-free controls measured sensitivity to painful pressure, mechanical cutaneous, and heat stimuli, using multiple testing protocols. Of an unprecedented 36 experimental pain measures, 28 showed statistically significantly greater pain sensitivity in TMD cases than controls. The largest effects were seen for pressure pain thresholds at multiple body sites and cutaneous mechanical pain threshold. The other mechanical cutaneous pain measures and many of the heat pain measures showed significant differences, but with lesser effect sizes. Principal component analysis (PCA) of the pain measures derived from 1,633 controls identified five components labeled: (1) heat pain ratings, (2) heat pain aftersensations and tolerance, (3) mechanical cutaneous pain sensitivity, (4) pressure pain thresholds, and (5) heat pain temporal summation. These results demonstrate that, compared to TMD-free controls, chronic TMD cases are more sensitive to many experimental noxious stimuli at extra-cranial body sites, and provide for the first time the ability to directly compare the case-control effect sizes of a wide range of pain sensitivity measures. PMID:22074753

  4. Single measure and gated screening approaches for identifying students at-risk for academic problems: Implications for sensitivity and specificity.

    PubMed

    Van Norman, Ethan R; Nelson, Peter M; Klingbeil, David A

    2017-09-01

    Educators need recommendations to improve screening practices without limiting students' instructional opportunities. Repurposing previous years' state test scores has shown promise in identifying at-risk students within multitiered systems of support. However, researchers have not directly compared the diagnostic accuracy of previous years' state test scores with data collected during fall screening periods to identify at-risk students. In addition, the benefit of using previous state test scores in conjunction with data from a separate measure to identify at-risk students has not been explored. The diagnostic accuracy of 3 types of screening approaches was tested to predict proficiency on end-of-year high-stakes assessments: state test data obtained during the previous year, data from a different measure administered in the fall, and both measures combined (i.e., a gated model). Extant reading and math data (N = 2,996) from 10 schools in the Midwest were analyzed. When used alone, both measures yielded similar sensitivity and specificity values. The gated model yielded superior specificity values compared with using either measure alone, at the expense of sensitivity. Implications, limitations, and ideas for future research are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
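
    As a hedged illustration of the trade-off reported above, the snippet below computes sensitivity and specificity for two synthetic screening measures used alone and in a gated (flagged-by-both) rule; the prevalence and hit rates are invented and do not reflect the study's data.

```python
import numpy as np

def sens_spec(flagged, at_risk):
    """Sensitivity: flagged among truly at-risk. Specificity: not flagged among not at-risk."""
    flagged, at_risk = np.asarray(flagged, bool), np.asarray(at_risk, bool)
    sens = (flagged & at_risk).sum() / at_risk.sum()
    spec = (~flagged & ~at_risk).sum() / (~at_risk).sum()
    return round(sens, 3), round(spec, 3)

rng = np.random.default_rng(1)
n = 2000
at_risk = rng.random(n) < 0.2                                   # hypothetical true status
state_test = rng.random(n) < np.where(at_risk, 0.80, 0.15)      # prior-year state test flag
fall_screen = rng.random(n) < np.where(at_risk, 0.78, 0.18)     # fall screening flag

print("state test alone :", sens_spec(state_test, at_risk))
print("fall screen alone:", sens_spec(fall_screen, at_risk))
# Gated rule: a student must be flagged by BOTH measures, raising specificity at the cost of sensitivity.
print("gated (both)     :", sens_spec(state_test & fall_screen, at_risk))
```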

  5. Identifying core nursing sensitive outcomes associated with the most frequently used North American Nursing Diagnosis Association-International nursing diagnoses for patients with cerebrovascular disease in Korea.

    PubMed

    Lee, Eunjoo; Park, Hyejin; Whyte, James; Kim, Youngae; Park, Sang Youn

    2014-12-01

    The purpose of this study was to identify the core nursing sensitive outcomes associated with the five most frequently used North American Nursing Diagnosis Association-International nursing diagnoses for patients with cerebrovascular disease, using the Nursing Outcomes Classification (NOC). A cross-sectional survey design was used. First, nursing problems were identified through a review of 78 charts, and then linkages between each of the nursing problems and nursing sensitive outcomes were established and validated by an expert group for the questionnaires. Second, 80 nurses working in the neurosurgical intensive care unit and neurosurgery departments of five Korean hospitals were asked to evaluate how important each outcome is and how often each outcome is used to evaluate patient outcomes, using a 5-point Likert scale. Although there were some differences in the core outcomes identified for each nursing problem, consciousness, cognitive orientation, neurologic status and communication were considered the most critical nursing sensitive outcomes for patients with cerebrovascular disease. Core nursing sensitive outcomes of patients with cerebrovascular disease were identified using the NOC to measure the effectiveness of nursing care. © 2013 Wiley Publishing Asia Pty Ltd.

  6. Ultra-sensitive Sequencing Identifies High Prevalence of Clonal Hematopoiesis-Associated Mutations throughout Adult Life.

    PubMed

    Acuna-Hidalgo, Rocio; Sengul, Hilal; Steehouwer, Marloes; van de Vorst, Maartje; Vermeulen, Sita H; Kiemeney, Lambertus A L M; Veltman, Joris A; Gilissen, Christian; Hoischen, Alexander

    2017-07-06

    Clonal hematopoiesis results from somatic mutations in hematopoietic stem cells, which give an advantage to mutant cells, driving their clonal expansion and potentially leading to leukemia. The acquisition of clonal hematopoiesis-driver mutations (CHDMs) occurs with normal aging and these mutations have been detected in more than 10% of individuals ≥65 years. We aimed to examine the prevalence and characteristics of CHDMs throughout adult life. We developed a targeted re-sequencing assay combining high-throughput with ultra-high sensitivity based on single-molecule molecular inversion probes (smMIPs). Using smMIPs, we screened more than 100 loci for CHDMs in more than 2,000 blood DNA samples from population controls between 20 and 69 years of age. Loci screened included 40 regions known to drive clonal hematopoiesis when mutated and 64 novel candidate loci. We identified 224 somatic mutations throughout our cohort, of which 216 were coding mutations in known driver genes (DNMT3A, JAK2, GNAS, TET2, and ASXL1), including 196 point mutations and 20 indels. Our assay's improved sensitivity allowed us to detect mutations with variant allele frequencies as low as 0.001. CHDMs were identified in more than 20% of individuals 60 to 69 years of age and in 3% of individuals 20 to 29 years of age, approximately double the previously reported prevalence despite screening a limited set of loci. Our findings support the occurrence of clonal hematopoiesis-associated mutations as a widespread mechanism linked with aging, suggesting that mosaicism as a result of clonal evolution of cells harboring somatic mutations is a universal mechanism occurring at all ages in healthy humans. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  7. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V.; Rooney, William D.; Garzotto, Mark G.; Springer, Charles S.

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (Ktrans) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging

  8. Relative sensitivities of DCE-MRI pharmacokinetic parameters to arterial input function (AIF) scaling.

    PubMed

    Li, Xin; Cai, Yu; Moloney, Brendan; Chen, Yiyi; Huang, Wei; Woods, Mark; Coakley, Fergus V; Rooney, William D; Garzotto, Mark G; Springer, Charles S

    2016-08-01

    Dynamic-Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) has been used widely for clinical applications. Pharmacokinetic modeling of DCE-MRI data that extracts quantitative contrast reagent/tissue-specific model parameters is the most investigated method. One of the primary challenges in pharmacokinetic analysis of DCE-MRI data is accurate and reliable measurement of the arterial input function (AIF), which is the driving force behind all pharmacokinetics. Because of effects such as inflow and partial volume averaging, AIF measured from individual arteries sometimes require amplitude scaling for better representation of the blood contrast reagent (CR) concentration time-courses. Empirical approaches like blinded AIF estimation or reference tissue AIF derivation can be useful and practical, especially when there is no clearly visible blood vessel within the imaging field-of-view (FOV). Similarly, these approaches generally also require magnitude scaling of the derived AIF time-courses. Since the AIF varies among individuals even with the same CR injection protocol and the perfect scaling factor for reconstructing the ground truth AIF often remains unknown, variations in estimated pharmacokinetic parameters due to varying AIF scaling factors are of special interest. In this work, using simulated and real prostate cancer DCE-MRI data, we examined parameter variations associated with AIF scaling. Our results show that, for both the fast-exchange-limit (FXL) Tofts model and the water exchange sensitized fast-exchange-regime (FXR) model, the commonly fitted CR transfer constant (K(trans)) and the extravascular, extracellular volume fraction (ve) scale nearly proportionally with the AIF, whereas the FXR-specific unidirectional cellular water efflux rate constant, kio, and the CR intravasation rate constant, kep, are both AIF scaling insensitive. This indicates that, for DCE-MRI of prostate cancer and possibly other cancers, kio and kep may be more suitable imaging

  9. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290

  10. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  11. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  12. Identifying Determinants of PARP Inhibitor Sensitivity in Ovarian Cancer

    DTIC Science & Technology

    2015-10-01

    such as those lacking functional BRCA1 are highly sensitive to poly(ADP-ribose) polymerase (PARP) inhibitors. Ovarian cancer patients that harbored... Dr. Johnson’s mentor, Dr. Jeffrey Boyd, left Fox Chase for Florida International

  13. [Weight parameters of water quality impact and risk grade determination of water environmental sensitive spots in Jiashan].

    PubMed

    Xie, Rong-Rong; Pang, Yong; Zhang, Qian; Chen, Ke; Sun, Ming-Yuan

    2012-07-01

    For the safety of the water environment in Jiashan county in Zhejiang Province, one-dimensional hydrodynamic and water quality models were established based on three large-scale monitoring campaigns of hydrology and water quality in Jiashan county. Three water environmental sensitive spots, the Hongqitang dam, the Chijia hydrological station and the Luxie pond, were selected to investigate the weight parameters of water quality impact and to determine risk grades. Results indicate the following. (1) The internal pollution impact in the Jiashan area was greater than the external: the average weight parameter of internal chemical oxygen demand (COD) pollution is 55.3%, of internal ammonia nitrogen (NH4+-N) 67.4%, and of internal total phosphorus (TP) 63.1%. The non-point pollution impact was greater than the point pollution impact: the average weight parameter of non-point COD pollution is 53.7%, of non-point NH4+-N 65.9%, and of non-point TP 57.8%. (2) The Hongqitang dam and the Chijia hydrological station are at middle risk. The Luxie pond is also at middle risk in August, whereas in April and December its risk is low. Strategic decisions are suggested to guarantee water environment security and social and economic security in the study area.

  14. A Sensitivity Analysis of an Inverted Pendulum Balance Control Model.

    PubMed

    Pasma, Jantsje H; Boonstra, Tjitske A; van Kordelaar, Joost; Spyropoulou, Vasiliki V; Schouten, Alfred C

    2017-01-01

    Balance control models are used to describe balance behavior in health and disease. We identified the unique contribution and relative importance of each parameter of a commonly used balance control model, the Independent Channel (IC) model, to identify which parameters are crucial to describe balance behavior. The balance behavior was expressed by transfer functions (TFs), representing the relationship between sensory perturbations and body sway as a function of frequency, in terms of amplitude (i.e., magnitude) and timing (i.e., phase). The model included an inverted pendulum controlled by a neuromuscular system, described by several parameters. Local sensitivity of each parameter was determined for both the magnitude and phase using partial derivatives. Both the intrinsic stiffness and proportional gain shape the magnitude at low frequencies (0.1-1 Hz). The derivative gain shapes the peak and slope of the magnitude between 0.5 and 0.9 Hz. The sensory weight influences the overall magnitude, and does not have any effect on the phase. The effect of the time delay becomes apparent in the phase above 0.6 Hz. The force feedback parameters and intrinsic stiffness have a small effect compared with the other parameters. All parameters shape the TF magnitude and phase and therefore play a role in the balance behavior. The sensory weight, time delay, derivative gain, and the proportional gain have a unique effect on the TFs, while the force feedback parameters and intrinsic stiffness contribute less. More insight in the unique contribution and relative importance of all parameters shows which parameters are crucial and critical to identify underlying differences in balance behavior between different patient groups.
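
    The local sensitivities described in this record are partial derivatives of the transfer function magnitude and phase with respect to the model parameters. The sketch below illustrates the computation with central finite differences on a strongly simplified, hypothetical delayed PD-controlled inverted pendulum; it is not the Independent Channel model, and all parameter values are invented.

```python
import numpy as np

def balance_tf(f_hz, p):
    """Toy perturbation-to-sway transfer function of a delayed PD-controlled inverted pendulum."""
    s = 1j * 2.0 * np.pi * f_hz
    controller = (p["Kp"] + p["Kd"] * s) * np.exp(-p["tau"] * s)   # proportional/derivative gains + delay
    plant = 1.0 / (p["J"] * s ** 2 - p["mgh"])                     # inverted pendulum dynamics
    loop = controller * plant
    return loop / (1.0 + loop)

def local_sensitivity(f_hz, p, name, rel_step=1e-3):
    """Partial derivatives of |TF| and phase(TF) with respect to one parameter (central differences)."""
    h = rel_step * abs(p[name])
    hi = dict(p, **{name: p[name] + h})
    lo = dict(p, **{name: p[name] - h})
    tf_hi, tf_lo = balance_tf(f_hz, hi), balance_tf(f_hz, lo)
    d_mag = (np.abs(tf_hi) - np.abs(tf_lo)) / (2.0 * h)
    d_phase = (np.angle(tf_hi) - np.angle(tf_lo)) / (2.0 * h)
    return d_mag, d_phase

freqs = np.linspace(0.1, 1.0, 10)                                  # band discussed in the abstract
params = {"J": 80.0, "Kp": 900.0, "Kd": 300.0, "mgh": 700.0, "tau": 0.1}   # illustrative values only
for name in params:
    d_mag, _ = local_sensitivity(freqs, params, name)
    print(f"{name:4s} d|TF|/dp at 0.1, 0.2, 0.3 Hz: {np.round(d_mag[:3], 5)}")
```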

  15. Strategy to Identify and Test Putative Light-Sensitive Non-Opsin G-Protein-Coupled Receptors: A Case Study.

    PubMed

    Faggionato, Davide; Serb, Jeanne M

    2017-08-01

    The rise of high-throughput RNA sequencing (RNA-seq) and de novo transcriptome assembly has had a transformative impact on how we identify and study genes in the phototransduction cascade of non-model organisms. But the advantage provided by the nearly automated annotation of RNA-seq transcriptomes may at the same time hinder the possibility for gene discovery and the discovery of new gene functions. For example, standard functional annotation based on domain homology to known protein families can only confirm group membership, not identify the emergence of new biochemical function. In this study, we show the importance of developing a strategy that circumvents the limitations of semiautomated annotation and apply this workflow to photosensitivity as a means to discover non-opsin photoreceptors. We hypothesize that non-opsin G-protein-coupled receptor (GPCR) proteins may have chromophore-binding lysines in locations that differ from opsin. Here, we provide the first case study describing non-opsin light-sensitive GPCRs based on tissue-specific RNA-seq data of the common bay scallop Argopecten irradians (Lamarck, 1819). Using a combination of sequence analysis and three-dimensional protein modeling, we identified two candidate proteins. We tested their photochemical properties and provide evidence showing that these two proteins incorporate 11-cis and/or all-trans retinal and react to light photochemically. Based on this case study, we demonstrate that there is potential for the discovery of new light-sensitive GPCRs, and we have developed a workflow that starts from RNA-seq assemblies to the discovery of new non-opsin, GPCR-based photopigments.

  16. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.

  17. A Sensitivity Analysis of fMRI Balloon Model.

    PubMed

    Zayane, Chadia; Laleg-Kirati, Taous Meriem

    2015-01-01

    Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and for choosing certain paradigms, and completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked design experiment.

  18. On Finding and Using Identifiable Parameter Combinations in Nonlinear Dynamic Systems Biology Models and COMBOS: A Novel Web Implementation

    PubMed Central

    DiStefano, Joseph

    2014-01-01

    Parameter identifiability problems can plague biomodelers when they reach the quantification stage of development, even for relatively simple models. Structural identifiability (SI) is the primary question, usually understood as knowing which of P unknown biomodel parameters p1, …, pi, …, pP are, and which are not, quantifiable in principle from particular input-output (I-O) biodata. It is not widely appreciated that the same database also can provide quantitative information about the structurally unidentifiable (not quantifiable) subset, in the form of explicit algebraic relationships among the unidentifiable pi. Importantly, this is a first step toward finding what else is needed to quantify particular unidentifiable parameters of interest from new I-O experiments. We further develop, implement and exemplify novel algorithms that address and solve the SI problem for a practical class of ordinary differential equation (ODE) systems biology models, as a user-friendly and universally accessible web application (app), COMBOS. Users provide the structural ODE and output measurement models in one of two standard forms to a remote server via their web browser. COMBOS provides a list of uniquely and non-uniquely SI model parameters and, importantly, the combinations of parameters that are not individually SI. If non-uniquely SI, it also provides the maximum number of different solutions, with important practical implications. The behind-the-scenes symbolic differential algebra algorithms are based on computing Gröbner bases of model attributes established after some algebraic transformations, using the computer-algebra system Maxima. COMBOS was developed for facile instructional and research use as well as modeling. We use it in the classroom to illustrate SI analysis, and have simplified complex models of tumor suppressor p53 and hormone regulation, based on explicit computation of parameter combinations. It is illustrated and validated here for models of moderate complexity

  19. Unsteady hovering wake parameters identified from dynamic model tests, part 1

    NASA Technical Reports Server (NTRS)

    Hohenemser, K. H.; Crews, S. T.

    1977-01-01

    The development of a 4-bladed model rotor is reported that can be excited with a simple eccentric mechanism in progressing and regressing modes with either harmonic or transient inputs. Parameter identification methods were applied to the problem of extracting parameters for linear perturbation models, including rotor dynamic inflow effects, from the measured blade flapping responses to transient pitch stirring excitations. These perturbation models were then used to predict blade flapping response to other pitch stirring transient inputs, and rotor wake and blade flapping responses to harmonic inputs. The viability and utility of using parameter identification methods for extracting the perturbation models from transients are demonstrated through these combined analytical and experimental studies.

  20. Sensitivity Challenge of Steep Transistors

    NASA Astrophysics Data System (ADS)

    Ilatikhameneh, Hesameddin; Ameen, Tarek A.; Chen, ChinYi; Klimeck, Gerhard; Rahman, Rajib

    2018-04-01

    Steep transistors are crucial for lowering the power consumption of integrated circuits. However, the difficulties in experimentally achieving steepness beyond the Boltzmann limit have obscured the fundamental challenges in the application of these devices in integrated circuits. From a sensitivity perspective, an ideal switch should have a high sensitivity to the gate voltage and a low sensitivity to device design parameters such as oxide and body thicknesses. In this work, the conventional tunnel FET (TFET) and the negative capacitance FET are shown to suffer from high sensitivity to device design parameters, using full-band atomistic quantum transport simulations and analytical analysis. Although Dielectric Engineered (DE-) TFETs based on 2D materials show smaller sensitivity than conventional TFETs, they suffer from leakage. To mitigate this challenge, a novel DE-TFET design has been proposed and studied.

  1. Global Sensitivity Analysis for Process Identification under Model Uncertainty

    NASA Astrophysics Data System (ADS)

    Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.

    2015-12-01

    The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the process identification is usually based on a deterministic process conceptualization that uses a single model to represent a process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring the model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concept of Sobol sensitivity analysis and model averaging. Similar to the Sobol sensitivity analysis used to identify important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance considers both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic study of groundwater modeling that considers a recharge process and a parameterization process; each process has two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
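
    One simple way to operationalize "variance change when a process is fixed at its different conceptualizations" is sketched below: a toy groundwater-like output is computed for every combination of two alternative recharge models and two alternative parameterization models, and a first-order process sensitivity index is formed from the variance of the equally weighted conditional means. The process models, weights and output are all invented for illustration and are not the synthetic study of the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

def recharge(model_id, size):                 # two alternative recharge conceptualizations
    return rng.uniform(100, 200, size) if model_id == 0 else rng.normal(180, 10, size)

def conductivity(model_id, size):             # two alternative parameterizations
    return rng.lognormal(0.0, 0.3, size) if model_id == 0 else rng.lognormal(0.4, 0.1, size)

# Toy output for every combination of process conceptualizations (equal model weights).
samples = {(r, k): recharge(r, n) / conductivity(k, n) for r in (0, 1) for k in (0, 1)}
total_var = np.var(np.concatenate(list(samples.values())))

def process_sensitivity(axis):
    """Variance of the conditional mean output across one process's alternative models."""
    cond_means = np.array([np.mean([samples[key].mean() for key in samples if key[axis] == m])
                           for m in (0, 1)])
    return np.var(cond_means) / total_var     # equal weights: plain variance of the two means

print("recharge process  :", round(process_sensitivity(0), 3))
print("parameterization  :", round(process_sensitivity(1), 3))
```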

  2. Sensitivity and specificity of the Chinese version of the Schizotypal Personality Questionnaire-Brief for identifying undergraduate students susceptible to psychosis.

    PubMed

    Ma, Wei-Fen; Wu, Po-Lun; Yang, Shu-Ju; Cheng, Kuang-Fu; Chiu, Hsien-Tsai; Lane, Hsien-Yuan

    2010-12-01

    Early interventions can improve treatment outcomes for individuals with major psychiatric disorders and with nonspecific symptoms but increasingly impaired cognitive perception, emotions, and behaviour. One way used to identify people susceptible to psychosis is through the schizotypal personality trait. Persons with schizotypal characteristics have been identified with the widely used Schizotypal Personality Questionnaire-Brief. However, no suitable instruments are available to screen individuals in the Taiwanese population for evidence of early psychotic symptoms. The purpose of this study was to test the sensitivity and specificity of the Chinese version of the Schizotypal Personality Questionnaire-Brief for identifying undergraduate students' susceptibility to psychosis. Two-stage, cross-sectional survey design. The self-administered scale was tested in a convenience sample of 618 undergraduate students at a medical university in Taiwan. Among these students, 54 completed the scale 2 weeks apart for test-retest reliability, and 80 were tested to identify their susceptibility to psychosis. In Stage I, participants with scores in the top 6.5% were classified as the high-score group (n=40). The control group (n=40) was randomly selected from the remaining participants with scores <15 and matched by gender. These 80 students were asked to participate in psychiatric interviews in Stage II. The instrument was tested for reliability using intraclass correlation coefficients and the Kuder-Richardson formula 20. The instrument was analysed for optimal sensitivity and specificity using odds-ratio analysis and receiver operating characteristic curves. The 22-item Chinese version of the Schizotypal Personality Questionnaire-Brief had a 2-week test-retest reliability of 0.82 and internal consistency of 0.76. The optimal cut-off score was 17, with an odds ratio of 24.4 and an area under the receiver operating characteristic curve of 0.83. The instrument had a sensitivity of

  3. Soil and vegetation parameter uncertainty on future terrestrial carbon sinks

    NASA Astrophysics Data System (ADS)

    Kothavala, Z.; Felzer, B. S.

    2013-12-01

    We examine the role of the terrestrial carbon cycle in a changing climate at the centennial scale using an intermediate complexity Earth system climate model that includes the effects of dynamic vegetation and the global carbon cycle. We present a series of ensemble simulations to evaluate the sensitivity of simulated terrestrial carbon sinks to three key model parameters: (a) The temperature dependence of soil carbon decomposition, (b) the upper temperature limits on the rate of photosynthesis, and (c) the nitrogen limitation of the maximum rate of carboxylation of Rubisco. We integrated the model in fully coupled mode for a 1200-year spin-up period, followed by a 300-year transient simulation starting at year 1800. Ensemble simulations were conducted varying each parameter individually and in combination with other variables. The results of the transient simulations show that terrestrial carbon uptake is very sensitive to the choice of model parameters. Changes in net primary productivity were most sensitive to the upper temperature limit on the rate of photosynthesis, which also had a dominant effect on overall land carbon trends; this is consistent with previous research that has shown the importance of climatic suppression of photosynthesis as a driver of carbon-climate feedbacks. Soil carbon generally decreased with increasing temperature, though the magnitude of this trend depends on both the net primary productivity changes and the temperature dependence of soil carbon decomposition. Vegetation carbon increased in some simulations, but this was not consistent across all configurations of model parameters. Comparing to global carbon budget observations, we identify the subset of model parameters which are consistent with observed carbon sinks; this serves to narrow considerably the future model projections of terrestrial carbon sink changes in comparison with the full model ensemble.

  4. Simulation of the influence of aerosol particles on Stokes parameters of polarized skylight

    NASA Astrophysics Data System (ADS)

    Li, L.; Li, Z. Q.; Wendisch, M.

    2014-03-01

    The microphysical properties and chemical composition of aerosol particles determine the polarized radiance distribution in the atmosphere. In this paper, the influences of different aerosol properties (particle size, shape, and the real and imaginary parts of the refractive index) on the Stokes parameters of polarized skylight in the solar principal and almucantar planes are studied using vector radiative transfer simulations. The results show a high sensitivity of the normalized Stokes parameters to fine particle size, shape and the real part of the aerosol refractive index. It is possible to utilize the strength variations at the peak positions of the normalized Stokes parameters in the principal and almucantar planes to identify aerosol types.

  5. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  6. Influence of Population Variation of Physiological Parameters in Computational Models of Space Physiology

    NASA Technical Reports Server (NTRS)

    Myers, J. G.; Feola, A.; Werner, C.; Nelson, E. S.; Raykin, J.; Samuels, B.; Ethier, C. R.

    2016-01-01

    The earliest manifestations of Visual Impairment and Intracranial Pressure (VIIP) syndrome become evident after months of spaceflight and include a variety of ophthalmic changes, including posterior globe flattening and distension of the optic nerve sheath. Prevailing evidence links the occurrence of VIIP to the cephalic fluid shift induced by microgravity and the subsequent pressure changes around the optic nerve and eye. Deducing the etiology of VIIP is challenging due to the wide range of physiological parameters that may be influenced by spaceflight and are required to address a realistic spectrum of physiological responses. Here, we report on the application of an efficient approach to interrogating physiological parameter space through computational modeling. Specifically, we assess the influence of uncertainty in input parameters for two models of VIIP syndrome: a lumped-parameter model (LPM) of the cardiovascular and central nervous systems, and a finite-element model (FEM) of the posterior eye, optic nerve head (ONH) and optic nerve sheath. Methods: To investigate the parameter space in each model, we employed Latin hypercube sampling partial rank correlation coefficient (LHSPRCC) strategies. LHS techniques outperform Monte Carlo approaches by enforcing efficient sampling across the entire range of all parameters. The PRCC method estimates the sensitivity of model outputs to these parameters while adjusting for the linear effects of all other inputs. The LPM analysis addressed uncertainties in 42 physiological parameters, such as initial compartmental volume and nominal compartment percentage of total cardiac output in the supine state, while the FEM evaluated the effects on biomechanical strain from uncertainties in 23 material and pressure parameters for the ocular anatomy. Results and Conclusion: The LPM analysis identified several key factors including high sensitivity to the initial fluid distribution. The FEM study found that intraocular pressure and
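
    A minimal sketch of the LHS-PRCC workflow named above, assuming scipy's Latin hypercube sampler and a hand-rolled partial rank correlation (rank-transform the inputs and output, regress out all other inputs, correlate the residuals). The four "physiological parameters" and the placeholder output model are invented; they are not the LPM or FEM of the study.

```python
import numpy as np
from scipy.stats import qmc, rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with output y."""
    R = np.column_stack([rankdata(col) for col in X.T])
    ry = rankdata(y)
    coeffs = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # residuals after removing the linear effect of all other (ranked) inputs
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(coeffs)

# Latin hypercube sample of 4 hypothetical parameters on unit ranges.
X = qmc.LatinHypercube(d=4, seed=0).random(n=500)
# Placeholder "model": strong effect of x0, weaker nonlinear effect of x1, none of x2 and x3.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.01 * np.random.default_rng(0).normal(size=500)
print(prcc(X, y).round(3))
```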

  7. Inverse modeling of hydrologic parameters using surface flux and runoff observations in the Community Land Model

    NASA Astrophysics Data System (ADS)

    Sun, Y.; Hou, Z.; Huang, M.; Tian, F.; Leung, L. Ruby

    2013-12-01

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Both deterministic least-square fitting and stochastic Markov-chain Monte Carlo (MCMC)-Bayesian inversion approaches are evaluated by applying them to CLM4 at selected sites with different climate and soil conditions. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by the sampling-based stochastic inversion approaches provides significant improvements in the model simulations compared to using default CLM4 parameter values, and that as more information comes in, the predictive intervals (ranges of posterior distributions) of the calibrated parameters become narrower. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
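
    As a hedged sketch of the stochastic (MCMC-Bayesian) inversion idea, and not of CLM4 or the samplers used in the study, the snippet below calibrates two parameters of a toy runoff model against synthetic observations with a random-walk Metropolis sampler; the model form, priors and noise level are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def runoff_model(theta, precip):
    """Toy runoff model: theta = (storage coefficient k, runoff fraction f)."""
    k, f = theta
    return f * precip * (1.0 - np.exp(-k))

precip = rng.gamma(2.0, 5.0, size=100)                                     # synthetic forcing
obs = runoff_model((0.8, 0.4), precip) + rng.normal(0.0, 0.5, size=100)    # synthetic "observations"

def log_post(theta):
    k, f = theta
    if not (0.0 < k < 5.0 and 0.0 < f < 1.0):               # uniform priors via bounds
        return -np.inf
    resid = obs - runoff_model(theta, precip)
    return -0.5 * np.sum(resid ** 2) / 0.5 ** 2             # Gaussian likelihood, sigma = 0.5

theta = np.array([1.0, 0.5])
lp = log_post(theta)
chain = []
for _ in range(5000):                                       # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.05, size=2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain)[1000:]                              # discard burn-in
print("posterior mean:", chain.mean(axis=0), " spread:", chain.std(axis=0))
```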

  8. What do we mean by sensitivity analysis? The need for comprehensive characterization of "global" sensitivity in Earth and Environmental systems models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2015-05-01

    Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.

  9. Sea Oil Spill Detection Using Self-Similarity Parameter of Polarimetric SAR Data

    NASA Astrophysics Data System (ADS)

    Tong, S.; Chen, Q.; Liu, X.

    2018-04-01

    Ocean oil spills cause serious damage to the marine ecosystem. Polarimetric Synthetic Aperture Radar (SAR) is an important means of detecting oil spills on the sea surface. The major challenge is how to distinguish oil slicks from look-alikes effectively. In this paper, a new parameter called the self-similarity parameter, which is sensitive to the scattering mechanism of oil slicks, is introduced to identify oil slicks and reduce false alarms caused by look-alikes. The self-similarity parameter is small in oil slick regions and large in clean-sea and look-alike regions, so it can be used to separate oil slicks from look-alikes and open water. In addition, evaluations and comparisons were conducted with one Radarsat-2 image and two SIR-C images. The experimental results demonstrate the effectiveness of the self-similarity parameter for oil spill detection.

  10. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  11. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  12. Performance evaluation of spectral vegetation indices using a statistical sensitivity function

    USGS Publications Warehouse

    Ji, Lei; Peters, Albert J.

    2007-01-01

    A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
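
    A minimal sketch of the sensitivity function as defined above (first derivative of the fitted regression function divided by the standard error of the dependent-variable prediction), applied to a made-up saturating relationship between a vegetation index and leaf area index; the functional form, the data and the use of the residual standard error as the prediction error are simplifying assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Hypothetical data: a vegetation index (VI) saturating with leaf area index (LAI).
lai = np.sort(rng.uniform(0.0, 8.0, 200))
f = lambda x, a, b, c: a - b * np.exp(-c * x)               # regression function VI = f(LAI)
vi = f(lai, 0.9, 0.7, 0.6) + rng.normal(0.0, 0.03, lai.size)

(a, b, c), _ = curve_fit(f, lai, vi, p0=[1.0, 1.0, 0.5])    # fit the regression function
dvi_dlai = b * c * np.exp(-c * lai)                         # analytic first derivative of f

# Simplification: approximate the prediction standard error by the residual standard error.
se_pred = np.std(vi - f(lai, a, b, c), ddof=3)

sensitivity = dvi_dlai / se_pred                            # sensitivity function over the LAI range
print(sensitivity[:3].round(2), "...", sensitivity[-3:].round(2))   # high at low LAI, low near saturation
```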

  13. Key Parameters for Urban Heat Island Assessment in A Mediterranean Context: A Sensitivity Analysis Using the Urban Weather Generator Model

    NASA Astrophysics Data System (ADS)

    Salvati, Agnese; Palme, Massimo; Inostroza, Luis

    2017-10-01

    Although the Urban Heat Island (UHI) is a fundamental effect modifying the urban climate and has been widely studied, the relative weight of the parameters involved in its generation is still not clear. This paper investigates the hierarchy of importance of eight parameters responsible for UHI intensity in the Mediterranean context. Sensitivity analyses have been carried out using the Urban Weather Generator model, considering the range of variability of: 1) city radius, 2) urban morphology, 3) tree coverage, 4) anthropogenic heat from vehicles, 5) the buildings' cooling set point, 6) heat released to the canyon by HVAC systems, 7) wall construction properties and 8) the albedo of vertical and horizontal surfaces. Results show a clear hierarchy of significance among the considered parameters; urban morphology is the most important variable, causing a relative change of up to 120% in the annual average UHI intensity in the Mediterranean context. The impact of anthropogenic sources of heat such as cooling systems and vehicles is also significant. These results suggest that urban morphology parameters can be used as descriptors of the climatic performance of different urban areas, easing the work of urban planners and designers in understanding a complex physical phenomenon such as the UHI.

  14. Multisite-multivariable sensitivity analysis of distributed watershed models: enhancing the perceptions from computationally frugal methods

    USDA-ARS?s Scientific Manuscript database

    This paper assesses the impact of different likelihood functions in identifying sensitive parameters of the highly parameterized, spatially distributed Soil and Water Assessment Tool (SWAT) watershed model for multiple variables at multiple sites. The global one-factor-at-a-time (OAT) method of Morr...

  15. Non-animal sensitization testing: state-of-the-art.

    PubMed

    Vandebriel, Rob J; van Loveren, Henk

    2010-05-01

    Predictive tests to identify the sensitizing properties of chemicals are carried out using animals. In the European Union, timelines for phasing out many standard animal tests were established for cosmetics. Following this policy, the new European Chemicals Legislation (REACH) favors alternative methods, if validated and appropriate. In this review the authors aim to provide a state-of-the-art overview of alternative methods (in silico, in chemico, and in vitro) to identify contact and respiratory sensitizing capacity and, in some cases, to give a measure of potency. The past few years have seen major advances in QSAR (quantitative structure-activity relationship) models, where especially mechanism-based models have great potential; in peptide reactivity assays, where multiple parameters can be measured simultaneously, providing a more complete reactivity profile; and in cell-based assays. Several cell-based assays are in development, not only using different cell types, but also several specifically developed assays such as three-dimensionally (3D) reconstituted skin models, an antioxidant response reporter assay, determination of signaling pathways, and gene profiling. Some of these assays show relatively high sensitivity and specificity for a large number of sensitizers and should enter validation (or are indeed entering this process). Integrating multiple assays in a decision tree or integrated testing system is a next step, but has yet to be developed. Adequate risk assessment, however, is likely to require significantly more time and effort.

  16. Sensitivity analysis and calibration of a dynamic physically based slope stability model

    NASA Astrophysics Data System (ADS)

    Zieher, Thomas; Rutzinger, Martin; Schneider-Muntau, Barbara; Perzl, Frank; Leidinger, David; Formayer, Herbert; Geitner, Clemens

    2017-06-01

    Physically based modelling of slope stability on a catchment scale is still a challenging task. When applying a physically based model on such a scale (1 : 10 000 to 1 : 50 000), parameters with a high impact on the model result should be calibrated to account for (i) the spatial variability of parameter values, (ii) shortcomings of the selected model, (iii) uncertainties of laboratory tests and field measurements or (iv) parameters that cannot be derived experimentally or measured in the field (e.g. calibration constants). While systematic parameter calibration is a common task in hydrological modelling, this is rarely done using physically based slope stability models. In the present study a dynamic, physically based, coupled hydrological-geomechanical slope stability model is calibrated based on a limited number of laboratory tests and a detailed multitemporal shallow landslide inventory covering two landslide-triggering rainfall events in the Laternser valley, Vorarlberg (Austria). Sensitive parameters are identified based on a local one-at-a-time sensitivity analysis. These parameters (hydraulic conductivity, specific storage, angle of internal friction for effective stress, cohesion for effective stress) are systematically sampled and calibrated for a landslide-triggering rainfall event in August 2005. The identified model ensemble, including 25 behavioural model runs with the highest portion of correctly predicted landslides and non-landslides, is then validated with another landslide-triggering rainfall event in May 1999. The identified model ensemble correctly predicts the location and the supposed triggering timing of 73.0 % of the observed landslides triggered in August 2005 and 91.5 % of the observed landslides triggered in May 1999. Results of the model ensemble driven with raised precipitation input reveal a slight increase in areas potentially affected by slope failure. At the same time, the peak run-off increases more markedly, suggesting

  17. Identifying Determinants of PARP Inhibitor Sensitivity in Ovarian Cancer

    DTIC Science & Technology

    2016-10-01

    inhibitors. Ovarian cancer patients that harbored germ-line BRCA1 mutations treated with PARP inhibitors exhibited meaningful responses in early phase...hypothesized that a range of common ovarian cancer predisposing germ-line BRCA1 gene mutations produce semi-functional proteins that are capable of...we have started our work examining exome sequences and gene expression in PARPi sensitive and resistant cancer cell lines. I attended and presented

  18. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure that follows the model design intent and determines the sensitivity of CAD geometry directly with respect to its defining parameters.

  19. The sensitivity and significance analysis of parameters in the model of pH regulation on lactic acid production by Lactobacillus bulgaricus.

    PubMed

    Liu, Ke; Zeng, Xiangmiao; Qiao, Lei; Li, Xisheng; Yang, Yubo; Dai, Cuihong; Hou, Aiju; Xu, Dechang

    2014-01-01

    The excessive production of lactic acid by L. bulgaricus during yogurt storage is a phenomenon we always try to prevent. The methods used in industry either control the post-acidification inefficiently or kill the probiotics in yogurt. Genetic methods that change the activity of a single enzyme related to lactic acid metabolism leave the bacteria short of energy for growth, although they are efficient ways of controlling lactic acid production. A model of pH-induced promoter regulation on the production of lactic acid by L. bulgaricus was built. The modelled lactic acid metabolism without pH-induced promoter regulation fitted well with wild-type L. bulgaricus (R² for LAC = 0.943, R² for LA = 0.942). Both the local sensitivity analysis and the Sobol sensitivity analysis indicated that the parameters Tmax, GR, KLR, S, V0, V1 and dLR were sensitive. In order to guide future biology experiments, three adjustable parameters, KLR, V0 and V1, were chosen for further simulations. V0 had little effect on lactic acid production if the pH-induced promoter could be well induced when pH decreased to its threshold. KLR and V1 both exhibited great influence on the production of lactic acid. The proposed method of introducing a pH-induced promoter to regulate a repressor gene could restrain the synthesis of lactic acid if an appropriate promoter strength and/or an appropriate strength of the ribosome binding sequence (RBS) in the lacR gene is designed.
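
    As a rough illustration of how such a Sobol' analysis is commonly set up, the sketch below uses the SALib package with a toy stand-in for the lactic acid model; the parameter names are borrowed from the abstract, but their ranges and the surrogate response function are invented for the example.

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Illustrative ranges only; the real bounds would come from the ODE model.
      problem = {
          "num_vars": 3,
          "names": ["KLR", "V0", "V1"],
          "bounds": [[0.01, 1.0], [0.1, 10.0], [0.1, 10.0]],
      }

      def lactic_acid_proxy(x):
          """Toy stand-in for the simulated final lactic acid concentration."""
          klr, v0, v1 = x
          return v1 / (1.0 + 50.0 * klr) + 0.05 * v0

      X = saltelli.sample(problem, 1024)             # Saltelli cross-sampling design
      Y = np.apply_along_axis(lactic_acid_proxy, 1, X)
      Si = sobol.analyze(problem, Y)
      print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order indices
      print(dict(zip(problem["names"], np.round(Si["ST"], 3))))   # total-order indices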

  20. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    NASA Astrophysics Data System (ADS)

    Khawli, Toufik Al; Gebhardt, Sascha; Eppelt, Urs; Hermanns, Torsten; Kuhlen, Torsten; Schulz, Wolfgang

    2016-06-01

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help identify the most influential parameters, quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to substitute the numerical, physical model by an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide two global sensitivity measures: (i) the Elementary Effects method for screening the parameters, and (ii) the variance decomposition method for calculating the Sobol' indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial case study aimed at optimizing a drilling process using a Gaussian laser beam.
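
    The metamodel-then-analyze workflow described here can be illustrated with a small sketch: a Gaussian-process surrogate is fitted to a handful of expensive simulation runs, the cheap surrogate is then sampled heavily, and crude first-order sensitivity indices are estimated from it by conditional-mean binning (Var(E[Y|Xi])/Var(Y)). The simulator stand-in, sample sizes and kernel choice are assumptions for the example, not the laser-drilling model of the paper.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)

      def expensive_simulation(x):
          """Stand-in for a numerically expensive simulator (hypothetical)."""
          return np.sin(3 * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 2]

      # 1) fit the metamodel on a small design of experiments
      X_train = rng.uniform(0, 1, size=(60, 3))
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
      gp.fit(X_train, expensive_simulation(X_train))

      # 2) exercise the cheap surrogate heavily
      X_big = rng.uniform(0, 1, size=(20000, 3))
      y_big = gp.predict(X_big)

      # 3) crude first-order index: Var(E[Y | X_i]) / Var(Y), estimated by binning
      def first_order_index(xcol, y, bins=20):
          edges = np.linspace(0, 1, bins + 1)
          cond_means = [y[(xcol >= lo) & (xcol < hi)].mean()
                        for lo, hi in zip(edges[:-1], edges[1:])]
          return np.var(cond_means) / np.var(y)

      for i in range(3):
          print(f"parameter {i}: S1 ~ {first_order_index(X_big[:, i], y_big):.2f}")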

  1. Using fixed-parameter and random-parameter ordered regression models to identify significant factors that affect the severity of drivers' injuries in vehicle-train collisions.

    PubMed

    Dabbour, Essam; Easa, Said; Haider, Murtaza

    2017-10-01

    This study attempts to identify significant factors that affect the severity of drivers' injuries when colliding with trains at railroad-grade crossings by analyzing the individual-specific heterogeneity related to those factors over a period of 15 years. Both fixed-parameter and random-parameter ordered regression models were used to analyze records of all vehicle-train collisions that occurred in the United States from January 1, 2001 to December 31, 2015. For fixed-parameter ordered models, both probit and negative log-log link functions were used. The latter function accounts for the fact that lower injury severity levels are more probable than higher ones. Separate models were developed for heavy and light-duty vehicles. Higher train and vehicle speeds, female drivers, and young drivers (below the age of 21 years) were found to be consistently associated with higher severity of drivers' injuries for both heavy and light-duty vehicles. Furthermore, favorable weather, light-duty trucks (including pickup trucks, panel trucks, mini-vans, vans, and sports-utility vehicles), and senior drivers (above the age of 65 years) were found to be consistently associated with higher severity of drivers' injuries for light-duty vehicles only. All other factors (e.g. air temperature, the type of warning devices, darkness conditions, and highway pavement type) were found to be temporally unstable, which may explain the conflicting findings of previous studies related to those factors. Copyright © 2017 Elsevier Ltd. All rights reserved.
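
    A fixed-parameter ordered probit model of the kind used in this study can be fitted with the statsmodels OrderedModel class; the synthetic covariates and coefficients below are placeholders, and random-parameter ordered models require more specialized mixed-model tools than this sketch.

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(1)
      n = 2000
      df = pd.DataFrame({
          "train_speed": rng.uniform(10, 80, n),     # hypothetical covariates
          "vehicle_speed": rng.uniform(0, 60, n),
          "young_driver": rng.integers(0, 2, n),
      })
      latent = (0.03 * df.train_speed + 0.02 * df.vehicle_speed
                + 0.4 * df.young_driver + rng.normal(size=n))
      # Ordinal injury severity derived from the latent score (ordered categorical)
      df["severity"] = pd.cut(latent, [-np.inf, 1.5, 2.5, np.inf],
                              labels=["no injury", "injury", "fatal"])

      model = OrderedModel(df["severity"],
                           df[["train_speed", "vehicle_speed", "young_driver"]],
                           distr="probit")
      res = model.fit(method="bfgs", disp=False)
      print(res.summary())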

  2. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE PAGES

    Lu, Zhiming

    2018-01-30

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.
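
    A deliberately simple 1D analogue conveys the idea of head sensitivity to a shape parameter: for steady confined flow between constant-head boundaries at x = 0 and x = L, h(x) = h0 + (hL - h0)x/L, so the sensitivity to the boundary location is dh/dL = -(hL - h0)x/L², which a perturbed-domain finite difference should reproduce. This is only an illustrative check, not the continuous sensitivity equation derivation of the study.

      import numpy as np

      h0, hL, L = 10.0, 2.0, 100.0          # fixed heads (m) and boundary location (m)
      x = np.linspace(5.0, 95.0, 10)        # observation points inside the domain

      def head(x, L):
          """Steady 1D confined flow between constant-head boundaries at 0 and L."""
          return h0 + (hL - h0) * x / L

      dh_dL_analytic = -(hL - h0) * x / L**2          # analytical sensitivity

      dL = 0.01                                       # perturbed-domain finite difference
      dh_dL_fd = (head(x, L + dL) - head(x, L - dL)) / (2 * dL)

      print(np.max(np.abs(dh_dL_analytic - dh_dL_fd)))   # agreement to ~1e-7 or better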

  3. Sensitivity Analysis of Hydraulic Head to Locations of Model Boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming

    Sensitivity analysis is an important component of many model activities in hydrology. Numerous studies have been conducted in calculating various sensitivities. Most of these sensitivity analyses focus on the sensitivity of state variables (e.g. hydraulic head) to parameters representing medium properties such as hydraulic conductivity or prescribed values such as constant head or flux at boundaries, while few studies address the sensitivity of the state variables to shape parameters or design parameters that control the model domain. Instead, these shape parameters are typically assumed to be known in the model. In this study, based on the flow equation, we derive the equation (and its associated initial and boundary conditions) for the sensitivity of hydraulic head to shape parameters using the continuous sensitivity equation (CSE) approach. These sensitivity equations can be solved numerically in general or analytically in some simplified cases. Finally, the approach is demonstrated through two examples, and the results compare favorably with those from analytical solutions or numerical finite difference methods with perturbed model domains, while the numerical shortcomings of the finite difference method are avoided.

  4. Evaluation of MEGAN-CLM parameter sensitivity to predictions of isoprene emissions from an Amazonian rainforest

    NASA Astrophysics Data System (ADS)

    Holm, J. A.; Jardine, K.; Guenther, A. B.; Chambers, J. Q.; Tribuzy, E.

    2014-09-01

    Tropical trees are known to be large emitters of biogenic volatile organic compounds (BVOC), accounting for up to 75% of the global isoprene budget. Once in the atmosphere, these compounds influence multiple processes associated with air quality and climate. However, uncertainty in biogenic emissions is two-fold: (1) the environmental controls over isoprene emissions from tropical forests remain highly uncertain; and (2) our ability to accurately represent these environmental controls within models is lacking. This study evaluated the biophysical parameters that drive the global Model of Emissions of Gases and Aerosols from Nature (MEGAN) embedded in a biogeochemistry land surface model, the Community Land Model (CLM), with a focus on isoprene emissions from an Amazonian forest. A Monte Carlo analysis of the sensitivity of the 19 parameters in CLM that currently influence isoprene emissions showed that up to 61% of the uncertainty in mean isoprene emissions was caused by the uncertainty in the parameters related to leaf temperature. The eight parameters associated with photosynthetically active radiation (PAR) contributed in total to only 15% of the uncertainty in mean isoprene emissions. Leaf temperature was strongly correlated with isoprene emission activity (R2 = 0.89). However, when compared to field measurements in the Central Amazon, CLM failed to capture the upper 10-14 °C of leaf temperatures throughout the year (i.e., failed to represent ~32 to 46 °C), and the spread observed in field measurements was not reproduced in CLM. This is an important parameter to simulate accurately because of the non-linear response of emissions to temperature. MEGAN-CLM 4.0 overestimated isoprene emissions by 60% for a Central Amazon forest (5.7 mg m-2 h-1 vs. 3.6 mg m-2 h-1), but after a 28% reduction in leaf area index (LAI) in MEGAN-CLM 4.5, isoprene emissions were within 7% of observed data (3.8 mg m-2 h-1). When a slight adjustment to leaf temperature was made to

  5. Flexural modeling of the elastic lithosphere at an ocean trench: A parameter sensitivity analysis using analytical solutions

    NASA Astrophysics Data System (ADS)

    Contreras-Reyes, Eduardo; Garay, Jeremías

    2018-01-01

    The outer rise is a topographic bulge seaward of the trench at a subduction zone that is caused by bending and flexure of the oceanic lithosphere as subduction commences. The classic model of the flexure of the oceanic lithosphere w(x) is that of an elastic plate loaded at the trench axis and acted upon by a hydrostatic restoring force. The governing parameters are the elastic thickness Te, the shear force V0, and the bending moment M0. V0 and M0 are unknown variables that are typically replaced by other quantities such as the height of the fore-bulge, wb, and the half-width of the fore-bulge, (xb - xo). However, this method is difficult to implement in the presence of excessive topographic noise around the bulge of the outer rise. Here, we present an alternative method to the classic model, in which the lithospheric flexure w(x) is a function of the flexure at the trench axis w0, the initial dip angle of subduction β0, and the elastic thickness Te. In this investigation, we apply a sensitivity analysis to both methods in order to determine the impact of the differing parameters on the solution w(x). The parametric sensitivity analysis suggests that stable solutions for the alternative approach require relatively low β0 values (<15°), which are consistent with the initial dip angles observed in seismic velocity-depth models across convergent margins worldwide. The predicted flexure from both methods is compared with observed bathymetric profiles across the Izu-Mariana trench, where the old and cold Pacific plate is characterized by a pronounced outer rise bulge. The alternative method is the more suitable approach, assuming that accurate geometric information at the trench axis (i.e., w0 and β0) is available.
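
    For orientation, the textbook end-load solution for a broken elastic plate (as in Turcotte and Schubert) already reproduces the outer-rise geometry discussed here: w(x) = w0 exp(-x/α) cos(x/α), with flexural parameter α = [4D/(Δρ g)]^(1/4), rigidity D = E Te³/(12(1-ν²)), and the forebulge crest near x = 3πα/4. The sketch below evaluates that classic profile for assumed plate properties; it is not the (w0, β0, Te) parameterization proposed in the paper.

      import numpy as np

      E, nu = 7.0e10, 0.25          # Young's modulus (Pa), Poisson's ratio (assumed)
      Te = 30.0e3                   # elastic thickness (m)
      drho, g = 2300.0, 9.81        # mantle-water density contrast (kg/m3), gravity
      w0 = -4000.0                  # deflection at the trench axis (m)

      D = E * Te**3 / (12.0 * (1.0 - nu**2))        # flexural rigidity
      alpha = (4.0 * D / (drho * g)) ** 0.25        # flexural parameter (m)

      x = np.linspace(0.0, 600.0e3, 601)
      w = w0 * np.exp(-x / alpha) * np.cos(x / alpha)   # broken-plate, end-load profile

      x_bulge = 3.0 * np.pi * alpha / 4.0               # forebulge crest location
      print(f"alpha = {alpha / 1e3:.1f} km, forebulge at {x_bulge / 1e3:.1f} km, "
            f"height = {w[np.argmin(np.abs(x - x_bulge))]:.1f} m")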

  6. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia, in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify the parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more

  7. Cost-effectiveness of training rural providers to identify and treat patients at risk for fragility fractures.

    PubMed

    Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J

    2014-12-01

    This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70 % of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2 %), and all patients would have a lower incidence of fractures per 1,000 patient years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analysis, and in probabilistic sensitivity analyses training rural providers was cost-effective (less than $50,000/QALY) in 70 % of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness. It appears that provider education alone is

  8. Clinical and pathological tools for identifying microsatellite instability in colorectal cancer

    PubMed Central

    Krivokapić, Zoran; Marković, Srdjan; Antić, Jadranka; Dimitrijević, Ivan; Bojić, Daniela; Svorcan, Petar; Jojić, Njegica; Damjanović, Svetozar

    2012-01-01

    Aim To assess the practical accuracy of the revised Bethesda criteria (BGrev), the pathological predictive model (MsPath), and histopathological parameters for detection of the high-frequency microsatellite instability (MSI-H) phenotype in patients with colorectal carcinoma (CRC). Method Tumors from 150 patients with CRC were analyzed for MSI using a fluorescence-based pentaplex polymerase chain reaction technique. For all patients, we evaluated age, sex, family history of cancer, localization, tumor differentiation, mucin production, lymphocytic infiltration (TIL), and Union for International Cancer Control stage. Patients were classified according to the BGrev, and the groups were compared. The utility of the BGrev, MsPath, and clinical and histopathological parameters for predicting microsatellite tumor status was assessed by univariate logistic regression analysis and by calculating the sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values. Results Fifteen out of 45 patients who met and 4 of 105 patients who did not meet the BGrev criteria had MSI-H CRC. Sensitivity, specificity, PPV, and NPV for BGrev were 78.9%, 77%, 30%, and 70%, respectively. MSI histology (the third BGrev criterion without age limit) was as sensitive as BGrev, but more specific. The MsPath model was more sensitive than BGrev (86%), with similar specificity. Fulfillment of any BGrev criterion, mucinous differentiation, and right-sided CRC were singled out as independent factors for identifying MSI-H colorectal cancer. Conclusion The BGrev, the MsPath model, and MSI histology are useful tools for selecting patients for MSI testing. PMID:22911525

  9. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
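
    Composite scaled sensitivities are typically computed from the Jacobian of simulated equivalents with respect to the parameters as css_j = sqrt(Σ_i [(∂y_i/∂b_j) b_j ω_i^(1/2)]² / ND), with ND observations and weights ω_i. The sketch below applies that formula to a random stand-in Jacobian; in practice the Jacobian would come from the model's sensitivity process.

      import numpy as np

      rng = np.random.default_rng(2)
      ND, NP = 50, 4
      J = rng.normal(size=(ND, NP))             # stand-in Jacobian dy_i/db_j
      b = np.array([1e-4, 10.0, 0.3, 5.0])      # parameter values
      w = np.ones(ND)                           # observation weights (1/variance)

      # Dimensionless scaled sensitivities, then composite scaled sensitivities
      dss = J * b[np.newaxis, :] * np.sqrt(w)[:, np.newaxis]
      css = np.sqrt((dss ** 2).sum(axis=0) / ND)
      print(css)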

  10. Sensitivity of numerical simulation models of debris flow to the rheological parameters and application in the engineering environment

    NASA Astrophysics Data System (ADS)

    Rosso, M.; Sesenna, R.; Magni, L.; Demurtas, L.; Uras, G.

    2009-04-01

    Through the application of two-dimensional and one-dimensional commercial models for the simulation of debris flow, and in particular through the reconstruction of known and expected events in the river basin of the Comboè torrent (Aosta Valley, Italy), it has been possible to reach careful conclusions about the calibration of the rheological parameters and the sensitivity of the simulation models to their variability. The geomechanical and volumetric characteristics of the sediment at the base of the debris can introduce uncertainties into model implementation, above all in models that are not purely kinematic and are therefore strongly influenced by the rheological parameters. The parameter that most influences the final result of the applied numerical models is the volumetric solid concentration, which varies in space and time during debris flow propagation; the rheological parameters are in fact described by a power law of the volumetric concentration. The potential and the suitability of a numerical code for engineering and environmental applications have to be assessed not only with reference to the quality and amount of the results, but also to its sensitivity to the variability of the parameters that underlie the inner routines of the program. Therefore, a suitable model should be sensitive to the variability of those parameters that the user can determine with greater precision; on the other hand, it should be sufficiently stable with respect to the variation of those parameters that the user cannot define uniquely, but only within a range of variation. One of the models used for the simulation of debris flow on the Comboè torrent proved to be heavily influenced by small variations of the rheological parameters. Consequently, despite the possibility of carrying out accurate back-analysis of a recent intense event, calibrating the concentration for new expected events proved difficult. That involved an extreme variability of the final results

  11. Ground water flow modeling with sensitivity analyses to guide field data collection in a mountain watershed

    USGS Publications Warehouse

    Johnson, Raymond H.

    2007-01-01

    In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, which is applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities determine the most important input parameter(s) along with the locations of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data in determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce time and money spent, by collecting the most relevant model calibration data.

  12. MOVES regional level sensitivity analysis

    DOT National Transportation Integrated Search

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  13. Sensitivity of ground motion parameters to local site effects for areas characterised by a thick buried low-velocity layer.

    NASA Astrophysics Data System (ADS)

    Farrugia, Daniela; Galea, Pauline; D'Amico, Sebastiano; Paolucci, Enrico

    2016-04-01

    It is well known that earthquake damage at a particular site depends on the source, the path that the waves travel through and the local geology. The latter is capable of amplifying and changing the frequency content of the incoming seismic waves. In regions with sparse or no strong ground motion records, such as Malta (Central Mediterranean), ground motion simulations are used to obtain parameters for purposes of seismic design and analysis. As an input to ground motion simulations, amplification functions related to the shallow subsurface are required. Shear-wave velocity profiles of several sites on the Maltese islands were obtained using the Horizontal-to-Vertical Spectral Ratio (H/V), the Extended Spatial Auto-Correlation (ESAC) technique and a Genetic Algorithm. The sites chosen were all characterised by a layer of Blue Clay, which can be up to 75 m thick, underlying the Upper Coralline Limestone, a fossiliferous coarse-grained limestone. This situation gives rise to a velocity inversion. Available borehole data generally extend down to the top of the Blue Clay layer; therefore, the only way to check the validity of the modelled shear-wave velocity profile is through the thickness of the topmost layer. Surface wave methods are characterised by uncertainties related to the measurements and the model used for interpretation. Moreover, the inversion procedure is also highly non-unique. Such uncertainties are not commonly included in site response analysis. Yet, the propagation of uncertainties from the extracted dispersion curves to inversion solutions can lead to significant differences in the simulations (Boaga et al., 2011). In this study, a series of sensitivity analyses will be presented with the aim of better identifying those stratigraphic properties which can perturb the ground motion simulation results. The stochastic one-dimensional site response analysis algorithm, Extended Source Simulation (EXSIM; Motazedian and Atkinson, 2005), was used to perform

  14. Using synchrotron radiation angiography with a highly sensitive detector to identify impaired peripheral perfusion in rat pulmonary emphysema

    PubMed Central

    Ito, Hiromichi; Matsushita, Shonosuke; Hyodo, Kazuyuki; Sato, Yukio; Sakakibara, Yuzuru

    2013-01-01

    Owing to limitations in spatial resolution and sensitivity, it is difficult for conventional angiography to detect minute changes of perfusion in diffuse lung diseases, including pulmonary emphysema (PE). However, a high-gain avalanche rushing amorphous photoconductor (HARP) detector can give high sensitivity to synchrotron radiation (SR) angiography. SR angiography with a HARP detector provides high spatial resolution and sensitivity, in addition to time resolution owing to its angiographic nature. The purpose of this study was to investigate whether SR angiography with a HARP detector could evaluate altered microcirculation in PE. Two groups of rats were used: group PE and group C (control). Transvenous SR angiography with a HARP detector was performed and histopathological findings were compared. The peak density of contrast material in the peripheral lung was lower in group PE than in group C (p < 0.01). The slope of the linear regression line in the scattering diagrams was also lower in group PE than in group C (p < 0.05). The slope showed a significant negative correlation with the extent of PE in histopathology (p < 0.05, r = 0.61). SR angiography with a HARP detector made it possible to identify impaired microcirculation in PE by means of its high spatial resolution and sensitivity. PMID:23412496

  15. Comprehensive Monte-Carlo simulator for optimization of imaging parameters for high sensitivity detection of skin cancer at the THz

    NASA Astrophysics Data System (ADS)

    Ney, Michael; Abdulhalim, Ibrahim

    2016-03-01

    Skin cancer detection at its early stages has been the focus of a large number of experimental and theoretical studies during the past decades. Among these studies, two prominent approaches presenting high potential are reflectometric sensing in the THz wavelengths region and polarimetric imaging techniques in the visible wavelengths. While the contrast agent and source of sensitivity of THz radiation to cancer-related tissue alterations was considered to be mainly the elevated water content in the cancerous tissue, the polarimetric approach has been verified to enable cancerous tissue differentiation based on cancer-induced structural alterations to the tissue. Combining THz with the polarimetric approach, as considered in this study, is examined in order to enable higher detection sensitivity than previous, purely reflectometric THz measurements. For this, a comprehensive MC simulation of radiative transfer in a complex skin tissue model fitted for the THz domain, which considers the skin's stratified structure, tissue material optical dispersion modeling, surface roughness, scatterers, and substructure organelles, has been developed. Additionally, a narrow-beam Mueller matrix differential analysis technique is suggested for assessing skin-cancer-induced changes in the polarimetric image, enabling the tissue model and MC simulation to be utilized for determining the imaging parameters resulting in maximal detection sensitivity.

  16. How sensitive is earthquake ground motion to source parameters? Insights from a numerical study in the Mygdonian basin

    NASA Astrophysics Data System (ADS)

    Chaljub, Emmanuel; Maufroy, Emeline; deMartin, Florent; Hollender, Fabrice; Guyonnet-Benaize, Cédric; Manakou, Maria; Savvaidis, Alexandros; Kiratzi, Anastasia; Roumelioti, Zaferia; Theodoulidis, Nikos

    2014-05-01

    Understanding the origin of the variability of earthquake ground motion is critical for seismic hazard assessment. Here we present the results of a numerical analysis of the sensitivity of earthquake ground motion to seismic source parameters, focusing on the Mygdonian basin near Thessaloniki (Greece). We use an extended model of the basin (65 km [EW] x 50 km [NS]) which has been elaborated during the Euroseistest Verification and Validation Project. The numerical simulations are performed with two independent codes, both implementing the Spectral Element Method. They rely on a robust, semi-automated, mesh design strategy together with a simple homogenization procedure to define a smooth velocity model of the basin. Our simulations are accurate up to 4 Hz, and include the effects of surface topography and of intrinsic attenuation. Two kinds of simulations are performed: (1) direct simulations of the surface ground motion for real regional events having various back azimuths with respect to the center of the basin; (2) reciprocity-based calculations where the ground motion due to 980 different seismic sources is computed at a few stations in the basin. In the reciprocity-based calculations, we consider epicentral distances varying from 2.5 km to 40 km, source depths from 1 km to 15 km, and we span the range of possible back-azimuths with a 10-degree bin. We will present some results showing (1) the sensitivity of ground motion parameters to the location and focal mechanism of the seismic sources; and (2) the variability of the amplification caused by site effects, as measured by standard spectral ratios, with respect to the source characteristics

  17. Inverse Modeling of Hydrologic Parameters Using Surface Flux and Runoff Observations in the Community Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yu; Hou, Zhangshuan; Huang, Maoyi

    2013-12-10

    This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov chain Monte Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent: as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. The temporal resolution of observations has a larger impact on the results of inverse modeling using heat flux data than using runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
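
    The flavor of the stochastic MCMC calibration can be conveyed with a minimal random-walk Metropolis sampler that calibrates a single parameter of a toy recession model against synthetic observations; CLM4 and the study above involve many parameters and far richer likelihoods, so this is only a sketch of the sampling idea.

      import numpy as np

      rng = np.random.default_rng(3)

      def model(theta, t):
          """Toy runoff model: exponential recession with decay rate theta."""
          return 10.0 * np.exp(-theta * t)

      t_obs = np.linspace(0, 10, 30)
      y_obs = model(0.35, t_obs) + rng.normal(0, 0.3, t_obs.size)   # synthetic data
      sigma = 0.3

      def log_post(theta):
          if not 0.0 < theta < 2.0:              # flat prior on (0, 2)
              return -np.inf
          resid = y_obs - model(theta, t_obs)
          return -0.5 * np.sum((resid / sigma) ** 2)

      theta = 1.0
      lp = log_post(theta)
      chain = []
      for _ in range(20000):                      # random-walk Metropolis
          prop = theta + rng.normal(0, 0.05)
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta)

      chain = np.array(chain[5000:])              # discard burn-in
      print(f"posterior mean = {chain.mean():.3f}, 95% interval = "
            f"({np.quantile(chain, 0.025):.3f}, {np.quantile(chain, 0.975):.3f})")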

  18. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  19. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivity of the measurements taken in the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d(-1) and 10.5 m d(-1). Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10(-4) m and 1.48×10(-5) m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. A methodology for global-sensitivity analysis of time-dependent outputs in systems biology modelling.

    PubMed

    Sumner, T; Shephard, E; Bogle, I D L

    2012-09-07

    One of the main challenges in the development of mathematical and computational models of biological systems is the precise estimation of parameter values. Understanding the effects of uncertainties in parameter values on model behaviour is crucial to the successful use of these models. Global sensitivity analysis (SA) can be used to quantify the variability in model predictions resulting from the uncertainty in multiple parameters and to shed light on the biological mechanisms driving system behaviour. We present a new methodology for global SA in systems biology which is computationally efficient and can be used to identify the key parameters and their interactions which drive the dynamic behaviour of a complex biological model. The approach combines functional principal component analysis with established global SA techniques. The methodology is applied to a model of the insulin signalling pathway, defects of which are a major cause of type 2 diabetes, and a number of key features of the system are identified.
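
    The core of the approach, reducing time-dependent outputs to a few principal component scores and then running the sensitivity analysis on those scores, can be sketched as follows; the two-parameter toy output and the correlation-based sensitivity measure are placeholders, not the insulin signalling model or the specific global SA technique of the paper.

      import numpy as np

      rng = np.random.default_rng(4)
      t = np.linspace(0, 10, 200)

      def time_course(k1, k2):
          """Toy stand-in for a time-dependent model output."""
          return np.exp(-k1 * t) + 0.3 * np.sin(k2 * t)

      # Sample the parameters and collect the output curves
      K = rng.uniform([0.1, 0.5], [1.0, 3.0], size=(500, 2))
      Y = np.array([time_course(k1, k2) for k1, k2 in K])

      # Functional PCA via SVD of the centred output matrix
      Yc = Y - Y.mean(axis=0)
      U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
      scores = U * s                                   # samples x components
      print("variance explained by first two PCs:",
            np.round(s[:2] ** 2 / np.sum(s ** 2), 3))

      # Crude sensitivity: squared correlation of each parameter with each PC score
      for j in range(2):
          r2 = [np.corrcoef(K[:, i], scores[:, j])[0, 1] ** 2 for i in range(2)]
          print(f"PC{j + 1}: R^2 with (k1, k2) = {np.round(r2, 2)}")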

  1. Salt-Sensitive Hypertension and Cardiac Hypertrophy in Transgenic Mice Expressing a Corin Variant Identified in African Americans

    PubMed Central

    Wang, Wei; Cui, Yujie; Shen, Jianzhong; Jiang, Jingjing; Chen, Shenghan; Peng, Jianhao; Wu, Qingyu

    2012-01-01

    African Americans represent a high risk population for salt-sensitive hypertension and heart disease but the underlying mechanism remains unclear. Corin is a cardiac protease that regulates blood pressure by activating natriuretic peptides. A corin gene variant (T555I/Q568P) was identified in African Americans with hypertension and cardiac hypertrophy. In this study, we test the hypothesis that the corin variant contributes to the hypertensive and cardiac hypertrophic phenotype in vivo. Transgenic mice were generated to express wild-type or T555I/Q568P variant corin in the heart under the control of α-myosin heavy chain promoter. The mice were crossed into a corin knockout background to create KO/TgWT and KO/TgV mice that expressed WT or variant corin, respectively, in the heart. Functional studies showed that KO/TgV mice had significantly higher levels of pro-atrial natriuretic peptide in the heart compared with that in control KO/TgWT mice, indicating that the corin variant was defective in processing natriuretic peptides in vivo. By radiotelemetry, corin KO/TgV mice were found to have hypertension that was sensitive to dietary salt loading. The mice also developed cardiac hypertrophy at 12–14 months of age when fed a normal salt diet or at a younger age when fed a high salt diet. The phenotype of salt-sensitive hypertension and cardiac hypertrophy in KO/TgV mice closely resembles the pathological findings in African Americans who carry the corin variant. The results indicate that corin defects may represent an important mechanism in salt-sensitive hypertension and cardiac hypertrophy in African Americans. PMID:22987923

  2. Estimating parameters from rotating ring disc electrode measurements

    DOE PAGES

    Santhanagopalan, Shriram; White, Ralph E.

    2017-10-21

    Rotating ring disc electrode (RRDE) experiments are a classic tool for investigating the kinetics of electrochemical reactions. Several standardized methods exist for extracting transport parameters and reaction rate constants from RRDE measurements. In this work, we compare some approximate solutions to the convective diffusion equation that are widely used in the literature to a rigorous numerical solution of the Nernst-Planck equations coupled to the three-dimensional flow problem. In light of these computational advancements, we explore design aspects of the RRDE that help improve the sensitivity of our parameter estimation procedure to experimental data. We use oxygen reduction in acidic media, involving three charge transfer reactions and a chemical reaction, as an example, and identify ways to isolate the reaction currents for the individual processes in order to accurately estimate the exchange current densities.
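
    One of the standardized relations alluded to above is the Levich equation for the disc limiting current, i_L = 0.620 n F A D^(2/3) ν^(-1/6) ω^(1/2) C, from which a diffusion coefficient can be recovered from the slope of i_L versus ω^(1/2). The sketch below uses assumed values for a 4-electron oxygen reduction; it illustrates the classic approximate treatment rather than the full Nernst-Planck solution discussed in the paper.

      import numpy as np

      F = 96485.0          # Faraday constant, C/mol
      n = 4                # electrons transferred (4-electron oxygen reduction)
      A = 0.196e-4         # disc area, m^2 (5 mm diameter, assumed)
      D = 1.9e-9           # O2 diffusion coefficient, m^2/s (assumed)
      nu = 1.0e-6          # kinematic viscosity, m^2/s
      C = 1.2              # bulk O2 concentration, mol/m^3

      rpm = np.array([400.0, 900.0, 1600.0, 2500.0])
      omega = 2.0 * np.pi * rpm / 60.0                       # rotation rate, rad/s
      i_L = 0.620 * n * F * A * D ** (2 / 3) * nu ** (-1 / 6) * np.sqrt(omega) * C

      # Recover D from the slope of i_L vs sqrt(omega), as done when fitting data
      slope = np.polyfit(np.sqrt(omega), i_L, 1)[0]
      D_est = (slope / (0.620 * n * F * A * nu ** (-1 / 6) * C)) ** 1.5
      print(np.round(i_L * 1e3, 3), "mA;  D recovered =", D_est, "m^2/s")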

  3. Global Sensitivity Analysis as Good Modelling Practices tool for the identification of the most influential process parameters of the primary drying step during freeze-drying.

    PubMed

    Van Bockstal, Pieter-Jan; Mortier, Séverine Thérèse F C; Corver, Jos; Nopens, Ingmar; Gernaey, Krist V; De Beer, Thomas

    2018-02-01

    Pharmaceutical batch freeze-drying is commonly used to improve the stability of biological therapeutics. The primary drying step is regulated by the dynamic settings of the adaptable process variables, shelf temperature Ts and chamber pressure Pc. Mechanistic modelling of the primary drying step leads to the optimal dynamic combination of these adaptable process variables as a function of time. According to Good Modelling Practices, a Global Sensitivity Analysis (GSA) is essential for appropriate model building. In this study, both a regression-based and a variance-based GSA were conducted on a validated mechanistic primary drying model to estimate the impact of several model input parameters on two output variables, the product temperature at the sublimation front Ti and the sublimation rate ṁsub. Ts was identified as the most influential parameter on both Ti and ṁsub, followed by Pc and the dried product mass transfer resistance αRp for Ti and ṁsub, respectively. The GSA findings were experimentally validated for ṁsub via a Design of Experiments (DoE) approach. The results indicated that GSA is a very useful tool for evaluating the impact of different process variables on the model outcome, leading to essential process knowledge without the need for time-consuming experiments (e.g., DoE). Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Utilizing High-Performance Computing to Investigate Parameter Sensitivity of an Inversion Model for Vadose Zone Flow and Transport

    NASA Astrophysics Data System (ADS)

    Fang, Z.; Ward, A. L.; Fang, Y.; Yabusaki, S.

    2011-12-01

    High-resolution geologic models have proven effective in improving the accuracy of subsurface flow and transport predictions. However, many of the parameters in subsurface flow and transport models cannot be determined directly at the scale of interest and must be estimated through inverse modeling. A major challenge, particularly in vadose zone flow and transport, is the inversion of the highly nonlinear, high-dimensional problem, as current methods are not readily scalable for large-scale, multi-process models. In this paper we describe the implementation of a fully automated approach for addressing complex parameter optimization and sensitivity issues on massively parallel multi- and many-core systems. The approach is based on the integration of PNNL's extreme-scale Subsurface Transport Over Multiple Phases (eSTOMP) simulator, which uses the Global Array toolkit, with BeoPEST, the Beowulf-cluster-inspired parallel nonlinear parameter estimation software, run in MPI mode. In the eSTOMP/BeoPEST implementation, a pre-processor generates all of the PEST input files based on the eSTOMP input file. Simulation results for comparison with observations are extracted automatically at each time step, eliminating the need for post-processing data extraction. The inversion framework was tested with three different experimental data sets: one-dimensional water flow at the Hanford Grass Site; an irrigation and infiltration experiment at the Andelfingen Site; and a three-dimensional injection experiment at Hanford's Sisson and Lu Site. Good agreement between observations and simulations is achieved in all three applications, in both the parameter estimates and the reproduction of water dynamics. Results show that the eSTOMP/BeoPEST approach is highly scalable and can be run efficiently with hundreds or thousands of processors. BeoPEST is fault tolerant and new nodes can be dynamically added and removed. A major advantage of this approach is the ability to use high-resolution geologic models to preserve

  5. Specificity and Sensitivity of Claims-Based Algorithms for Identifying Members of Medicare+Choice Health Plans That Have Chronic Medical Conditions

    PubMed Central

    Rector, Thomas S; Wickstrom, Steven L; Shah, Mona; Thomas Greeenlee, N; Rheault, Paula; Rogowski, Jeannette; Freedman, Vicki; Adams, John; Escarce, José J

    2004-01-01

    Objective To examine the effects of varying diagnostic and pharmaceutical criteria on the performance of claims-based algorithms for identifying beneficiaries with hypertension, heart failure, chronic lung disease, arthritis, glaucoma, and diabetes. Study Setting Secondary 1999–2000 data from two Medicare+Choice health plans. Study Design Retrospective analysis of algorithm specificity and sensitivity. Data Collection Physician, facility, and pharmacy claims data were extracted from electronic records for a sample of 3,633 continuously enrolled beneficiaries who responded to an independent survey that included questions about chronic diseases. Principal Findings Compared to an algorithm that required a single medical claim in a one-year period that listed the diagnosis, either requiring that the diagnosis be listed on two separate claims or requiring that the diagnosis be listed on one claim for a face-to-face encounter with a health care provider significantly increased specificity for the conditions studied by 0.03 to 0.11. Specificity of algorithms was significantly improved by 0.03 to 0.17 when both a medical claim with a diagnosis and a pharmacy claim for a medication commonly used to treat the condition were required. Sensitivity improved significantly by 0.01 to 0.20 when the algorithm relied on a medical claim with a diagnosis or a pharmacy claim, and by 0.05 to 0.17 when two years rather than one year of claims data were analyzed. Algorithms with specificity of more than 0.95 were found for all six conditions. Sensitivity above 0.90 was not achieved for all conditions. Conclusions Varying claims criteria improved the performance of case-finding algorithms for six chronic conditions. Highly specific, and sometimes sensitive, algorithms for identifying members of health plans with several chronic conditions can be developed using claims data. PMID:15533190
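
    The four performance measures reported here follow directly from a 2x2 confusion matrix comparing the claims-based algorithm against the survey reference standard; the counts in the sketch below are illustrative, not the study's data.

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Hypothetical counts: algorithm-positive vs. survey-confirmed condition
      print(diagnostic_metrics(tp=180, fp=25, fn=40, tn=755))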

  6. BOTH HYPOMETHYLATION AND HYPERMETHYLATION OF DNA ASSOCIATED WITH ARSENITE EXPOSURE IN CULTURES OF HUMAN CELLS IDENTIFIED BY METHYLATION-SENSITIVE ARBITRARILY-PRIMED PCR

    EPA Science Inventory

    Differentially Methylated DNA Sequences Associated with Exposure to Arsenite in Cultures of Human Cells Identified by Methylation-Sensitive-Primed PCR

    Arsenic, a known human carcinogen, is converted to methylated derivatives by a methyltransferase (Mtase) and its biotra...

  7. Parametric Sensitivity Analysis of Precipitation at Global and Local Scales in the Community Atmosphere Model CAM5

    DOE PAGES

    Qian, Yun; Yan, Huiping; Hou, Zhangshuan; ...

    2015-04-10

    We investigate the sensitivity of precipitation characteristics (mean, extremes and diurnal cycle) to a set of uncertain parameters that influence the qualitative and quantitative behavior of the cloud and aerosol processes in the Community Atmosphere Model (CAM5). We adopt both Latin hypercube and quasi-Monte Carlo sampling approaches to effectively explore the high-dimensional parameter space and then conduct two large sets of simulations. One set consists of 1100 simulations (cloud ensemble) perturbing 22 parameters related to cloud physics and convection, and the other set consists of 256 simulations (aerosol ensemble) focusing on 16 parameters related to aerosols and cloud microphysics. Results show that, for the 22 parameters perturbed in the cloud ensemble, the six having the greatest influence on the global mean precipitation are identified, three of which (related to the deep convection scheme) are the primary contributors to the total variance of the phase and amplitude of the precipitation diurnal cycle over land. The extreme precipitation characteristics are sensitive to fewer parameters. The precipitation does not always respond monotonically to parameter changes. The influence of individual parameters does not depend on the sampling approach or the concomitant parameters selected. Generally, the GLM is able to explain more of the parametric sensitivity of global precipitation than of local or regional features. The total explained variance for precipitation is primarily due to contributions from the individual parameters (75-90% in total). The total variance shows significant seasonal variability in the mid-latitude continental regions, but is very small in tropical continental regions.
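
    Building such a perturbed-parameter ensemble typically starts from a space-filling design; the sketch below draws a Latin hypercube sample with scipy and maps it to physical ranges. The parameter names and bounds are invented for illustration, not the CAM5 parameters of the study.

      import numpy as np
      from scipy.stats import qmc

      names = ["entrainment_rate", "autoconversion", "evap_efficiency", "cape_timescale"]
      lower = np.array([1e-4, 1e-4, 0.1, 1800.0])     # assumed lower bounds
      upper = np.array([5e-3, 1e-2, 1.0, 28800.0])    # assumed upper bounds

      sampler = qmc.LatinHypercube(d=len(names), seed=0)
      unit = sampler.random(n=256)                    # 256 members in [0, 1)^d
      ensemble = qmc.scale(unit, lower, upper)        # map to physical ranges

      for row in ensemble[:3]:                        # first few ensemble members
          print(dict(zip(names, np.round(row, 5))))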

  8. THP-1 monocytes but not macrophages as a potential alternative for CD34+ dendritic cells to identify chemical skin sensitizers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambrechts, Nathalie; Verstraelen, Sandra; Lodewyckx, Hanne

    2009-04-15

    Early detection of the sensitizing potential of chemicals is an emerging issue for the chemical, pharmaceutical and cosmetic industries. In our institute, an in vitro classification model for the prediction of chemical-induced skin sensitization based on gene expression signatures in human CD34+ progenitor-derived dendritic cells (DC) has been developed. This primary cell model is able to closely mimic the induction phase of sensitization by Langerhans cells in the skin, but it has drawbacks, such as the availability of cord blood. The aim of this study was to investigate whether human in vitro cultured THP-1 monocytes or macrophages display a similar expression profile for 13 predictive gene markers previously identified in DC and whether they also possess a discriminating capacity towards skin sensitizers and non-sensitizers based on these marker genes. To this end, the cell models were exposed to 5 skin sensitizers (ammonium hexachloroplatinate IV, 1-chloro-2,4-dinitrobenzene, eugenol, para-phenylenediamine, and tetramethylthiuram disulfide) and 5 non-sensitizers (L-glutamic acid, methyl salicylate, sodium dodecyl sulfate, tributyltin chloride, and zinc sulfate) for 6, 10, and 24 h, and mRNA expression of the 13 genes was analyzed using real-time RT-PCR. The transcriptional response of 7 out of 13 genes in THP-1 monocytes was significantly correlated with that in DC, whereas only 2 out of 13 genes in THP-1 macrophages were. After cross-validation of a discriminant analysis of the gene expression profiles in the THP-1 monocytes, this cell model was also shown to have the capacity to distinguish skin sensitizers from non-sensitizers. However, the DC model was superior to the monocyte model for the discrimination of (non-)sensitizing chemicals.

  9. Evolutionary Analysis Predicts Sensitive Positions of MMP20 and Validates Newly- and Previously-Identified MMP20 Mutations Causing Amelogenesis Imperfecta

    PubMed Central

    Gasse, Barbara; Prasad, Megana; Delgado, Sidney; Huckert, Mathilde; Kawczynski, Marzena; Garret-Bernardin, Annelyse; Lopez-Cazaux, Serena; Bailleul-Forestier, Isabelle; Manière, Marie-Cécile; Stoetzel, Corinne; Bloch-Zupan, Agnès; Sire, Jean-Yves

    2017-01-01

    Amelogenesis imperfecta (AI) designates a group of genetic diseases characterized by a large range of enamel disorders causing important social and health problems. These defects can result from mutations in enamel matrix proteins or protease encoding genes. A range of mutations in the enamel cleavage enzyme matrix metalloproteinase-20 gene (MMP20) produce enamel defects of varying severity. To address how various alterations produce a range of AI phenotypes, we performed a targeted analysis to find MMP20 mutations in French patients diagnosed with non-syndromic AI. Genomic DNA was isolated from saliva and MMP20 exons and exon-intron boundaries sequenced. We identified several homozygous or heterozygous mutations, putatively involved in the AI phenotypes. To validate missense mutations and predict sensitive positions in the MMP20 sequence, we evolutionarily compared 75 sequences extracted from the public databases using the Datamonkey webserver. These sequences were representative of mammalian lineages, covering more than 150 million years of evolution. This analysis allowed us to find 324 sensitive positions (out of the 483 MMP20 residues), pinpoint functionally important domains, and build an evolutionary chart of important conserved MMP20 regions. This is an efficient tool to identify new- and previously-identified mutations. We thus identified six functional MMP20 mutations in unrelated families, finding two novel mutated sites. The genotypes and phenotypes of these six mutations are described and compared. To date, 13 MMP20 mutations causing AI have been reported, making these genotypes and associated hypomature enamel phenotypes the most frequent in AI. PMID:28659819

  10. Evolutionary Analysis Predicts Sensitive Positions of MMP20 and Validates Newly- and Previously-Identified MMP20 Mutations Causing Amelogenesis Imperfecta.

    PubMed

    Gasse, Barbara; Prasad, Megana; Delgado, Sidney; Huckert, Mathilde; Kawczynski, Marzena; Garret-Bernardin, Annelyse; Lopez-Cazaux, Serena; Bailleul-Forestier, Isabelle; Manière, Marie-Cécile; Stoetzel, Corinne; Bloch-Zupan, Agnès; Sire, Jean-Yves

    2017-01-01

    Amelogenesis imperfecta (AI) designates a group of genetic diseases characterized by a large range of enamel disorders causing important social and health problems. These defects can result from mutations in enamel matrix proteins or protease encoding genes. A range of mutations in the enamel cleavage enzyme matrix metalloproteinase-20 gene ( MMP20 ) produce enamel defects of varying severity. To address how various alterations produce a range of AI phenotypes, we performed a targeted analysis to find MMP20 mutations in French patients diagnosed with non-syndromic AI. Genomic DNA was isolated from saliva and MMP20 exons and exon-intron boundaries sequenced. We identified several homozygous or heterozygous mutations, putatively involved in the AI phenotypes. To validate missense mutations and predict sensitive positions in the MMP20 sequence, we evolutionarily compared 75 sequences extracted from the public databases using the Datamonkey webserver. These sequences were representative of mammalian lineages, covering more than 150 million years of evolution. This analysis allowed us to find 324 sensitive positions (out of the 483 MMP20 residues), pinpoint functionally important domains, and build an evolutionary chart of important conserved MMP20 regions. This is an efficient tool to identify new- and previously-identified mutations. We thus identified six functional MMP20 mutations in unrelated families, finding two novel mutated sites. The genotypes and phenotypes of these six mutations are described and compared. To date, 13 MMP20 mutations causing AI have been reported, making these genotypes and associated hypomature enamel phenotypes the most frequent in AI.

  11. Sensitivities of Modeled Tropical Cyclones to Surface Friction and the Coriolis Parameter

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Chen, Baode; Tao, Wei-Kuo; Lau, William K. M. (Technical Monitor)

    2002-01-01

    In this investigation the sensitivities of a 2-D tropical cyclone (TC) model to the surface frictional coefficient and the Coriolis parameter are studied and their implication is discussed. The model used is an axisymmetric version of the latest version of the Goddard cloud ensemble model. The model has stretched vertical grids with 33 levels varying from 30 m near the bottom to 1140 m near the top. The vertical domain is about 21 km. The horizontal domain covers a radius of 962 km (770 grid points) with a grid size of 1.25 km. The time step is 10 seconds. An open lateral boundary condition is used. The sea surface temperature is specified at 29°C. Unless specified otherwise, the Coriolis parameter is set at its value at 15 deg N. Newtonian cooling is used with a time scale of 12 hours. The reference vertical temperature profile used in the Newtonian cooling is that of Jordan. The Newtonian cooling models not only the effect of radiative processes but also the effect of processes with scales larger than that of the TC. Our experiments showed that if the Newtonian cooling is replaced by a radiation package, the simulated TC is much weaker. The initial condition has a temperature uniform in the radial direction and its vertical profile is that of Jordan. The initial winds are a weak Rankine vortex in the tangential winds superimposed on a resting atmosphere. The initial sea level pressure is set at 1015 hPa everywhere. Since there is no surface pressure perturbation, the initial condition is not in gradient balance. This initial condition is enough to lead to cyclogenesis, but the initial stage (say, the first 24 hrs) is not considered to resemble anything observed. The control experiment reaches quasi-equilibrium after about 10 days with an eye wall extending from 15 to 25 km radius, which is reasonable compared with observations. The maximum surface wind of more than 70 m/s is located at about 18 km radius. The minimum sea level pressure on day 10 is about 886 hPa. Thus the
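
    As a small worked example of one configuration detail, the Coriolis parameter at 15 deg N follows from the standard relation f = 2*Omega*sin(latitude); the value of Earth's rotation rate below is a textbook constant, not a quantity taken from the paper.

        import math

        OMEGA = 7.292e-5          # Earth's rotation rate, rad/s
        lat = math.radians(15.0)  # latitude used in the control experiment
        f = 2.0 * OMEGA * math.sin(lat)
        print(f"Coriolis parameter at 15 deg N: {f:.3e} s^-1")  # about 3.8e-5 s^-1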

  12. Dynamic Contrast-Enhanced MRI of Cervical Cancers: Temporal Percentile Screening of Contrast Enhancement Identifies Parameters for Prediction of Chemoradioresistance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Erlend K.F.; Hole, Knut Hakon; Lund, Kjersti V.

    Purpose: To systematically screen the tumor contrast enhancement of locally advanced cervical cancers to assess the prognostic value of two descriptive parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods and Materials: This study included a prospectively collected cohort of 81 patients who underwent DCE-MRI with gadopentetate dimeglumine before chemoradiotherapy. The following descriptive DCE-MRI parameters were extracted voxel by voxel and presented as histograms for each time point in the dynamic series: normalized relative signal increase (nRSI) and normalized area under the curve (nAUC). The first to 100th percentiles of the histograms were included in a log-rank survival test, resulting in p value and relative risk maps of all percentile-time intervals for each DCE-MRI parameter. The maps were used to evaluate the robustness of the individual percentile-time pairs and to construct prognostic parameters. Clinical endpoints were locoregional control and progression-free survival. The study was approved by the institutional ethics committee. Results: The p value maps of nRSI and nAUC showed a large continuous region of percentile-time pairs that were significantly associated with locoregional control (p < 0.05). These parameters had prognostic impact independent of tumor stage, volume, and lymph node status on multivariate analysis. Only a small percentile-time interval of nRSI was associated with progression-free survival. Conclusions: The percentile-time screening identified DCE-MRI parameters that predict long-term locoregional control after chemoradiotherapy of cervical cancer.
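
    A minimal sketch of the screening idea, assuming the lifelines package and entirely synthetic follow-up times: patients are split at one hypothetical percentile-time pair of the nRSI histogram and compared with a log-rank test; repeating this over all percentile-time pairs would build the p value map described above.

        import numpy as np
        from lifelines.statistics import logrank_test

        rng = np.random.default_rng(2)
        # Synthetic months-to-event for the two hypothetical groups.
        high_nrsi = rng.exponential(60.0, size=40)
        low_nrsi = rng.exponential(35.0, size=41)
        observed_high = np.ones_like(high_nrsi)   # 1 = locoregional failure observed
        observed_low = np.ones_like(low_nrsi)

        result = logrank_test(high_nrsi, low_nrsi,
                              event_observed_A=observed_high,
                              event_observed_B=observed_low)
        print(result.p_value)   # one cell of the percentile-time p value map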

  13. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behaviors according to materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for sensitivity analysis. A strong frequency-dependent parameter hierarchy is shown. Sensitivity investigations confirm that resistivity is the most influential parameter when acoustic absorption and surface impedance of porous materials with rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In the second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including mechanical effects of the frame, and conclusions are drawn through numerical simulations.
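
    A minimal sketch of a variance-based (Sobol) screening of this kind, assuming the SALib package; the five parameter names, their ranges, and the placeholder absorption function are illustrative assumptions, not the paper's model or values.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Five rigid-frame parameters (illustrative names and ranges only).
        problem = {
            "num_vars": 5,
            "names": ["resistivity", "porosity", "tortuosity",
                      "viscous_length", "thermal_length"],
            "bounds": [[5e3, 2e5], [0.90, 0.99], [1.0, 3.0],
                       [1e-5, 3e-4], [1e-5, 6e-4]],
        }

        def absorption(x):
            # Placeholder for the acoustic absorption evaluated at one frequency.
            return np.tanh(1e-5 * x[0]) * x[1] / x[2]

        X = saltelli.sample(problem, 1024)
        Y = np.apply_along_axis(absorption, 1, X)
        Si = sobol.analyze(problem, Y)
        print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order indices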

  14. Advanced Electrocardiography Can Identify Occult Cardiomyopathy in Doberman Pinschers

    NASA Technical Reports Server (NTRS)

    Spiljak, M.; Petric, A. Domanjko; Wilberg, M.; Olsen, L. H.; Stepancic, A.; Schlegel, T. T.; Starc, V.

    2011-01-01

    Recently, multiple advanced resting electrocardiographic (A-ECG) techniques have improved the diagnostic value of short-duration ECG in detection of dilated cardiomyopathy (DCM) in humans. This study investigated whether 12-lead A-ECG recordings could accurately identify the occult phase of DCM in dogs. Short-duration (3-5 min) high-fidelity 12-lead ECG recordings were obtained from 31 privately-owned, clinically healthy Doberman Pinschers (5.4 +/- 1.7 years, 11/20 males/females). Dogs were divided into 2 groups: 1) 19 healthy dogs with normal echocardiographic M-mode measurements: left ventricular internal diameter in diastole (LVIDd ≤ 47 mm) and in systole (LVIDs ≤ 38 mm) and normal 24-hour ECG recordings (<50 ventricular premature complexes, VPCs); and 2) 12 dogs with occult DCM: 11/12 dogs had increased M-mode measurements (LVIDd ≥ 49 mm and/or LVIDs ≥ 40 mm) and 5/11 dogs also had >100 VPCs/24h; 1/12 dogs had only abnormal 24-hour ECG recordings (>100 VPCs/24h). ECG recordings were evaluated via custom software programs to calculate multiple parameters of high-frequency (HF) QRS ECG, heart rate variability, QT variability, waveform complexity and 3-D ECG. Student's t-tests determined 19 ECG parameters that were significantly different (P < 0.05) between groups. Principal component factor analysis identified a 5-factor model with 81.4% explained variance. QRS dipolar and non-dipolar voltages, Cornell voltage criteria and QRS waveform residuum were increased significantly (P < 0.05), whereas mean HF QRS amplitude was decreased significantly (P < 0.05) in dogs with occult DCM. For the 5 selected parameters the prediction of occult DCM was performed using a binary logistic regression model with Chi-square tested significance (P < 0.01). ROC analyses showed that the five selected ECG parameters could identify occult DCM with sensitivity 89% and specificity 83%. Results suggest that 12-lead A-ECG might improve the diagnostic value of short-duration ECG in earlier detection

  15. Sharing and Reuse of Sensitive Data and Samples: Supporting Researchers in Identifying Ethical and Legal Requirements

    PubMed Central

    Schluender, Irene; Smee, Carol; Suhr, Stephanie

    2015-01-01

    Availability of and access to data and biosamples are essential in medical and translational research, where their reuse and repurposing by the wider research community can maximize their value and accelerate discovery. However, sharing human-related data or samples is complicated by ethical, legal, and social sensitivities. The specific ethical and legal requirements linked to sensitive data are often unfamiliar to life science researchers who, faced with vast amounts of complex, fragmented, and sometimes even contradictory information, may not feel competent to navigate through it. In this case, the impulse may be not to share the data in order to safeguard against unintentional misuse. Consequently, helping data providers to identify relevant ethical and legal requirements and how they might address them is an essential and frequently neglected step in removing possible hurdles to data and sample sharing in the life sciences. Here, we describe the complex regulatory context and discuss relevant online tools—one which the authors co-developed—targeted at assisting providers of sensitive data or biosamples with ethical and legal questions. The main results are (1) that the different approaches of the tools assume different user needs and prior knowledge of ethical and legal requirements, affecting how a service is designed and its usefulness, (2) that there is much potential for collaboration between tool providers, and (3) that enriched annotations of services (e.g., update status, completeness of information, and disclaimers) would increase their value and facilitate quick assessment by users. Further, there is still work to do with respect to providing researchers using sensitive data or samples with truly ‘useful’ tools that do not require pre-existing, in-depth knowledge of legal and ethical requirements or time to delve into the details. Ultimately, separate resources, maintained by experts familiar with the respective fields of research, may be

  16. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-05-01

    Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining parameter sensitivity and the other choosing the optimum initial value of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.
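
    The third step (the downhill simplex search) can be sketched with SciPy's Nelder-Mead optimizer; the objective function and parameter values below are placeholders, since the real metric aggregates many fields of GCM output.

        import numpy as np
        from scipy.optimize import minimize

        def evaluation_metric(params):
            # Placeholder objective: squared distance from a hypothetical optimum.
            # A real metric would score the simulated climate against observations.
            target = np.array([0.10, 1.0e-3, 0.5])
            return float(np.sum((params - target) ** 2))

        # Steps 1-2 (sensitivity screening and choice of initial values) would
        # narrow the parameter set; step 3 is the downhill simplex search.
        x0 = np.array([0.30, 2.0e-3, 0.2])
        res = minimize(evaluation_metric, x0, method="Nelder-Mead")
        print(res.x, res.fun)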

  17. An integrated approach for the knowledge discovery in computer simulation models with a multi-dimensional parameter space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khawli, Toufik Al; Eppelt, Urs; Hermanns, Torsten

    2016-06-08

    In production industries, parameter identification, sensitivity analysis and multi-dimensional visualization are vital steps in the planning process for achieving optimal designs and gaining valuable information. Sensitivity analysis and visualization can help identify the most influential parameters, quantify their contribution to the model output, reduce the model complexity, and enhance the understanding of the model behavior. Typically, this requires a large number of simulations, which can be both very expensive and time-consuming when the simulation models are numerically complex and the number of parameter inputs increases. There are three main constituent parts in this work. The first part is to replace the numerical, physical model with an accurate surrogate model, the so-called metamodel. The second part includes a multi-dimensional visualization approach for the visual exploration of metamodels. In the third part, the metamodel is used to provide the two global sensitivity measures: i) the Elementary Effect for screening the parameters, and ii) the variance decomposition method for calculating the Sobol indices that quantify both the main and interaction effects. The application of the proposed approach is illustrated with an industrial application with the goal of optimizing a drilling process using a Gaussian laser beam.

  18. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  19. Definition and sensitivity of the conceptual MORDOR rainfall-runoff model parameters using different multi-criteria calibration strategies

    NASA Astrophysics Data System (ADS)

    Garavaglia, F.; Seyve, E.; Gottardi, F.; Le Lay, M.; Gailhard, J.; Garçon, R.

    2014-12-01

    MORDOR is a conceptual hydrological model extensively used in Électricité de France (EDF, French electric utility company) operational applications: (i) hydrological forecasting, (ii) flood risk assessment, (iii) water balance and (iv) climate change studies. MORDOR is a lumped, elevation-based reservoir model with hourly or daily areal rainfall and air temperature as the driving input data. The principal hydrological processes represented are evapotranspiration, direct and indirect runoff, groundwater, snow accumulation and melt, and routing. The model has been intensively used at EDF for more than 20 years, in particular for modeling French mountainous watersheds. Regarding parameter calibration, we propose and test alternative multi-criteria techniques based on two specific approaches: automatic calibration using single-objective functions and a priori parameter calibration founded on hydrological watershed features. The automatic calibration approach uses single-objective functions, based on Kling-Gupta efficiency, to quantify the agreement between the simulated and observed runoff, focusing on four different runoff samples: (i) time-series sample, (ii) annual hydrological regime, (iii) monthly cumulative distribution functions and (iv) recession sequences. The primary purpose of this study is to analyze the definition and sensitivity of the MORDOR parameters by testing different calibration techniques in order to: (i) simplify the model structure, (ii) increase the calibration-validation performance of the model and (iii) reduce the equifinality problem of the calibration process. We propose an alternative calibration strategy that reaches these goals. The analysis is illustrated by calibrating the MORDOR model to daily data for 50 watersheds located in French mountainous regions.

  20. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  1. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
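
    The adjoint idea for a linear steady-state flow problem can be shown in a few lines: with a discretized system A(k) h = b and a performance measure J = c·h, one adjoint solve gives dJ/dk for any number of parameters. The two-node system below is a toy illustration, not the Leadville Formation model.

        import numpy as np

        # Discrete steady-state flow A(k) h = b with performance measure J = c @ h.
        k = 2.0                                    # hydraulic conductivity (one scalar here)
        M = np.array([[2.0, -1.0], [-1.0, 2.0]])   # conductivity-independent stencil
        A = k * M
        dA_dk = M
        b = np.array([1.0, 0.0])                   # recharge forcing
        c = np.array([0.0, 1.0])                   # J picks the head at node 2

        h = np.linalg.solve(A, b)                  # primary problem
        lam = np.linalg.solve(A.T, c)              # adjoint problem
        dJ_dk = -lam @ (dA_dk @ h)                 # adjoint sensitivity
        print(dJ_dk)

        # Finite-difference check of the same sensitivity.
        eps = 1e-6
        h_eps = np.linalg.solve((k + eps) * M, b)
        print((c @ h_eps - c @ h) / eps)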

  2. Identifying aMCI with Functional Connectivity Network Characteristics based on Subtle AAL Atlas.

    PubMed

    Zhuo, Zhizheng; Mo, Xiao; Ma, Xiangyu; Han, Ying; Li, Haiyun

    2018-05-02

    To investigate the subtle functional connectivity alterations of aMCI based on an AAL atlas with 1024 regions (AAL_1024 atlas). Functional MRI images of 32 aMCI patients (male/female: 15/17, ages: 66.8±8.36 y) and 35 normal controls (male/female: 13/22, ages: 62.4±8.14 y) were obtained in this study. First, functional connectivity networks were constructed using Pearson's correlation based on the subtle AAL_1024 atlas. Then, local and global network parameters were calculated from the thresholded functional connectivity matrices. Finally, multiple-comparison analysis was performed on these parameters to find the functional network alterations of aMCI. Furthermore, several classifiers were adopted to identify aMCI using the network parameters. More subtle local brain functional alterations were detected using the AAL_1024 atlas, and predominant nodes, including the hippocampus, inferior temporal gyrus, and inferior parietal gyrus, were identified that were not detected by the AAL_90 atlas. The identification of aMCI from normal controls was significantly improved, with the highest accuracy (98.51%), sensitivity (100%) and specificity (97.14%) compared to those obtained using the AAL_90 atlas (88.06%, 84.38% and 91.43% for the highest accuracy, sensitivity and specificity, respectively). More subtle functional connectivity alterations of aMCI could be found based on the AAL_1024 atlas than based on the AAL_90 atlas, and the identification of aMCI could also be improved. Copyright © 2018. Published by Elsevier B.V.
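
    A minimal sketch of the network-construction step described above, using synthetic time series: Pearson correlation between regional signals, thresholding into a binary network, and two simple network parameters. The time-series length, threshold, and metrics are illustrative assumptions, not the study's pipeline.

        import numpy as np

        # Synthetic preprocessed fMRI time series: 200 time points x 1024 regions.
        rng = np.random.default_rng(1)
        ts = rng.normal(size=(200, 1024))

        fc = np.corrcoef(ts.T)                        # Pearson functional connectivity
        np.fill_diagonal(fc, 0.0)
        adj = (np.abs(fc) > 0.3).astype(int)          # threshold into a binary network

        degree = adj.sum(axis=1)                      # a simple local parameter per node
        density = adj.sum() / (adj.size - len(adj))   # a simple global parameter
        print(degree[:5], round(float(density), 4))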

  3. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of valuation of water by the agricultural sector in particular. Thus, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across this area under changing public policy and climatic conditions, while taking into account the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis is performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence these are: i) coefficient of crop yield response to water, ii) average daily weight gain of livestock, iii) exchange of livestock reproduction, iv) maximum yield of crops, v) supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indices ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the model results, and highlight the need to pay particular attention to their estimation. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
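
    A minimal sketch of the One-Factor-At-A-Time screening described above, with a placeholder objective function and made-up parameter names; it only illustrates the perturb-one-input-and-normalize pattern, not the agro-economic model itself.

        import numpy as np

        def gross_margin(p):
            # Placeholder response; the real model aggregates crop and livestock
            # margins under water-allocation constraints.
            return 2.0 * p[0] + p[1] ** 2 + 0.5 * p[2]

        baseline = np.array([1.0, 0.8, 1.2])
        y0 = gross_margin(baseline)

        # Perturb each input by +10% in turn and record the normalized change.
        names = ["yield_response", "daily_weight_gain", "water_supply"]
        for i, name in enumerate(names):
            p = baseline.copy()
            p[i] *= 1.10
            index = (gross_margin(p) - y0) / (0.10 * y0)
            print(name, round(float(index), 3))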

  4. Vectorial capacity and vector control: reconsidering sensitivity to parameters for malaria elimination

    PubMed Central

    Brady, Oliver J.; Godfray, H. Charles J.; Tatem, Andrew J.; Gething, Peter W.; Cohen, Justin M.; McKenzie, F. Ellis; Perkins, T. Alex; Reiner, Robert C.; Tusting, Lucy S.; Sinka, Marianne E.; Moyes, Catherine L.; Eckhoff, Philip A.; Scott, Thomas W.; Lindsay, Steven W.; Hay, Simon I.; Smith, David L.

    2016-01-01

    Background Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Methods and Results Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Conclusions Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. PMID:26822603

  5. Vectorial capacity and vector control: reconsidering sensitivity to parameters for malaria elimination.

    PubMed

    Brady, Oliver J; Godfray, H Charles J; Tatem, Andrew J; Gething, Peter W; Cohen, Justin M; McKenzie, F Ellis; Perkins, T Alex; Reiner, Robert C; Tusting, Lucy S; Sinka, Marianne E; Moyes, Catherine L; Eckhoff, Philip A; Scott, Thomas W; Lindsay, Steven W; Hay, Simon I; Smith, David L

    2016-02-01

    Major gains have been made in reducing malaria transmission in many parts of the world, principally by scaling-up coverage with long-lasting insecticidal nets and indoor residual spraying. Historically, choice of vector control intervention has been largely guided by a parameter sensitivity analysis of George Macdonald's theory of vectorial capacity that suggested prioritizing methods that kill adult mosquitoes. While this advice has been highly successful for transmission suppression, there is a need to revisit these arguments as policymakers in certain areas consider which combinations of interventions are required to eliminate malaria. Using analytical solutions to updated equations for vectorial capacity we build on previous work to show that, while adult killing methods can be highly effective under many circumstances, other vector control methods are frequently required to fill effective coverage gaps. These can arise due to pre-existing or developing mosquito physiological and behavioral refractoriness but also due to additive changes in the relative importance of different vector species for transmission. Furthermore, the optimal combination of interventions will depend on the operational constraints and costs associated with reaching high coverage levels with each intervention. Reaching specific policy goals, such as elimination, in defined contexts requires increasingly non-generic advice from modelling. Our results emphasize the importance of measuring baseline epidemiology, intervention coverage, vector ecology and program operational constraints in predicting expected outcomes with different combinations of interventions. © The Author 2016. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.

  6. First-order exchange coefficient coupling for simulating surface water-groundwater interactions: Parameter sensitivity and consistency with a physics-based approach

    USGS Publications Warehouse

    Ebel, B.A.; Mirus, B.B.; Heppner, C.S.; VanderKwaak, J.E.; Loague, K.

    2009-01-01

    Distributed hydrologic models capable of simulating fully-coupled surface water and groundwater flow are increasingly used to examine problems in the hydrologic sciences. Several techniques are currently available to couple the surface and subsurface; the two most frequently employed approaches are first-order exchange coefficients (a.k.a., the surface conductance method) and enforced continuity of pressure and flux at the surface-subsurface boundary condition. The effort reported here examines the parameter sensitivity of simulated hydrologic response for the first-order exchange coefficients at a well-characterized field site using the fully coupled Integrated Hydrology Model (InHM). This investigation demonstrates that the first-order exchange coefficients can be selected such that the simulated hydrologic response is insensitive to the parameter choice, while simulation time is considerably reduced. Alternatively, the ability to choose a first-order exchange coefficient that intentionally decouples the surface and subsurface facilitates concept-development simulations to examine real-world situations where the surface-subsurface exchange is impaired. While the parameters comprising the first-order exchange coefficient cannot be directly estimated or measured, the insensitivity of the simulated flow system to these parameters (when chosen appropriately) combined with the ability to mimic actual physical processes suggests that the first-order exchange coefficient approach can be consistent with a physics-based framework. Copyright © 2009 John Wiley & Sons, Ltd.

  7. Identifying Watershed Regions Sensitive to Soil Erosion and Contributing to Lake Eutrophication--A Case Study in the Taihu Lake Basin (China).

    PubMed

    Lin, Chen; Ma, Ronghua; He, Bin

    2015-12-24

    Taihu Lake in China is suffering from severe eutrophication partly due to non-point pollution from the watershed. There is an increasing need to identify the regions within the watershed that most contribute to lake water degradation. The selection of appropriate temporal scales and lake indicators is important to identify sensitive watershed regions. This study selected three eutrophic lake areas, including Meiliang Bay (ML), Zhushan Bay (ZS), and the Western Coastal region (WC), as well as multiple buffer zones next to the lake boundary as the study sites. Soil erosion intensity was designated as a watershed indicator, and the lake algae area was designated as a lake quality indicator. The sensitive watershed region was identified based on the relationship between these two indicators among different lake divisions for a temporal sequence from 2000 to 2012. The results show that the relationship between soil erosion modulus and lake quality varied among different lake areas. Soil erosion from the two bay areas was more closely correlated with water quality than soil erosion from the WC region. This was most apparent at distances of 5 km to 10 km from the lake, where the r² was as high as 0.764. Results indicate that soil erosion could be used as an indicator for identifying key watershed protection areas. Different lake areas need to be considered separately due to differences in geographical features, land use, and the corresponding effects on lake water quality.

  8. Identifying Watershed Regions Sensitive to Soil Erosion and Contributing to Lake Eutrophication—A Case Study in the Taihu Lake Basin (China)

    PubMed Central

    Lin, Chen; Ma, Ronghua; He, Bin

    2015-01-01

    Taihu Lake in China is suffering from severe eutrophication partly due to non-point pollution from the watershed. There is an increasing need to identify the regions within the watershed that most contribute to lake water degradation. The selection of appropriate temporal scales and lake indicators is important to identify sensitive watershed regions. This study selected three eutrophic lake areas, including Meiliang Bay (ML), Zhushan Bay (ZS), and the Western Coastal region (WC), as well as multiple buffer zones next to the lake boundary as the study sites. Soil erosion intensity was designated as a watershed indicator, and the lake algae area was designated as a lake quality indicator. The sensitive watershed region was identified based on the relationship between these two indicators among different lake divisions for a temporal sequence from 2000 to 2012. The results show that the relationship between soil erosion modulus and lake quality varied among different lake areas. Soil erosion from the two bay areas was more closely correlated with water quality than soil erosion from the WC region. This was most apparent at distances of 5 km to 10 km from the lake, where the r2 was as high as 0.764. Results indicate that soil erosion could be used as an indicator for identifying key watershed protection areas. Different lake areas need to be considered separately due to differences in geographical features, land use, and the corresponding effects on lake water quality. PMID:26712772

  9. Quantitative phase-digital holographic microscopy: a new imaging modality to identify original cellular biomarkers of diseases

    NASA Astrophysics Data System (ADS)

    Marquet, P.; Rothenfusser, K.; Rappaz, B.; Depeursinge, C.; Jourdain, P.; Magistretti, P. J.

    2016-03-01

    Quantitative phase microscopy (QPM) has recently emerged as a powerful label-free technique in the field of living cell imaging, allowing cell structure and dynamics to be measured non-invasively with nanometric axial sensitivity. Since the phase retardation of a light wave transmitted through the observed cells, namely the quantitative phase signal (QPS), is sensitive to both cellular thickness and the intracellular refractive index related to the cellular content, its accurate analysis allows various cell parameters to be derived and specific cell processes to be monitored, which is very likely to identify new cell biomarkers. Specifically, quantitative phase-digital holographic microscopy (QP-DHM), thanks to its numerical flexibility facilitating parallelization and automation, represents an appealing imaging modality both to identify original cellular biomarkers of diseases and to explore the underlying pathophysiological processes.
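
    As a worked example of the quantities the QPS couples together, the standard transmission relation phase = (2*pi/lambda) * thickness * (n_cell - n_medium) can be evaluated for plausible, hypothetical values; none of the numbers below come from the paper.

        import math

        wavelength = 532e-9          # m, illumination wavelength (assumed)
        thickness = 5e-6             # m, cell height (hypothetical)
        n_cell, n_medium = 1.38, 1.335

        phase = 2 * math.pi / wavelength * thickness * (n_cell - n_medium)
        print(f"phase retardation: {phase:.2f} rad")   # about 2.7 rad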

  10. An automatic and effective parameter optimization method for model tuning

    NASA Astrophysics Data System (ADS)

    Zhang, T.; Li, L.; Lin, Y.; Xue, W.; Xie, F.; Xu, H.; Huang, X.

    2015-11-01

    Physical parameterizations in general circulation models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time-consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to comprehensive objective evaluation metrics. Different from the traditional optimization methods, two extra steps, one determining the model's sensitivity to the parameters and the other choosing the optimum initial value for those sensitive parameters, are introduced before the downhill simplex method. This new method reduces the number of parameters to be tuned and accelerates the convergence of the downhill simplex method. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  11. An approach to measure parameter sensitivity in watershed hydrological modelling

    EPA Science Inventory

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the...

  12. Sensitivity of NTCP parameter values against a change of dose calculation algorithm.

    PubMed

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-01

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.
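
    For context, one commonly used NTCP model of the kind referred to above is the Lyman-Kutcher-Burman (LKB) formulation; the sketch below assumes that model and uses illustrative DVH and parameter values, not the paper's fitted values.

        import numpy as np
        from scipy.stats import norm

        def lkb_ntcp(doses, volumes, td50, m, n):
            """Lyman-Kutcher-Burman NTCP from a differential DVH."""
            # Generalized equivalent uniform dose with volume parameter n.
            geud = float(np.sum(volumes * doses ** (1.0 / n))) ** n
            t = (geud - td50) / (m * td50)
            return norm.cdf(t)

        # Illustrative lung DVH and radiation-pneumonitis parameters.
        doses = np.array([5.0, 15.0, 25.0, 40.0])    # Gy, dose-bin values
        volumes = np.array([0.4, 0.3, 0.2, 0.1])     # fractional volumes (sum to 1)
        print(lkb_ntcp(doses, volumes, td50=30.0, m=0.35, n=1.0))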

  13. Two-Dimensional Modeling of Heat and Moisture Dynamics in Swedish Roads: Model Set up and Parameter Sensitivity

    NASA Astrophysics Data System (ADS)

    Rasul, H.; Wu, M.; Olofsson, B.

    2017-12-01

    Modelling moisture and heat changes in road layers is important for understanding road hydrology and for constructing and maintaining roads in a sustainable manner. In cold regions, the freezing/thawing process in the partially saturated road material makes the modeling task more complicated than a simple model of flow through porous media without consideration of freezing/thawing in the pores. This study presents a 2-D model simulation of a highway section that considers freezing/thawing and vapor changes. Partial differential equations (PDEs) are used in the formulation of the model. Parameters are optimized from modelling results based on measured data from a test station on the E18 highway near Stockholm. Impacts of phase change considerations in the modelling are assessed by comparing the modeled soil moisture with TDR-measured data. The results show that the model can be used to predict water and ice content in different layers of the road and in different seasons. Parameter sensitivities are analyzed by implementing a calibration strategy. In addition, the phase change consideration is evaluated in the modeling process by comparing the PDE model with another model that does not consider freezing/thawing in roads. The PDE model shows high potential for understanding the moisture dynamics in the road system.

  14. DIA-datasnooping and identifiability

    NASA Astrophysics Data System (ADS)

    Zaminpardaz, S.; Teunissen, P. J. G.

    2018-04-01

    In this contribution, we present and analyze datasnooping in the context of the DIA method. As the DIA method for the detection, identification and adaptation of mismodelling errors is concerned with estimation and testing, it is the combination of both that needs to be considered. This combination is rigorously captured by the DIA estimator. We discuss and analyze the DIA-datasnooping decision probabilities and the construction of the corresponding partitioning of misclosure space. We also investigate the circumstances under which two or more hypotheses are nonseparable in the identification step. By means of a theorem on the equivalence between the nonseparability of hypotheses and the inestimability of parameters, we demonstrate that one can forget about adapting the parameter vector for hypotheses that are nonseparable. However, as this concerns the complete vector and not necessarily functions of it, we also show that parameter functions may exist for which adaptation is still possible. It is shown what this adaptation looks like and how it changes the structure of the DIA estimator. To demonstrate the performance of the various elements of DIA-datasnooping, we apply the theory to some selected examples. We analyze how geometry changes in the measurement setup affect the testing procedure, by studying their partitioning of misclosure space, the decision probabilities and the minimal detectable and identifiable biases. The difference between these two minimal biases is highlighted by showing the difference between their corresponding contributing factors. We also show that if two alternative hypotheses, say Hi and Hj, are nonseparable, the testing procedure may have different levels of sensitivity to Hi-biases compared to the same Hj-biases.

  15. Progress on Reconstructed Human Skin Models for Allergy Research and Identifying Contact Sensitizers.

    PubMed

    Rodrigues Neves, Charlotte; Gibbs, Susan

    2018-06-23

    Contact with the skin is inevitable or desirable for daily life products such as cosmetics, hair dyes, perfumes, drugs, household products, and industrial and agricultural products. Whereas the majority of these products are harmless, a number can become metabolized and/or activate the immunological defense via innate and adaptive mechanisms, resulting in sensitization and allergic contact dermatitis upon subsequent exposures to the same substance. Therefore, strict safety (hazard) assessment of actives and ingredients in products and drugs applied to the skin is essential to determine I) whether the chemical is a potential sensitizer and, if so, II) what the safe concentration for human exposure is to prevent sensitization from occurring. Ex vivo skin is a valuable model for skin penetration studies, but due to logistical and viability limitations, the development of in vitro alternatives is required. The aim of this review is to give a clear overview of the organotypic in vitro skin models (reconstructed human epidermis, reconstructed human skin, immune competent skin models incorporating Langerhans Cells and T-cells, skin-on-chip) that are currently commercially available or which are being used in a laboratory research setting for hazard assessment of potential sensitizers and for investigating the mechanisms (sensitization key events 1-4) related to allergic contact dermatitis. The limitations of the models, their current applications, and their future potential in replacing animals in allergy-related science are discussed.

  16. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding which aims to reduce the computational cost of assessing the failure probability. Next a variance-based sensitivity analysis was studied for the ability to identify which input variable uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  17. A study of parameter identification

    NASA Technical Reports Server (NTRS)

    Herget, C. J.; Patterson, R. E., III

    1978-01-01

    A set of definitions for deterministic parameter identifiability was proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability was defined in terms of the existence of an estimation sequence for the unknown parameters which is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.

  18. Low sensitivity of the metabolic syndrome to identify adolescents with impaired glucose tolerance: an analysis of NHANES 1999-2010.

    PubMed

    DeBoer, Mark D; Gurka, Matthew J

    2014-04-23

    Impaired glucose tolerance (IGT) and metabolic syndrome (MetS) are two risk factors for Type 2 diabetes. The inter-relatedness of these factors among adolescents is unclear. We evaluated the sensitivity and specificity of MetS for identifying IGT in an unselected group of adolescents undergoing oral glucose tolerance tests (OGTT) in the National Health and Nutrition Examination Survey 1999-2010. We characterized IGT as a 2-hour glucose ≥140 mg/dL and MetS using ATP-III-based criteria and a continuous sex- and race/ethnicity-specific MetS Z-score at cut-offs of +1.0 and +0.75 standard deviations (SD) above the mean. Among 1513 adolescents, IGT was present in 4.8%, while ATP-III-MetS was present in 7.9%. MetS performed poorly in identifying adolescents with IGT with a sensitivity/specificity of 23.7%/92.9% for ATP-III-MetS, 23.6%/90.8% for the MetS Z-score at +1.0 SD and 35.8%/85.0% for the MetS Z-score at +0.75 SD. Sensitivity was higher (and specificity lower) but was still overall poor among overweight/obese adolescents: 44.7%/83.0% for ATP-III-MetS, 43.1%/77.1% for the MetS Z-score at +1.0 SD and 64.3%/64.3% for the MetS Z-score at +0.75 SD. This lack of overlap between MetS and IGT may indicate that assessment of MetS is not likely to be a good indicator of which adolescents to screen using OGTT. These data further underscore the importance of other potential contributors to IGT, including Type 1 diabetes and genetic causes of poor beta-cell function. Practitioners should keep these potential causes of IGT in mind, even when evaluating obese adolescents with IGT.
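
    A minimal sketch of how the reported sensitivity/specificity pairs are computed, using a small synthetic label set (IGT status from OGTT as the reference, MetS classification as the screening test); the numbers are illustrative, not NHANES data.

        import numpy as np

        # Synthetic binary labels: igt = impaired glucose tolerance (reference),
        # mets = metabolic syndrome classification evaluated as a screen.
        igt = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
        mets = np.array([1, 0, 0, 0, 1, 0, 0, 0, 0, 0])

        tp = np.sum((mets == 1) & (igt == 1))
        fn = np.sum((mets == 0) & (igt == 1))
        tn = np.sum((mets == 0) & (igt == 0))
        fp = np.sum((mets == 1) & (igt == 0))

        sensitivity = tp / (tp + fn)   # fraction of IGT adolescents flagged by MetS
        specificity = tn / (tn + fp)   # fraction of non-IGT adolescents not flagged
        print(sensitivity, specificity)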

  19. Sensitivity of NTCP parameter values against a change of dose calculation algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brink, Carsten; Berg, Martin; Nielsen, Morten

    2007-09-15

    Optimization of radiation treatment planning requires estimations of the normal tissue complication probability (NTCP). A number of models exist that estimate NTCP from a calculated dose distribution. Since different dose calculation algorithms use different approximations, the dose distributions predicted for a given treatment will in general depend on the algorithm. The purpose of this work is to test whether the optimal NTCP parameter values change significantly when the dose calculation algorithm is changed. The treatment plans for 17 breast cancer patients have retrospectively been recalculated with a collapsed cone algorithm (CC) to compare the NTCP estimates for radiation pneumonitis with those obtained from the clinically used pencil beam algorithm (PB). For the PB calculations the NTCP parameters were taken from previously published values for three different models. For the CC calculations the parameters were fitted to give the same NTCP as for the PB calculations. This paper demonstrates that significant shifts of the NTCP parameter values are observed for three models, comparable in magnitude to the uncertainties of the published parameter values. Thus, it is important to quote the applied dose calculation algorithm when reporting estimates of NTCP parameters in order to ensure correct use of the models.

  20. PROCEEDINGS OF THE INTERNATIONAL WORKSHOP ON UNCERTAINTY, SENSITIVITY, AND PARAMETER ESTIMATION FOR MULTIMEDIA ENVIRONMENTAL MODELING. EPA/600/R-04/117, NUREG/CP-0187, ERDC SR-04-2.

    EPA Science Inventory

    An International Workshop on Uncertainty, Sensitivity, and Parameter Estimation for Multimedia Environmental Modeling was held August 19-21, 2003, at the U.S. Nuclear Regulatory Commission Headquarters in Rockville, Maryland, USA. The workshop was organized and convened by the Fe...

  1. Stepwise sensitivity analysis from qualitative to quantitative: Application to the terrestrial hydrological modeling of a Conjunctive Surface-Subsurface Process (CSSP) land surface model

    NASA Astrophysics Data System (ADS)

    Gan, Yanjun; Liang, Xin-Zhong; Duan, Qingyun; Choi, Hyun Il; Dai, Yongjiu; Wu, Huan

    2015-06-01

    An uncertainty quantification framework was employed to examine the sensitivities of 24 model parameters from a newly developed Conjunctive Surface-Subsurface Process (CSSP) land surface model (LSM). The sensitivity analysis (SA) was performed over 18 representative watersheds in the contiguous United States to examine the influence of model parameters in the simulation of terrestrial hydrological processes. Two normalized metrics, relative bias (RB) and Nash-Sutcliffe efficiency (NSE), were adopted to assess the fit between simulated and observed streamflow discharge (SD) and evapotranspiration (ET) for a 14 year period. SA was conducted using a multiobjective two-stage approach, in which the first stage was a qualitative SA using the Latin Hypercube-based One-At-a-Time (LH-OAT) screening, and the second stage was a quantitative SA using the Multivariate Adaptive Regression Splines (MARS)-based Sobol' sensitivity indices. This approach combines the merits of qualitative and quantitative global SA methods, and is effective and efficient for understanding and simplifying large, complex system models. Ten of the 24 parameters were identified as important across different watersheds. The contribution of each parameter to the total response variance was then quantified by Sobol' sensitivity indices. Generally, parameter interactions contribute the most to the response variance of the CSSP, and only 5 out of 24 parameters dominate model behavior. Four photosynthetic and respiratory parameters are shown to be influential to ET, whereas reference depth for saturated hydraulic conductivity is the most influential parameter for SD in most watersheds. Parameter sensitivity patterns mainly depend on hydroclimatic regime, as well as vegetation type and soil texture. This article was corrected on 26 JUN 2015. See the end of the full text for details.
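
    A minimal sketch of the qualitative screening stage (Latin Hypercube base points, each perturbed one-at-a-time), assuming SciPy's quasi-Monte Carlo sampler and a placeholder skill metric; the parameter count matches the abstract, but everything else is an illustrative assumption.

        import numpy as np
        from scipy.stats import qmc

        n_params, n_base = 24, 10
        sampler = qmc.LatinHypercube(d=n_params, seed=0)
        base = sampler.random(n=n_base)          # base points in the unit hypercube

        def skill(x):
            # Placeholder for a normalized streamflow/ET skill metric (RB or NSE).
            return float(np.sum(np.sin(3.0 * x)))

        delta = 0.05
        effects = np.zeros(n_params)
        for x in base:
            y0 = skill(x)
            for j in range(n_params):
                xp = x.copy()
                xp[j] = min(xp[j] + delta, 1.0)
                effects[j] += abs(skill(xp) - y0) / delta
        effects /= n_base
        print(np.argsort(effects)[::-1][:10])    # candidate "important" parameters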

  2. Use of long term dermal sensitization followed by intratracheal challenge method to identify low-dose chemical-induced respiratory allergic responses in mice.

    PubMed

    Fukuyama, Tomoki; Ueda, Hideo; Hayashi, Koichi; Tajima, Yukari; Shuto, Yasufumi; Saito, Toru R; Harada, Takanori; Kosaka, Tadashi

    2008-10-01

    The inhalation of many types of chemicals, including pesticides, perfumes, and other low-molecular weight chemicals, is a leading cause of allergic respiratory diseases. We attempted to develop a new test protocol to detect environmental chemical-related respiratory hypersensitivity at low and weakly immunogenic doses. We used long-term dermal sensitization followed by a low-dose intratracheal challenge to evaluate sensitization by the well-known respiratory sensitizers trimellitic anhydride (TMA) and toluene diisocyanate (TDI) and the contact sensitizer 2,4-dinitrochlorobenzene (DNCB). After topically sensitizing BALB/c mice (9 times in 3 weeks) and challenging them intratracheally with TMA, TDI, or DNCB, we assayed differential cell counts and chemokine levels in bronchoalveolar lavage fluid (BALF); lymphocyte counts, surface antigen expression of B cells, and local cytokine production in lung-associated lymph nodes (LNs); and antigen-specific IgE levels in serum and BALF. TMA induced marked increases in antigen-specific IgE levels in both serum and BALF, proliferation of eosinophils and chemokines (MCP-1, eotaxin, and MIP-1beta) in BALF, and proliferation of Th2 cytokines (interleukin (IL)-4, IL-10, and IL-13) in restimulated LN cells. TDI induced marked increases in levels of cytokines (IL-4, IL-10, IL-13, and IFN-gamma) produced by restimulated LN cells. In contrast, DNCB treatment yielded, at most, small, nonsignificant increases in all parameters. Our protocol thus detected respiratory allergic responses to low-molecular weight chemicals and may be useful for detecting environmental chemical-related respiratory allergy.

  3. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  4. Predicting individual contrast sensitivity functions from acuity and letter contrast sensitivity measurements

    PubMed Central

    Thurman, Steven M.; Davey, Pinakin Gunvant; McCray, Kaydee Lynn; Paronian, Violeta; Seitz, Aaron R.

    2016-01-01

    Contrast sensitivity (CS) is widely used as a measure of visual function in both basic research and clinical evaluation. There is conflicting evidence on the extent to which measuring the full contrast sensitivity function (CSF) offers more functionally relevant information than a single measurement from an optotype CS test, such as the Pelli–Robson chart. Here we examine the relationship between functional CSF parameters and other measures of visual function, and establish a framework for predicting individual CSFs with effectively a zero-parameter model that shifts a standard-shaped template CSF horizontally and vertically according to independent measurements of high contrast acuity and letter CS, respectively. This method was evaluated for three different CSF tests: a chart test (CSV-1000), a computerized sine-wave test (M&S Sine Test), and a recently developed adaptive test (quick CSF). Subjects were 43 individuals with healthy vision or impairment too mild to be considered low vision (acuity range of −0.3 to 0.34 logMAR). While each test demands a slightly different normative template, results show that individual subject CSFs can be predicted with roughly the same precision as test–retest repeatability, confirming that individuals predominantly differ in terms of peak CS and peak spatial frequency. In fact, these parameters were sufficiently related to empirical measurements of acuity and letter CS to permit accurate estimation of the entire CSF of any individual with a deterministic model (zero free parameters). These results demonstrate that in many cases, measuring the full CSF may provide little additional information beyond letter acuity and contrast sensitivity. PMID:28006065
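
    A minimal sketch of the template-shifting idea is given below in Python. The log-parabola template and the mapping constants k_freq and k_sens are illustrative assumptions, not the normative templates fitted in the study; the point is only to show how a fixed-shape CSF can be shifted horizontally by an acuity measurement and vertically by a letter-CS measurement.

      import numpy as np

      def template_csf(freq, peak_sens, peak_freq, bandwidth_oct=2.0):
          """Log-parabola CSF template: log10 sensitivity falls off as a parabola
          in log2 spatial frequency around the peak."""
          return peak_sens * 10.0 ** (-((np.log2(freq) - np.log2(peak_freq)) ** 2)
                                      / (2.0 * (bandwidth_oct / 2.0) ** 2))

      def predict_csf(freq, acuity_logmar, letter_cs_log, k_freq=0.5, k_sens=1.0):
          """Zero-free-parameter prediction in the spirit of the paper: the template
          is shifted horizontally by acuity and vertically by letter contrast
          sensitivity.  k_freq and k_sens are hypothetical normative constants,
          not values from the study."""
          peak_freq = k_freq * (30.0 / 10.0 ** acuity_logmar)   # hypothetical mapping from acuity
          peak_sens = 10.0 ** (k_sens * letter_cs_log)          # hypothetical mapping from letter CS
          return template_csf(freq, peak_sens, peak_freq)

      freqs = np.array([0.5, 1, 2, 4, 8, 16])                   # cycles/degree
      print(predict_csf(freqs, acuity_logmar=0.0, letter_cs_log=1.8))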

  5. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.

  6. MOVES sensitivity study

    DOT National Transportation Integrated Search

    2012-01-01

    Purpose: To determine the ranking of important parameters and the overall sensitivity to values of variables in MOVES; to allow a greater understanding of the MOVES modeling process for users; continued support by FHWA to transportation modeling comm...

  7. Sensitivity Analysis Tailored to Constrain 21st Century Terrestrial Carbon-Uptake

    NASA Astrophysics Data System (ADS)

    Muller, S. J.; Gerber, S.

    2013-12-01

    The long-term fate of terrestrial carbon (C) in response to climate change remains a dominant source of uncertainty in Earth-system model projections. Increasing atmospheric CO2 could be mitigated by long-term net uptake of C, through processes such as increased plant productivity due to "CO2-fertilization". Conversely, atmospheric conditions could be exacerbated by long-term net release of C, through processes such as increased decomposition due to higher temperatures. This balance is an important area of study, and a major source of uncertainty in long-term (>year 2050) projections of planetary response to climate change. We present results from an innovative application of sensitivity analysis to LM3V, a dynamic global vegetation model (DGVM), intended to identify observed/observable variables that are useful for constraining long-term projections of C-uptake. We analyzed the sensitivity of cumulative C-uptake by 2100, as modeled by LM3V in response to IPCC AR4 scenario climate data (1860-2100), to perturbations in over 50 model parameters. We concurrently analyzed the sensitivity of over 100 observable model variables, during the extant record period (1970-2010), to the same parameter changes. By correlating the sensitivities of observable variables with the sensitivity of long-term C-uptake we identified model calibration variables that would also constrain long-term C-uptake projections. LM3V employs a coupled carbon-nitrogen cycle to account for N-limitation, and we find that N-related variables have an important role to play in constraining long-term C-uptake. This work has implications for prioritizing field campaigns to collect global data that can help reduce uncertainties in the long-term land-atmosphere C-balance. Though results of this study are specific to LM3V, the processes that characterize this model are not completely divorced from other DGVMs (or reality), and our approach provides valuable insights into how data can be leveraged to be better

  8. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity and interactions within these complex models, Global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) across various hydroclimatic conditions in Canada including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing calibration computational burden, and reducing prediction uncertainty.

  9. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the computed SCS-CN total net rainfall amount is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions so that the GA soil hydraulic parameters are expected to be insensitive toward the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetograph and synthetic rainfall time series are selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
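
    The first step of the mixed procedure, the SCS-CN estimate of total rainfall excess that subsequently constrains the Green-Ampt conductivity, can be sketched as follows (Python; standard textbook formula, with an illustrative storm rather than one of the observed events):

      def scs_cn_runoff(precip_mm, cn, lambda_ia=0.2):
          """SCS Curve Number direct-runoff depth (mm) for a storm total.
          S is the potential maximum retention; Ia = lambda_ia * S is the
          initial abstraction (0.2 is the classic default)."""
          s = 25400.0 / cn - 254.0          # mm
          ia = lambda_ia * s
          if precip_mm <= ia:
              return 0.0
          return (precip_mm - ia) ** 2 / (precip_mm - ia + s)

      # Example: 60 mm storm on a CN = 75 catchment
      q = scs_cn_runoff(60.0, 75.0)
      print(round(q, 1), "mm of rainfall excess")

    Calibrating the Green-Ampt saturated hydraulic conductivity so that its cumulative infiltration reproduces this storm total, and thereby distributing the excess in time, is the second step of CN4GA and is not shown here.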

  10. Multivariate modelling of prostate cancer combining magnetic resonance derived T2, diffusion, dynamic contrast-enhanced and spectroscopic parameters.

    PubMed

    Riches, S F; Payne, G S; Morgan, V A; Dearnaley, D; Morgan, S; Partridge, M; Livni, N; Ogden, C; deSouza, N M

    2015-05-01

    The objectives were to determine the optimal combination of MR parameters for discriminating tumour within the prostate using linear discriminant analysis (LDA) and to compare model accuracy with that of an experienced radiologist. Multiparametric MRI was acquired in 24 patients before prostatectomy. Tumour outlines from whole-mount histology, T2-defined peripheral zone (PZ), and central gland (CG) were superimposed onto slice-matched parametric maps. T2, Apparent Diffusion Coefficient, initial area under the gadolinium curve, vascular parameters (K(trans), Kep, Ve), and (choline+polyamines+creatine)/citrate were compared between tumour and non-tumour tissues. Receiver operating characteristic (ROC) curves determined sensitivity and specificity at spectroscopic voxel resolution and per lesion, and LDA determined the optimal multiparametric model for identifying tumours. Accuracy was compared with an expert observer. Tumours were significantly different from PZ and CG for all parameters (all p < 0.001). Area under the ROC curve for discriminating tumour from non-tumour was significantly greater (p < 0.001) for the multiparametric model than for individual parameters; at 90 % specificity, sensitivity was 41 % (MRSI voxel resolution) and 59 % per lesion. At this specificity, an expert observer achieved 28 % and 49 % sensitivity, respectively. The model was more accurate when parameters from all techniques were included and performed better than an expert observer evaluating these data. • The combined model increases diagnostic accuracy in prostate cancer compared with individual parameters • The optimal combined model includes parameters from diffusion, spectroscopy, perfusion, and anatomical MRI • The computed model improves tumour detection compared to an expert viewing parametric maps.
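
    As a generic illustration of this kind of multiparametric modelling (not the study's data or exact pipeline), the sketch below combines several parametric features with linear discriminant analysis and reads sensitivity at a fixed specificity off the ROC curve, using scikit-learn and synthetic per-voxel data:

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.metrics import roc_auc_score, roc_curve

      # Hypothetical per-voxel feature matrix: columns could stand in for T2, ADC,
      # initial area under the gadolinium curve, Ktrans, kep, ve and the
      # (choline+polyamines+creatine)/citrate ratio.
      rng = np.random.default_rng(0)
      X_tumour = rng.normal(loc=1.0, scale=1.0, size=(200, 7))
      X_normal = rng.normal(loc=0.0, scale=1.0, size=(400, 7))
      X = np.vstack([X_tumour, X_normal])
      y = np.r_[np.ones(200), np.zeros(400)]

      lda = LinearDiscriminantAnalysis().fit(X, y)
      scores = lda.decision_function(X)
      print("AUC:", roc_auc_score(y, scores))

      # Sensitivity at ~90% specificity (false-positive rate <= 0.10)
      fpr, tpr, _ = roc_curve(y, scores)
      print("Sensitivity at 90% specificity:", tpr[fpr <= 0.10].max())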

  11. Technetium-99m-labeled ceftizoxime loaded long-circulating and pH-sensitive liposomes used to identify osteomyelitis.

    PubMed

    Ferreira, Soraya Maria Zandim Maciel Dias; Domingos, Giselle Pires; Ferreira, Diego dos Santos; Rocha, Talita Guieiro Ribeiro; Serakides, Rogéria; de Faria Rezende, Cleuza Maria; Cardoso, Valbert Nascimento; Fernandes, Simone Odília Antunes; Oliveira, Mônica Cristina

    2012-07-15

    Osteomyelitis is an infectious disease located in the bone or bone marrow. Long-circulating and pH-sensitive liposomes containing a technetium-99m-labeled antibiotic, ceftizoxime, (SpHL-(99m)Tc-CF) were developed to identify osteomyelitis foci. Biodistribution studies and scintigraphic images of bone infection or non infection-bearing rats that had been treated with these liposomes were performed. A high accumulation in infectious foci and high values in the target-non target ratio could be observed. These results indicate the potential of SpHL-(99m)Tc-CF as a potential agent for the diagnosis of bone infections. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area. However, a large number of parameters must be fitted because field measurements for them are unavailable, so it is difficult to calibrate the model over many potentially uncertain parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms, which can be performed with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm on improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied in the San Joaquin Watershed in California, covering 19,704 km2, to calibrate the daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting - SUFI-2, Generalized Likelihood Uncertainty Estimation - GLUE, and Parameter Solution - ParaSol) were used with four different objective functions (coefficient of determination
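
    The objective functions themselves are simple to state; a minimal Python version of the Nash-Sutcliffe efficiency and percent bias commonly used in such calibrations is given below (standard formulas, illustrative flow values):

      import numpy as np

      def nse(obs, sim):
          """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means no better
          than predicting the mean of the observations."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def pbias(obs, sim):
          """Percent bias of simulated relative to observed streamflow."""
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 100.0 * np.sum(sim - obs) / np.sum(obs)

      obs = np.array([10.0, 12.0, 30.0, 22.0, 15.0])   # observed daily flow (illustrative)
      sim = np.array([11.0, 10.0, 27.0, 24.0, 14.0])   # simulated daily flow (illustrative)
      print(nse(obs, sim), pbias(obs, sim))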

  13. Switch of Sensitivity Dynamics Revealed with DyGloSA Toolbox for Dynamical Global Sensitivity Analysis as an Early Warning for System's Critical Transition

    PubMed Central

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as climate, the economy, and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation, and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC, and WALS. It includes parallelized versions of these functions, enabling a significant reduction in computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on 32- and 64-bit Windows and Linux systems. PMID:24367574

  14. Switch of sensitivity dynamics revealed with DyGloSA toolbox for dynamical global sensitivity analysis as an early warning for system's critical transition.

    PubMed

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems, such as climate, the economy, and ecosystems, it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions, including increased variance, autocorrelation, and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equation models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC, and WALS. It includes parallelized versions of these functions, enabling a significant reduction in computational time (up to 12-fold). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on 32- and 64-bit Windows and Linux systems.

  15. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.

  16. Sensitivity of low-energy incomplete fusion to various entrance-channel parameters

    NASA Astrophysics Data System (ADS)

    Kumar, Harish; Tali, Suhail A.; Afzal Ansari, M.; Singh, D.; Ali, Rahbar; Kumar, Kamal; Sathik, N. P. M.; Ali, Asif; Parashari, Siddharth; Dubey, R.; Bala, Indu; Kumar, R.; Singh, R. P.; Muralithar, S.

    2018-03-01

    The disentangling of incomplete fusion dependence on various entrance-channel parameters has been made from the forward recoil range distribution measurement for the 12C + 175Lu system at ≈ 88 MeV energy. It gives a direct measure of full and/or partial linear momentum transfer from the projectile to the target nucleus. The comparison of observed recoil ranges with theoretical ranges calculated using the code SRIM infers the production of evaporation residues via complete and/or incomplete fusion processes. Present results show that the incomplete fusion process contributes significantly to the production of αxn and 2αxn emission channels. The deduced incomplete fusion probability (F_ICF) is compared with that obtained for systems available in the literature. An interesting behavior of F_ICF with Z_P Z_T is observed in the reinvestigation of incomplete fusion dependence on the Coulomb factor (Z_P Z_T), contrary to recent observations. The present results based on Z_P Z_T are found to be in good agreement with recent observations of our group. A larger F_ICF value is found for 12C-induced reactions than for 13C, although both have the same Z_P Z_T. A nonsystematic behavior of the incomplete fusion process with the target deformation parameter (β_2) is observed, which is further correlated with a new parameter (Z_P Z_T · β_2). The projectile α-Q-value is found to explain more clearly the discrepancy observed in incomplete fusion dependence on the parameters Z_P Z_T and Z_P Z_T · β_2. It may be pointed out that no single entrance-channel parameter (mass asymmetry, Z_P Z_T, β_2, or projectile α-Q-value) is able to explain the incomplete fusion process completely.

  17. Estimating Sobol Sensitivity Indices Using Correlations

    EPA Science Inventory

    Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...

  18. Modulators of sensitivity and resistance to inhibition of PI3K identified in a pharmacogenomic screen of the NCI-60 human tumor cell line collection.

    PubMed

    Kwei, Kevin A; Baker, Joffre B; Pelham, Robert J

    2012-01-01

    The phosphoinositide 3-kinase (PI3K) signaling pathway is significantly altered in a wide variety of human cancers, driving cancer cell growth and survival. Consequently, a large number of PI3K inhibitors are now in clinical development. To begin to improve the selection of patients for treatment with PI3K inhibitors and to identify de novo determinants of patient response, we sought to identify and characterize candidate genomic and phosphoproteomic biomarkers predictive of response to the selective PI3K inhibitor, GDC-0941, using the NCI-60 human tumor cell line collection. In this study, sixty diverse tumor cell lines were exposed to GDC-0941 and classified by GI(50) value as sensitive or resistant. The most sensitive and resistant cell lines were analyzed for their baseline levels of gene expression and phosphorylation of key signaling nodes. Phosphorylation or activation status of both the PI3K-Akt signaling axis and PARP were correlated with in vitro response to GDC-0941. A gene expression signature associated with in vitro sensitivity to GDC-0941 was also identified. Furthermore, in vitro siRNA-mediated silencing of two genes in this signature, OGT and DDN, validated their role in modulating sensitivity to GDC-0941 in numerous cell lines and begins to provide biological insights into their role as chemosensitizers. These candidate biomarkers will offer useful tools to begin a more thorough understanding of determinants of patient response to PI3K inhibitors and merit exploration in human cancer patients treated with PI3K inhibitors.

  19. Modulators of Sensitivity and Resistance to Inhibition of PI3K Identified in a Pharmacogenomic Screen of the NCI-60 Human Tumor Cell Line Collection

    PubMed Central

    Kwei, Kevin A.; Baker, Joffre B.; Pelham, Robert J.

    2012-01-01

    The phosphoinositide 3-kinase (PI3K) signaling pathway is significantly altered in a wide variety of human cancers, driving cancer cell growth and survival. Consequently, a large number of PI3K inhibitors are now in clinical development. To begin to improve the selection of patients for treatment with PI3K inhibitors and to identify de novo determinants of patient response, we sought to identify and characterize candidate genomic and phosphoproteomic biomarkers predictive of response to the selective PI3K inhibitor, GDC-0941, using the NCI-60 human tumor cell line collection. In this study, sixty diverse tumor cell lines were exposed to GDC-0941 and classified by GI50 value as sensitive or resistant. The most sensitive and resistant cell lines were analyzed for their baseline levels of gene expression and phosphorylation of key signaling nodes. Phosphorylation or activation status of both the PI3K-Akt signaling axis and PARP were correlated with in vitro response to GDC-0941. A gene expression signature associated with in vitro sensitivity to GDC-0941 was also identified. Furthermore, in vitro siRNA-mediated silencing of two genes in this signature, OGT and DDN, validated their role in modulating sensitivity to GDC-0941 in numerous cell lines and begins to provide biological insights into their role as chemosensitizers. These candidate biomarkers will offer useful tools to begin a more thorough understanding of determinants of patient response to PI3K inhibitors and merit exploration in human cancer patients treated with PI3K inhibitors. PMID:23029544

  20. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    NASA Astrophysics Data System (ADS)

    Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.

    2018-01-01

    The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
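
    The sampling-plus-emulation workflow can be sketched compactly. The code below is a heavily simplified, hypothetical stand-in (a SciPy Latin Hypercube design, a random-forest emulator, and a crude binned estimate of first-order variance-based indices); it is not the VIC emulator or the exact variance decomposition used in the study.

      import numpy as np
      from scipy.stats import qmc
      from sklearn.ensemble import RandomForestRegressor

      # Hypothetical stand-in for an expensive hydrologic model with 5 parameters
      def expensive_model(x):
          return 3.0 * x[:, 0] + np.sin(4.0 * x[:, 1]) + 0.2 * x[:, 2] * x[:, 3]

      d = 5
      train = qmc.LatinHypercube(d=d, seed=1).random(256)     # space-filling design
      emulator = RandomForestRegressor(n_estimators=300, random_state=1)
      emulator.fit(train, expensive_model(train))

      # Crude first-order (main-effect) indices from the cheap emulator:
      # Var_xi[ E(Y | xi) ] / Var(Y), estimated by conditioning on bins of xi.
      big = qmc.LatinHypercube(d=d, seed=2).random(20000)
      y = emulator.predict(big)
      var_y = y.var()
      for i in range(d):
          bins = np.digitize(big[:, i], np.linspace(0, 1, 21))
          cond_means = np.array([y[bins == b].mean() for b in np.unique(bins)])
          print(f"parameter {i}: S1 ~ {cond_means.var() / var_y:.2f}")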

  1. Selection of regularization parameter for l1-regularized damage detection

    NASA Astrophysics Data System (ADS)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
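
    The discrepancy-principle strategy can be illustrated with a small sketch: scan candidate regularization weights, solve the l1-regularized problem for each, and keep the weight whose residual variance best matches the measurement-noise variance. The code below uses scikit-learn's Lasso on a toy sparse-damage problem with an assumed known sensitivity matrix and noise level; the scalings and stopping rules in the paper may differ.

      import numpy as np
      from sklearn.linear_model import Lasso

      def pick_lambda_discrepancy(A, b, noise_var, lambdas):
          """Return the regularization weight whose residual variance is closest
          to the (known or estimated) measurement-noise variance."""
          best, best_gap = None, np.inf
          for lam in lambdas:
              x = Lasso(alpha=lam, fit_intercept=False, max_iter=50000).fit(A, b).coef_
              resid_var = np.mean((b - A @ x) ** 2)
              gap = abs(resid_var - noise_var)
              if gap < best_gap:
                  best, best_gap = lam, gap
          return best

      # Toy damage-detection-like problem: sparse parameter change, noisy response
      rng = np.random.default_rng(0)
      A = rng.normal(size=(80, 40))                  # sensitivity matrix (assumed known)
      x_true = np.zeros(40); x_true[[3, 17]] = -0.2  # two "damaged" elements
      sigma = 0.05
      b = A @ x_true + rng.normal(scale=sigma, size=80)
      print("selected lambda:", pick_lambda_discrepancy(A, b, sigma ** 2, np.logspace(-3, 0, 30)))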

  2. Determination of dose distributions and parameter sensitivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Napier, B.A.; Farris, W.T.; Simpson, J.C.

    1992-12-01

    A series of scoping calculations has been undertaken to evaluate the absolute and relative contribution of different radionuclides and exposure pathways to doses that may have been received by individuals living in the vicinity of the Hanford site. This scoping calculation (Calculation 005) examined the contributions of numerous parameters to the uncertainty distribution of doses calculated for environmental exposures and accumulation in foods. This study builds on the work initiated in the first scoping study of iodine in cow's milk and the third scoping study, which added additional pathways. Addressed in this calculation were the contributions to thyroid dose of infants from (1) air submersion and groundshine external dose, (2) inhalation, (3) ingestion of soil by humans, (4) ingestion of leafy vegetables, (5) ingestion of other vegetables and fruits, (6) ingestion of meat, (7) ingestion of eggs, and (8) ingestion of cow's milk from Feeding Regime 1 as described in Calculation 001.

  3. The structural identifiability and parameter estimation of a multispecies model for the transmission of mastitis in dairy cows with postmilking teat disinfection.

    PubMed

    White, L J; Evans, N D; Lam, T J G M; Schukken, Y H; Medley, G F; Godfrey, K R; Chappell, M J

    2002-01-01

    A mathematical model for the transmission of two interacting classes of mastitis causing bacterial pathogens in a herd of dairy cows is presented and applied to a specific data set. The data were derived from a field trial of a specific measure used in the control of these pathogens, where half the individuals were subjected to the control and in the others the treatment was discontinued. The resultant mathematical model (eight non-linear simultaneous ordinary differential equations) therefore incorporates heterogeneity in the host as well as the infectious agent and consequently the effects of control are intrinsic in the model structure. A structural identifiability analysis of the model is presented demonstrating that the scope of the novel method used allows application to high order non-linear systems. The results of a simultaneous estimation of six unknown system parameters are presented. Previous work has only estimated a subset of these either simultaneously or individually. Therefore not only are new estimates provided for the parameters relating to the transmission and control of the classes of pathogens under study, but also information about the relationships between them. We exploit the close link between mathematical modelling, structural identifiability analysis, and parameter estimation to obtain biological insights into the system modelled.

  4. Microarray analysis identifies keratin loci as sensitive biomarkers for thyroid hormone disruption in the salamander Ambystoma mexicanum.

    PubMed

    Page, Robert B; Monaghan, James R; Samuels, Amy K; Smith, Jeramiah J; Beachy, Christopher K; Voss, S Randal

    2007-02-01

    Ambystomatid salamanders offer several advantages for endocrine disruption research, including genomic and bioinformatics resources, an accessible laboratory model (Ambystoma mexicanum), and natural lineages that are broadly distributed among North American habitats. We used microarray analysis to measure the relative abundance of transcripts isolated from A. mexicanum epidermis (skin) after exogenous application of thyroid hormone (TH). Only one gene had a >2-fold change in transcript abundance after 2 days of TH treatment. However, hundreds of genes showed significantly different transcript levels at days 12 and 28 in comparison to day 0. A list of 123 TH-responsive genes was identified using statistical, BLAST, and fold-level criteria. Cluster analysis identified two groups of genes with similar transcription patterns: up-regulated versus down-regulated. Most notably, several keratins exhibited dramatic (1000-fold) increases or decreases in transcript abundance. Keratin gene expression changes coincided with morphological remodeling of epithelial tissues. This suggests that keratin loci can be developed as sensitive biomarkers to assay temporal disruptions of larval-to-adult gene expression programs. Our study has identified the first collection of loci that are regulated during TH-induced metamorphosis in a salamander, thus setting the stage for future investigations of TH disruption in the Mexican axolotl and other salamanders of the genus Ambystoma.

  5. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary

  6. Using Multistate Reweighting to Rapidly and Efficiently Explore Molecular Simulation Parameters Space for Nonbonded Interactions.

    PubMed

    Paliwal, Himanshu; Shirts, Michael R

    2013-11-12

    Multistate reweighting methods such as the multistate Bennett acceptance ratio (MBAR) can predict free energies and expectation values of thermodynamic observables at poorly sampled or unsampled thermodynamic states using simulations performed at only a few sampled states combined with single-point energy reevaluations of these samples at the unsampled states. In this study, we demonstrate the power of this general reweighting formalism by exploring the effect of simulation parameters controlling Coulomb and Lennard-Jones cutoffs on free energy calculations and other observables. Using multistate reweighting, we can quickly identify, with very high sensitivity, the computationally least expensive nonbonded parameters required to obtain a specified accuracy in observables compared to the answer obtained using an expensive "gold standard" set of parameters. We specifically examine free energy estimates of three molecular transformations in a benchmark molecular set as well as the enthalpy of vaporization of TIP3P. The results demonstrate the power of this multistate reweighting approach for measuring changes in free energy differences or other estimators with respect to simulation or model parameters with very high precision and/or very low computational effort. The results also help to identify which simulation parameters affect free energy calculations and provide guidance to determine which simulation parameters are both appropriate and computationally efficient in general.
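
    The underlying reweighting identity is easy to state for the single-reference special case; MBAR generalizes it to pool samples from several sampled states with statistically optimal weights. The NumPy sketch below shows only that single-state exponential reweighting (a hedged stand-in, not pymbar's MBAR), where the re-evaluated energies u_new would come from single-point recomputation of the stored configurations with the new cutoff settings.

      import numpy as np

      def reweighted_average(obs, u_ref, u_new, kT=1.0):
          """Estimate <obs> under a perturbed potential from samples generated with
          a reference potential, by importance reweighting:
              <O>_new = <O exp(-(u_new - u_ref)/kT)>_ref / <exp(-(u_new - u_ref)/kT)>_ref
          MBAR generalizes this single-reference formula to several sampled states."""
          du = (np.asarray(u_new) - np.asarray(u_ref)) / kT
          w = np.exp(-(du - du.min()))          # shift for numerical stability
          w /= w.sum()
          return float(np.sum(w * np.asarray(obs)))

      # Toy example: samples from one "cutoff setting", energies re-evaluated at another
      rng = np.random.default_rng(3)
      obs = rng.normal(10.0, 1.0, size=5000)      # e.g. an enthalpy-like observable
      u_ref = rng.normal(0.0, 1.0, size=5000)     # energies with the sampled cutoff
      u_new = u_ref + 0.02 * obs                  # re-evaluated energies at the new cutoff
      print(reweighted_average(obs, u_ref, u_new))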

  7. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
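
    As a point of comparison for the adjoint approach, the sketch below computes parameter sensitivities of a scalar quantity of interest by brute-force finite differences around an ODE solve (SciPy), using a toy SIR-type model rather than the Keeling & Gilligan plague model; the adjoint method delivers the same derivative information from a single additional (adjoint) integration.

      import numpy as np
      from scipy.integrate import solve_ivp

      def qoi(params, t_end=50.0):
          """Scalar quantity of interest from a toy SIR model: infected fraction
          at t_end.  params = (beta, gamma); this is a stand-in, not the
          Keeling & Gilligan plague model."""
          beta, gamma = params
          rhs = lambda t, y: [-beta * y[0] * y[1],
                              beta * y[0] * y[1] - gamma * y[1],
                              gamma * y[1]]
          sol = solve_ivp(rhs, (0.0, t_end), [0.99, 0.01, 0.0], rtol=1e-8, atol=1e-10)
          return sol.y[1, -1]

      def fd_gradient(f, p, h=1e-5):
          """Forward-difference gradient of the quantity of interest."""
          p = np.asarray(p, float)
          f0 = f(p)
          g = np.zeros_like(p)
          for i in range(p.size):
              pp = p.copy(); pp[i] += h
              g[i] = (f(pp) - f0) / h
          return g

      print(fd_gradient(qoi, [0.4, 0.1]))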

  8. Can nonstandard interactions jeopardize the hierarchy sensitivity of DUNE?

    NASA Astrophysics Data System (ADS)

    Deepthi, K. N.; Goswami, Srubabati; Nath, Newton

    2017-10-01

    We study the effect of nonstandard interactions (NSIs) on the propagation of neutrinos through the Earth's matter and how it affects the hierarchy sensitivity of the DUNE experiment. We emphasize the special case when the diagonal NSI parameter ε_ee = -1, nullifying the standard matter effect. We show that if, in addition, CP violation is maximal, then this gives rise to an exact intrinsic hierarchy degeneracy in the appearance channel, irrespective of the baseline and energy. Introduction of the off-diagonal NSI parameter, ε_eτ, shifts the position of this degeneracy to a different value of ε_ee. Moreover, the unknown magnitude and phases of the off-diagonal NSI parameters can give rise to additional degeneracies. Overall, given the current model-independent limits on NSI parameters, the hierarchy sensitivity of DUNE can get seriously impacted. However, a more precise knowledge of the NSI parameters, especially ε_ee, can give rise to an improved sensitivity. Alternatively, if an NSI exists in nature, and still DUNE shows hierarchy sensitivity, certain ranges of the NSI parameters can be excluded. Additionally, we briefly discuss the implications of ε_ee = -1 (in the Earth) on the Mikheyev-Smirnov-Wolfenstein effect in the Sun.

  9. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system.

    PubMed

    Spanoudaki, V C; Lau, F W Y; Vandenbroucke, A; Levin, C S

    2010-11-01

    This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. For the energies of interest around the photopeak (450-700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100-200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum influence on system performance.

  10. Physical effects of mechanical design parameters on photon sensitivity and spatial resolution performance of a breast-dedicated PET system

    PubMed Central

    Spanoudaki, V. C.; Lau, F. W. Y.; Vandenbroucke, A.; Levin, C. S.

    2010-01-01

    Purpose: This study aims to address design considerations of a high resolution, high sensitivity positron emission tomography scanner dedicated to breast imaging. Methods: The methodology uses a detailed Monte Carlo model of the system structures to obtain a quantitative evaluation of several performance parameters. Special focus was given to the effect of dense mechanical structures designed to provide mechanical robustness and thermal regulation to the minuscule and temperature sensitive detectors. Results: For the energies of interest around the photopeak (450–700 keV energy window), the simulation results predict a 6.5% reduction in the single photon detection efficiency and a 12.5% reduction in the coincidence photon detection efficiency in the case that the mechanical structures are interspersed between the detectors. However for lower energies, a substantial increase in the number of detected events (approximately 14% and 7% for singles at a 100–200 keV energy window and coincidences at a lower energy threshold of 100 keV, respectively) was observed with the presence of these structures due to backscatter. The number of photon events that involve multiple interactions in various crystal elements is also affected by the presence of the structures. For photon events involving multiple interactions among various crystal elements, the coincidence photon sensitivity is reduced by as much as 20% for a point source at the center of the field of view. There is no observable effect on the intrinsic and the reconstructed spatial resolution and spatial resolution uniformity. Conclusions: Mechanical structures can have a considerable effect on system sensitivity, especially for systems processing multi-interaction photon events. This effect, however, does not impact the spatial resolution. Various mechanical structure designs are currently under evaluation in order to achieve optimum trade-off between temperature stability, accurate detector positioning, and minimum

  11. Identifying effective connectivity parameters in simulated fMRI: a direct comparison of switching linear dynamic system, stochastic dynamic causal, and multivariate autoregressive models

    PubMed Central

    Smith, Jason F.; Chen, Kewei; Pillai, Ajay S.; Horwitz, Barry

    2013-01-01

    The number and variety of connectivity estimation methods are likely to continue to grow over the coming decade. Comparisons between methods are necessary to prune this growth to only the most accurate and robust methods. However, the nature of connectivity is elusive, with different methods potentially attempting to identify different aspects of connectivity. Commonalities of connectivity definitions across methods, upon which to base direct comparisons, can be difficult to derive. Here, we explicitly define "effective connectivity" using a common set of observation and state equations that are appropriate for three connectivity methods: dynamic causal modeling (DCM), multivariate autoregressive modeling (MAR), and switching linear dynamic systems for fMRI (sLDSf). In addition, while deriving this set, we show how many other popular functional and effective connectivity methods are actually simplifications of these equations. We discuss implications of these connections for the practice of using one method to simulate data for another method. After mathematically connecting the three effective connectivity methods, simulated fMRI data with varying numbers of regions and task conditions are generated from the common equation. These simulated data explicitly contain the type of connectivity that the three models were intended to identify. Each method is applied to the simulated data sets and the accuracy of parameter identification is analyzed. All methods perform above chance levels at identifying correct connectivity parameters. The sLDSf method was superior in parameter estimation accuracy to both DCM and MAR for all types of comparisons. PMID:23717258

  12. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua E.; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    2016-09-01

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.
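
    The deterministic rate-equation core of such a model is a stiff ODE system, for which implicit integrators are the standard tool. The sketch below (SciPy, with illustrative rate constants only and without the stochastic ensemble layer described in the paper) shows a fast reversible step coupled to a slow conversion being integrated with an implicit Radau method:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy stiff kinetics: a fast binding/unbinding step coupled to a slow
      # conversion.  Rate constants are illustrative, not fitted values from the paper.
      k_fast, k_back, k_slow = 1.0e4, 1.0e4, 1.0

      def rhs(t, y):
          a, b, c = y
          return [-k_fast * a + k_back * b,
                  k_fast * a - k_back * b - k_slow * b,
                  k_slow * b]

      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0],
                      method="Radau",          # implicit solver suited to stiff systems
                      rtol=1e-8, atol=1e-10)
      print(sol.y[:, -1])                       # final concentrations of a, b, c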

  13. Rate-equation modelling and ensemble approach to extraction of parameters for viral infection-induced cell apoptosis and necrosis

    NASA Astrophysics Data System (ADS)

    Domanskyi, Sergii; Schilling, Joshua; Gorshkov, Vyacheslav; Libert, Sergiy; Privman, Vladimir

    We develop a theoretical approach that uses physiochemical kinetics modelling to describe cell population dynamics upon progression of viral infection in cell culture, which results in cell apoptosis (programmed cell death) and necrosis (direct cell death). Several model parameters necessary for computer simulation were determined by reviewing and analyzing available published experimental data. By comparing experimental data to computer modelling results, we identify the parameters that are the most sensitive to the measured system properties and allow for the best data fitting. Our model allows extraction of parameters from experimental data and also has predictive power. Using the model we describe interesting time-dependent quantities that were not directly measured in the experiment and identify correlations among the fitted parameter values. Numerical simulation of viral infection progression is done by a rate-equation approach resulting in a system of "stiff" equations, which are solved by using a novel variant of the stochastic ensemble modelling approach. The latter was originally developed for coupled chemical reactions.

  14. Metabolism of plasma cholesterol and lipoprotein parameters are related to a higher degree of insulin sensitivity in high HDL-C healthy normal weight subjects.

    PubMed

    Leança, Camila C; Nunes, Valéria S; Panzoldo, Natália B; Zago, Vanessa S; Parra, Eliane S; Cazita, Patrícia M; Jauhiainen, Matti; Passarelli, Marisa; Nakandakare, Edna R; de Faria, Eliana C; Quintão, Eder C R

    2013-11-22

    We investigated whether plasma high density lipoprotein-cholesterol (HDL-C) concentration interferes simultaneously with whole-body cholesterol metabolism and insulin sensitivity in normal weight healthy adult subjects. We measured the activities of several plasma components that are critically influenced by insulin and that control lipoprotein metabolism in subjects with low and high HDL-C concentrations. These parameters included cholesteryl ester transfer protein (CETP), phospholipid transfer protein (PLTP), lecithin cholesterol acyl transferase (LCAT), post-heparin lipoprotein lipase (LPL), hepatic lipase (HL), pre-beta-1 HDL, and plasma sterol markers of cholesterol synthesis and intestinal absorption. In the high-HDL-C group, we found lower plasma concentrations of triglycerides, alanine aminotransferase, insulin, HOMA-IR index, activities of LCAT and HL compared with the low HDL-C group; additionally, we found higher activity of LPL and pre-beta-1 HDL concentration in the high-HDL-C group. There were no differences in the plasma CETP and PLTP activities. These findings indicate that in healthy hyperalphalipoproteinemia subjects, several parameters that control the metabolism of plasma cholesterol and lipoproteins are related to a higher degree of insulin sensitivity.

  15. Sensitivity of viscosity Arrhenius parameters to polarity of liquids

    NASA Astrophysics Data System (ADS)

    Kacem, R. B. H.; Alzamel, N. O.; Ouerfelli, N.

    2017-09-01

    Several empirical and semi-empirical equations have been proposed in the literature to estimate the liquid viscosity upon temperature. In this context, this paper aims to study the effect of polarity of liquids on the modeling of the viscosity-temperature dependence, considering particularly the Arrhenius type equations. To achieve this purpose, the solvents are classified into three groups: nonpolar, borderline polar and polar solvents. Based on adequate statistical tests, we found that there is strong evidence that the polarity of solvents affects significantly the distribution of the Arrhenius-type equation parameters and consequently the modeling of the viscosity-temperature dependence. Thus, specific estimated values of parameters for each group of liquids are proposed in this paper. In addition, the comparison of the accuracy of approximation with and without classification of liquids, using the Wilcoxon signed-rank test, shows a significant discrepancy of the borderline polar solvents. For that, we suggested in this paper new specific coefficient values of the simplified Arrhenius-type equation for better estimation accuracy. This result is important given that the accuracy in the estimation of the viscosity-temperature dependence may affect considerably the design and the optimization of several industrial processes.
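
    The Arrhenius-type viscosity model referred to above reduces to a straight-line fit once logarithms are taken: ln(eta) = ln(A) + Ea/(R*T). A minimal Python sketch with illustrative (not measured) data is given below:

      import numpy as np

      R = 8.314  # J mol^-1 K^-1

      def fit_arrhenius_viscosity(T_kelvin, eta_mPas):
          """Least-squares fit of ln(eta) = ln(A) + Ea/(R*T); returns the
          pre-exponential factor A (same units as eta) and the activation
          energy Ea in kJ/mol."""
          slope, intercept = np.polyfit(1.0 / np.asarray(T_kelvin, float),
                                        np.log(np.asarray(eta_mPas, float)), 1)
          return np.exp(intercept), slope * R / 1000.0

      # Illustrative viscosities of a liquid between 288 and 328 K
      T = np.array([288.15, 298.15, 308.15, 318.15, 328.15])
      eta = np.array([1.40, 1.00, 0.75, 0.58, 0.47])
      A, Ea = fit_arrhenius_viscosity(T, eta)
      print(f"A = {A:.3e} mPa s, Ea = {Ea:.1f} kJ/mol")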

  16. Challenges in identifying sites climatically matched to the native ranges of animal invaders.

    PubMed

    Rodda, Gordon H; Jarnevich, Catherine S; Reed, Robert N

    2011-02-09

    Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  17. Challenges in Identifying Sites Climatically Matched to the Native Ranges of Animal Invaders

    PubMed Central

    Rodda, Gordon H.; Jarnevich, Catherine S.; Reed, Robert N.

    2011-01-01

    Background: Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings: Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance: When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  18. Challenges in identifying sites climatically matched to the native ranges of animal invaders

    USGS Publications Warehouse

    Rodda, G.H.; Jarnevich, C.S.; Reed, R.N.

    2011-01-01

    Background: Species distribution models are often used to characterize a species' native range climate, so as to identify sites elsewhere in the world that may be climatically similar and therefore at risk of invasion by the species. This endeavor provoked intense public controversy over recent attempts to model areas at risk of invasion by the Indian Python (Python molurus). We evaluated a number of MaxEnt models on this species to assess MaxEnt's utility for vertebrate climate matching. Methodology/Principal Findings: Overall, we found MaxEnt models to be very sensitive to modeling choices and selection of input localities and background regions. As used, MaxEnt invoked minimal protections against data dredging, multi-collinearity of explanatory axes, and overfitting. As used, MaxEnt endeavored to identify a single ideal climate, whereas different climatic considerations may determine range boundaries in different parts of the native range. MaxEnt was extremely sensitive to both the choice of background locations for the python, and to selection of presence points: inclusion of just four erroneous localities was responsible for Pyron et al.'s conclusion that no additional portions of the U.S. mainland were at risk of python invasion. When used with default settings, MaxEnt overfit the realized climate space, identifying models with about 60 parameters, about five times the number of parameters justifiable when optimized on the basis of Akaike's Information Criterion. Conclusions/Significance: When used with default settings, MaxEnt may not be an appropriate vehicle for identifying all sites at risk of colonization. Model instability and dearth of protections against overfitting, multi-collinearity, and data dredging may combine with a failure to distinguish fundamental from realized climate envelopes to produce models of limited utility. A priori identification of biologically realistic model structure, combined with computational protections against these statistical problems, may produce more robust models of invasion risk.

  19. TSUNAMI Primer: A Primer for Sensitivity/Uncertainty Calculations with SCALE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don; Bowman, Stephen M

    2009-01-01

    This primer presents examples in the application of the SCALE/TSUNAMI tools to generate k{sub eff} sensitivity data for one- and three-dimensional models using TSUNAMI-1D and -3D and to examine uncertainties in the computed k{sub eff} values due to uncertainties in the cross-section data used in their calculation. The proper use of unit cell data and the need for confirming the appropriate selection of input parameters through direct perturbations are described. The uses of sensitivity and uncertainty data to identify and rank potential sources of computational bias in an application system, and of the TSUNAMI tools for assessment of system similarity using sensitivity and uncertainty criteria, are demonstrated. Uses of these criteria in trending analyses to assess computational biases, bias uncertainties, and gap analyses are also described. Additionally, an application of the data adjustment tool TSURFER is provided, including identification of specific details of sources of computational bias.
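    A minimal sketch of the "sandwich rule" that underlies this kind of sensitivity/uncertainty propagation: with a vector of relative k-eff sensitivities S and a relative cross-section covariance matrix C, the relative variance of k-eff is S C S^T. The numbers below are invented for illustration and are not TSUNAMI output.

    ```python
    # Hedged sketch of first-order uncertainty propagation for k_eff.
    import numpy as np

    S = np.array([0.35, -0.12, 0.05])          # relative sensitivities (dk/k per dσ/σ)
    C = np.array([[0.0025, 0.0005, 0.0000],    # relative covariance matrix, (dσ/σ)^2
                  [0.0005, 0.0100, 0.0010],
                  [0.0000, 0.0010, 0.0040]])

    rel_var = S @ C @ S                        # (Δk/k)^2
    print(f"Δk/k ≈ {np.sqrt(rel_var) * 100:.2f} %")
    ```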

  20. Dimethylsulfide model calibration and parametric sensitivity analysis for the Greenland Sea

    NASA Astrophysics Data System (ADS)

    Qu, Bo; Gabric, Albert J.; Zeng, Meifang; Xi, Jiaojiao; Jiang, Limei; Zhao, Li

    2017-09-01

    Sea-to-air fluxes of marine biogenic aerosols have the potential to modify cloud microphysics and regional radiative budgets, and thus moderate Earth's warming. Polar regions play a critical role in the evolution of global climate. In this work, we use a well-established biogeochemical model to simulate the DMS flux from the Greenland Sea (20°W-10°E and 70°N-80°N) for the period 2003-2004. Parameter sensitivity analysis is employed to identify the most sensitive parameters in the model. A genetic algorithm (GA) technique is used for DMS model parameter calibration. Data from phase 5 of the Coupled Model Intercomparison Project (CMIP5) are used to drive the DMS model under 4 × CO2 conditions. DMS flux under quadrupled CO2 levels increases more than 300% compared with late 20th century levels (1 × CO2). Reasons for the increase in DMS flux include changes in the ocean state-namely an increase in sea surface temperature (SST) and loss of sea ice-and an increase in DMS transfer velocity, especially in spring and summer. Such a large increase in DMS flux could slow the rate of warming in the Arctic via radiative budget changes associated with DMS-derived aerosols.
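    The calibration step described above can be sketched generically: an evolutionary optimizer searches parameter bounds to minimize the misfit between a model and observations. The snippet uses SciPy's differential evolution as a stand-in for the paper's genetic algorithm, with a toy seasonal "DMS model" and synthetic observations; none of it reproduces the study's setup.

    ```python
    # Sketch of evolutionary parameter calibration; model and data are toys.
    import numpy as np
    from scipy.optimize import differential_evolution

    t = np.linspace(0, 12, 25)                          # months
    obs = 2.0 + 1.5 * np.sin(2 * np.pi * t / 12)        # synthetic observed DMS

    def dms_model(params, t):
        baseline, amplitude = params
        return baseline + amplitude * np.sin(2 * np.pi * t / 12)

    def cost(params):
        return np.sum((dms_model(params, t) - obs) ** 2)  # sum of squared errors

    result = differential_evolution(cost, bounds=[(0.0, 5.0), (0.0, 5.0)], seed=0)
    print("calibrated parameters:", result.x)
    ```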

  1. Reliable change, sensitivity, and specificity of a multidimensional concussion assessment battery: implications for caution in clinical practice.

    PubMed

    Register-Mihalik, Johna K; Guskiewicz, Kevin M; Mihalik, Jason P; Schmidt, Julianne D; Kerr, Zachary Y; McCrea, Michael A

    2013-01-01

    To provide reliable change confidence intervals for common clinical concussion measures using a healthy sample of collegiate athletes and to apply these reliable change parameters to a sample of concussed collegiate athletes. Two independent samples were included in the study and evaluated on common clinical measures of concussion. The healthy sample included male, collegiate football student-athletes (n = 38) assessed at 2 time points. The concussed sample included college-aged student-athletes (n = 132) evaluated before and after a concussion. Outcome measures included symptom severity scores, Automated Neuropsychological Assessment Metrics throughput scores, and Sensory Organization Test composite scores. Application of the reliable change parameters suggests that a small percentage of concussed participants were impaired on each measure. We identified a low sensitivity of the entire battery (all measures combined) of 50% but high specificity of 96%. Clinicians should be trained in understanding clinical concussion measures and should be aware of evidence suggesting the multifaceted battery is more sensitive than any single measure. Clinicians should be cautioned that sensitivity to balance and neurocognitive impairments was low for each individual measure. Applying the confidence intervals to our injured sample suggests that these measures do not adequately identify postconcussion impairments when used in isolation.
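    A minimal sketch of the reliable-change logic referred to above: a post-injury change is flagged as impairment only if it exceeds the band of change expected from test-retest measurement error in healthy controls. The standard deviation, test-retest reliability and observed change below are hypothetical.

    ```python
    # Reliable-change sketch with hypothetical numbers.
    import math

    def reliable_change_interval(sd_baseline, test_retest_r, z=1.96):
        """Return the +/- band around zero change attributable to measurement error."""
        sem = sd_baseline * math.sqrt(1.0 - test_retest_r)
        se_diff = math.sqrt(2.0) * sem
        return z * se_diff

    band = reliable_change_interval(sd_baseline=8.0, test_retest_r=0.70)
    observed_change = -15.0          # post-concussion minus baseline score (hypothetical)
    impaired = abs(observed_change) > band
    print(f"reliable-change band = ±{band:.1f}; impaired = {impaired}")
    ```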

  2. Multi-level emulation of a volcanic ash transport and dispersion model to quantify sensitivity to uncertain parameters

    NASA Astrophysics Data System (ADS)

    Harvey, Natalie J.; Huntley, Nathan; Dacre, Helen F.; Goldstein, Michael; Thomson, David; Webster, Helen

    2018-01-01

    ensemble of simulations. The use of an emulator also identifies the input and internal parameters that do not contribute significantly to simulator uncertainty. Finally, the analysis highlights that the faster, less accurate, configuration of NAME can, on its own, provide useful information for the problem of predicting average column load over large areas.

  3. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitive the conclusions (output) are to assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism; we refer to models carrying this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need simple and interpretable statistical quantities to assess the sensitivity models and support evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each assumed missing-data mechanism by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is a plausibility evaluation system for each sensitivity parameter, which selects plausible values and rejects unlikely ones, instead of considering all proposed values of the sensitivity parameters as in conventional sensitivity analysis. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
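    A schematic sketch of the core idea, on synthetic data: simulate datasets under candidate values of a sensitivity parameter and score each candidate by nearest-neighbour distances to the observed sample, retaining the values that reproduce the data most closely. This is a loose stand-in for illustration, not the authors' estimator.

    ```python
    # Nearest-neighbour comparison of simulated vs observed data (schematic).
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(0)
    observed = rng.normal(loc=0.5, scale=1.0, size=(200, 1))

    def knn_score(sensitivity_param, k=3):
        """Mean distance from each observed point to its k-th simulated neighbour."""
        simulated = rng.normal(loc=sensitivity_param, scale=1.0, size=(200, 1))
        tree = cKDTree(simulated)
        dist, _ = tree.query(observed, k=k)
        return dist[:, -1].mean()

    for delta in [0.0, 0.5, 1.0, 2.0]:       # candidate sensitivity-parameter values
        print(f"delta = {delta}: mean kNN distance = {knn_score(delta):.3f}")
    # Candidates yielding small distances are retained as plausible; large ones rejected.
    ```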

  4. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analyses of hydraulic drive units have limited accuracy and reference value, because their mathematical models are relatively simple, changes in load and in the initial displacement of the piston are ignored, and experimental verification is often lacking. To address these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. The sensitivity-function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and the change rules are analyzed. Then the sensitivity

  5. Parameter dimensionality reduction of a conceptual model for streamflow prediction in Canadian, snowmelt dominated ungauged basins

    NASA Astrophysics Data System (ADS)

    Arsenault, Richard; Poissant, Dominique; Brissette, François

    2015-11-01

    This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization mode. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
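    The ranking-then-fixing workflow described above can be sketched with the SALib package (assumed available) and a toy four-parameter response standing in for HSAMI; parameters with near-zero total-order Sobol' indices become candidates for fixing.

    ```python
    # Hedged illustration of variance-based ranking followed by parameter fixing.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 4,
        "names": ["melt_rate", "soil_cap", "recession", "et_coeff"],   # hypothetical names
        "bounds": [[0.0, 10.0], [50.0, 400.0], [0.01, 0.2], [0.5, 1.5]],
    }

    def toy_model(x):
        # Stand-in hydrological response, dominated by the first two inputs.
        return 3.0 * x[:, 0] + 0.02 * x[:, 1] + 0.1 * x[:, 2] * x[:, 3]

    X = saltelli.sample(problem, 1024)
    Y = toy_model(X)
    Si = sobol.analyze(problem, Y)

    for name, st in sorted(zip(problem["names"], Si["ST"]), key=lambda p: -p[1]):
        print(f"{name}: total-order index = {st:.3f}")
    # Parameters with total-order indices near zero are candidates for fixing.
    ```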

  6. Sensitivity Testing of the NSTAR Ion Thruster

    NASA Technical Reports Server (NTRS)

    Sengupta, Anita; Anderson, John; Brophy, John

    2007-01-01

    During the Extended Life Test (ELT) of the DS1 flight-spare ion thruster, the engine was subjected to sensitivity testing in order to characterize the macroscopic dependence of discharge chamber behavior on a ±3% variation in main flow, cathode flow and beam current, and on a ±5% variation in beam and accelerator voltage, for the minimum- (TH0), half- (TH8) and full-power (TH15) throttle levels. For each power level investigated, 16 high/low operating conditions were chosen to vary the flows, beam current, and grid voltages in a matrix that mapped out the entire parameter space. The matrix of data generated was used to determine the partial derivative, or sensitivity, of the dependent parameters--discharge voltage, discharge current, discharge loss, double-to-single-ion current ratio, and neutralizer-keeper voltage--with respect to the variation in the independent parameters--main flow, cathode flow, beam current, and beam voltage. The sensitivities of each dependent parameter with respect to each independent parameter were determined using a least-squares fit routine. Variation in these sensitivities with thruster runtime was recorded over the duration of the ELT, to determine whether discharge performance changed with thruster wear. Several key findings were ascertained from the sensitivity testing. Discharge operation is most sensitive to changes in cathode flow and, to a lesser degree, main flow. The data also confirm that for the NSTAR configuration plasma production is limited by primary electron input due to the fixed neutral population. Key sensitivities, along with their change with thruster wear (operating time), will be presented. In addition, double-ion content measurements with an ExB probe will also be presented to illustrate the sensitivity of beam ion production and content to the discharge chamber operating parameters.

  7. The Accuracy of Eyelid Movement Parameters for Drowsiness Detection

    PubMed Central

    Wilkinson, Vanessa E.; Jackson, Melinda L.; Westlake, Justine; Stevens, Bronwyn; Barnes, Maree; Swann, Philip; Rajaratnam, Shantha M. W.; Howard, Mark E.

    2013-01-01

    Study Objectives: Drowsiness is a major risk factor for motor vehicle and occupational accidents. Real-time objective indicators of drowsiness could potentially identify drowsy individuals with the goal of intervening before an accident occurs. Several ocular measures are promising objective indicators of drowsiness; however, there is a lack of studies evaluating their accuracy for detecting behavioral impairment due to drowsiness in real time. Methods: In this study, eye movement parameters were measured during vigilance tasks following restricted sleep and in a rested state (n = 33 participants) at three testing points (n = 71 data points) to compare ocular measures to a gold standard measure of drowsiness (OSLER). The utility of these parameters for detecting drowsiness-related errors was evaluated using receiver operating characteristic curves (ROC) (adjusted by clustering for participant) and identification of optimal cutoff levels for identifying frequent drowsiness-related errors (4 missed signals in a minute using OSLER). Their accuracy was tested for detecting increasing frequencies of behavioral lapses on a different task (psychomotor vigilance task [PVT]). Results: Ocular variables which measured the average duration of eyelid closure (inter-event duration [IED]) and the ratio of the amplitude to velocity of eyelid closure were reliable indicators of frequent errors (area under the curve for ROC of 0.73 to 0.83, p < 0.05). IED produced a sensitivity and specificity of 71% and 88% for detecting ≥ 3 lapses (PVT) in a minute and 100% and 86% for ≥ 5 lapses. A composite measure of several eye movement characteristics (Johns Drowsiness Scale) provided sensitivities of 77% and 100% for detecting 3 and ≥ 5 lapses in a minute, with specificities of 85% and 83%, respectively. Conclusions: Ocular measures, particularly those measuring the average duration of episodes of eye closure are promising real-time indicators of drowsiness. Citation: Wilkinson VE
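    A short sketch of the ROC-based cutoff selection used for measures like inter-event duration, on synthetic scores rather than the study's data: compute the ROC curve, pick the cutoff that maximizes Youden's J, and report the resulting sensitivity and specificity.

    ```python
    # ROC cutoff selection sketch with synthetic "eyelid closure" scores.
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    rng = np.random.default_rng(1)
    # 1 = minute containing frequent drowsiness-related errors, 0 = alert minute
    y_true = np.concatenate([np.ones(60), np.zeros(140)])
    scores = np.concatenate([rng.normal(0.45, 0.10, 60),    # drowsy minutes
                             rng.normal(0.30, 0.08, 140)])  # alert minutes

    fpr, tpr, thresholds = roc_curve(y_true, scores)
    best = np.argmax(tpr - fpr)                             # Youden's J
    print(f"AUC = {roc_auc_score(y_true, scores):.2f}")
    print(f"cutoff = {thresholds[best]:.3f}, "
          f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
    ```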

  8. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background: A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not take this case into account for the dynamic log gains. Results: We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used for the computation of the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time
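    The idea of integrating the model and its parameter sensitivities together can be sketched on a toy decay reaction dy/dt = -k*y, whose forward sensitivity s = dy/dk obeys ds/dt = -y - k*s; both are solved simultaneously below. This illustrates the general approach only, not the authors' adaptive step-size algorithm.

    ```python
    # Toy forward-sensitivity integration: model state and dy/dk solved together.
    import numpy as np
    from scipy.integrate import solve_ivp

    k = 0.8            # rate constant (the parameter of interest)

    def rhs(t, z):
        y, s = z                      # state y and sensitivity s = dy/dk
        dy = -k * y
        ds = -y - k * s               # d/dt (dy/dk) = (∂f/∂y) s + ∂f/∂k
        return [dy, ds]

    sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], dense_output=True)
    t = np.linspace(0, 5, 6)
    y, s = sol.sol(t)
    print("y(t)    :", np.round(y, 4))
    print("dy/dk(t):", np.round(s, 4))
    ```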

  9. Understanding identifiability as a crucial step in uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Jakeman, A. J.; Guillaume, J. H. A.; Hill, M. C.; Seo, L.

    2016-12-01

    The topic of identifiability analysis offers concepts and approaches to identify why unique model parameter values cannot be identified, and can suggest possible responses that either increase uniqueness or help to understand the effect of non-uniqueness on predictions. Identifiability analysis typically involves evaluation of the model equations and the parameter estimation process. Non-identifiability can have a number of undesirable effects. In terms of model parameters these effects include: parameters not being estimated uniquely even with ideal data; wildly different values being returned for different initialisations of a parameter optimisation algorithm; and parameters not being physically meaningful in a model attempting to represent a process. This presentation illustrates some of the drastic consequences of ignoring model identifiability analysis. It argues for a more cogent framework and use of identifiability analysis as a way of understanding model limitations and systematically learning about sources of uncertainty and their importance. The presentation specifically distinguishes between five sources of parameter non-uniqueness (and hence uncertainty) within the modelling process, pragmatically capturing key distinctions within existing identifiability literature. It enumerates many of the various approaches discussed in the literature. Admittedly, improving identifiability is often non-trivial. It requires thorough understanding of the cause of non-identifiability, and the time, knowledge and resources to collect or select new data, modify model structures or objective functions, or improve conditioning. But ignoring these problems is not a viable solution. Even simple approaches such as fixing parameter values or naively using a different model structure may have significant impacts on results which are too often overlooked because identifiability analysis is neglected.

  10. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  11. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    identifying sources of uncertainty affecting relevant reaction pathways are usually addressed by resorting to Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected by mostly only first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the reaction rates including the vinyl radical, C2H3, can drastically reduce the uncertainty of predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate between its own uncertainty bounds. Therefore this study demonstrates that the uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.

  12. TU-AB-BRA-04: Quantitative Radiomics: Sensitivity of PET Textural Features to Image Acquisition and Reconstruction Parameters Implies the Need for Standards

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nyflot, MJ; Yang, F; Byrd, D

    Purpose: Despite increased use of heterogeneity metrics for PET imaging, standards for metrics such as textural features have yet to be developed. We evaluated the quantitative variability caused by image acquisition and reconstruction parameters on PET textural features. Methods: PET images of the NEMA IQ phantom were simulated with realistic image acquisition noise. 35 features based on intensity histograms (IH), co-occurrence matrices (COM), neighborhood-difference matrices (NDM), and zone-size matrices (ZSM) were evaluated within lesions (13, 17, 22, 28, 33 mm diameter). Variability in metrics across 50 independent images was evaluated as percent difference from mean for three phantom girths (850, 1030, 1200 mm) and two OSEM reconstructions (2 iterations, 28 subsets, 5 mm FWHM filtration vs 6 iterations, 28 subsets, 8.6 mm FWHM filtration). Also, patient sample size to detect a clinical effect of 30% with Bonferroni-corrected α=0.001 and 95% power was estimated. Results: As a class, NDM features demonstrated greatest sensitivity in means (5–50% difference for medium girth and reconstruction comparisons and 10–100% for large girth comparisons). Some IH features (standard deviation, energy, entropy) had variability below 10% for all sensitivity studies, while others (kurtosis, skewness) had variability above 30%. COM and ZSM features had complex sensitivities; correlation, energy, entropy (COM) and zone percentage, short-zone emphasis, zone-size non-uniformity (ZSM) had variability less than 5% while other metrics had differences up to 30%. Trends were similar for sample size estimation; for example, coarseness, contrast, and strength required 12, 38, and 52 patients to detect a 30% effect for the small girth case but 38, 88, and 128 patients in the large girth case. Conclusion: The sensitivity of PET textural features to image acquisition and reconstruction parameters is large and feature-dependent. Standards are needed to ensure that prospective
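    The sample-size estimates quoted above follow from a standard power calculation; the sketch below applies a two-sample normal approximation at Bonferroni-corrected alpha = 0.001 and 95% power, with hypothetical coefficients of variation for each feature rather than the measured ones.

    ```python
    # Back-of-envelope sample-size sketch; CV values are hypothetical.
    from scipy.stats import norm

    def n_per_group(cv, effect=0.30, alpha=0.001, power=0.95):
        """Two-sided two-sample normal approximation; cv = relative SD of the feature."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return 2 * (z * cv / effect) ** 2

    for feature, cv in {"coarseness": 0.15, "contrast": 0.30, "strength": 0.45}.items():
        print(f"{feature}: ~{n_per_group(cv):.0f} patients per group")
    # Features with larger between-image variability demand larger cohorts.
    ```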

  13. On Theoretical Limits of Dynamic Model Updating Using a Sensitivity-Based Approach

    NASA Astrophysics Data System (ADS)

    GOLA, M. M.; SOMÀ, A.; BOTTO, D.

    2001-07-01

    The present work deals with the determination of the newly discovered conditions necessary for model updating with the eigensensitivity approach. The treatment concerns the maximum number of identifiable parameters with regard to the structure of the eigenvector derivatives. A mathematical demonstration is based on the evaluation of the rank of the least-squares matrix and produces the algebraic limiting conditions. Numerical application to a lumped-parameter structure is employed to validate the mathematical limits, taking into account different subsets of mode shapes. The demonstration is extended to the calculation of the eigenvector derivatives with both the Fox and Kapoor, and Nelson methods. Ill-conditioning of the least-squares sensitivity matrix is revealed through the covariance jump.
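    The limiting conditions rest on the rank of the least-squares sensitivity matrix; a small numerical check of that rank and of the associated conditioning is sketched below on a random matrix with one deliberately dependent column (purely illustrative, not the paper's eigensensitivity matrix).

    ```python
    # Rank / conditioning check of a least-squares sensitivity matrix (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    n_residuals, n_params = 12, 8
    S = rng.normal(size=(n_residuals, n_params))
    S[:, 7] = 0.5 * S[:, 0] + 0.5 * S[:, 1]      # make one column linearly dependent

    print("rank =", np.linalg.matrix_rank(S), "of", n_params)
    print("condition number = %.2e" % np.linalg.cond(S))
    # A rank deficit (or a huge condition number) means more parameters are being
    # updated than the available eigendata can actually identify.
    ```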

  14. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent-fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  15. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    DOT National Transportation Integrated Search

    2012-01-01

    Overview of presentation: evaluation parameters; EPA's sensitivity analysis; comparison to baseline case; MOVES sensitivity run specification; MOVES sensitivity input parameters; results; uses of study.

  16. Mid arm circumference (MAC) and body mass index (BMI)--the two important auxologic parameters in neonates.

    PubMed

    Nair, R Bindu; Elizabeth, K E; Geetha, S; Varghese, Sarath

    2006-10-01

    Even though birth weight is the most sensitive predictor of health and outcome, accurate weighing and proper recording are not done in most developing countries. Most neonates lose 10% of body weight soon after birth, and when such babies subsequently come for medical care, it becomes difficult to know whether the baby was low birth weight (LBW) at birth, which complicates prediction of the outcome. Among the many surrogate auxologic parameters to identify LBW babies, mid arm circumference (MAC) was found to be the most useful and simplest. At a cut-off of 9 cm, with a sensitivity of 92% and a specificity of 90.5% for identifying LBW, MAC is recommended as an alternative measurement. Ponderal index is measured in the neonatal period to identify growth retardation. Body mass index (BMI) is a very useful index in children and adults to identify obesity/chronic energy deficiency (CED). Tracking of BMI from the neonatal period to adulthood is recommended to plan interventions and predict outcomes. The mean BMI observed in the present study was 12.86 kg/m2, close to the expected value of 13.

  17. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
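    As a point of reference for the finite-difference comparison mentioned above, the sketch below perturbs a plant parameter, re-solves the Riccati equation, and differences the resulting optimal gains; the two-state plant, its parameter, and the weights are a made-up example, not the paper's aeroservoelastic aircraft model.

    ```python
    # Finite-difference sensitivity of an LQR gain to a plant parameter (toy plant).
    import numpy as np
    from scipy.linalg import solve_continuous_are

    def lqr_gain(omega):
        """Optimal state-feedback gain for a lightly damped mode of frequency omega."""
        A = np.array([[0.0, 1.0], [-omega**2, -0.05]])
        B = np.array([[0.0], [1.0]])
        Q = np.eye(2)
        R = np.array([[1.0]])
        P = solve_continuous_are(A, B, Q, R)
        return np.linalg.solve(R, B.T @ P)          # K = R^-1 B^T P

    omega0, d = 2.0, 1e-4
    dK_domega = (lqr_gain(omega0 + d) - lqr_gain(omega0 - d)) / (2 * d)
    print("dK/domega ≈", np.round(dK_domega, 4))
    ```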

  18. Chicken lines divergently selected for antibody responses to sheep red blood cells show line-specific differences in sensitivity to immunomodulation by diet. Part I: Humoral parameters.

    PubMed

    Adriaansen-Tennekes, R; de Vries Reilingh, G; Nieuwland, M G B; Parmentier, H K; Savelkoul, H F J

    2009-09-01

    Individual differences in nutrient sensitivity have been suggested to be related with differences in stress sensitivity. Here we used layer hens divergently selected for high and low specific antibody responses to SRBC (i.e., low line hens and high line hens), reflecting a genetically based differential immune competence. The parental line of these hens was randomly bred as the control line and was used as well. Recently, we showed that these selection lines differ in their stress reactivity; the low line birds show a higher hypothalamic-pituitary-adrenal (HPA) axis reactivity. To examine maternal effects and neonatal nutritional exposure on nutrient sensitivity, we studied 2 subsequent generations. This also created the opportunity to examine egg production in these birds. The 3 lines were fed 2 different nutritionally complete layer feeds for a period of 22 wk in the first generation. The second generation was fed from hatch with the experimental diets. At several time intervals, parameters reflecting humoral immunity were determined such as specific antibody to Newcastle disease and infectious bursal disease vaccines; levels of natural antibodies binding lipopolysaccharide, lipoteichoic acid, and keyhole limpet hemocyanin; and classical and alternative complement activity. The most pronounced dietary-induced effects were found in the low line birds of the first generation: specific antibody titers to Newcastle disease vaccine were significantly elevated by 1 of the 2 diets. In the second generation, significant differences were found in lipoteichoic acid natural antibodies of the control and low line hens. At the end of the observation period of egg parameters, a significant difference in egg weight was found in birds of the high line. Our results suggest that nutritional differences have immunomodulatory effects on innate and adaptive humoral immune parameters in birds with high HPA axis reactivity and affect egg production in birds with low HPA axis reactivity.

  19. Parameter Uncertainty on AGCM-simulated Tropical Cyclones

    NASA Astrophysics Data System (ADS)

    He, F.

    2015-12-01

    This work studies parameter uncertainty in tropical cyclone (TC) simulations in Atmospheric General Circulation Models (AGCMs) using the Reed-Jablonowski TC test case, illustrated with the Community Atmosphere Model (CAM). It examines the impact of 24 parameters across the physical parameterization schemes that represent convection, turbulence, precipitation and cloud processes in AGCMs. The one-at-a-time (OAT) sensitivity analysis method first quantifies their relative importance for TC simulations and identifies the key parameters for six TC characteristics: intensity, precipitation, longwave cloud radiative forcing (LWCF), shortwave cloud radiative forcing (SWCF), cloud liquid water path (LWP) and ice water path (IWP). Then, 8 physical parameters are chosen and perturbed using the Latin-Hypercube Sampling (LHS) method. The comparison between the OAT ensemble and the LHS ensemble shows that the simulated TC intensity is mainly affected by the parcel fractional mass entrainment rate in the Zhang-McFarlane (ZM) deep convection scheme. The nonlinear interactive effect among different physical parameters is negligible for simulated TC intensity. In contrast, this nonlinear interactive effect plays a significant role in the other simulated tropical cyclone characteristics (precipitation, LWCF, SWCF, LWP and IWP) and greatly enlarges their simulated uncertainties. The statistical emulator Extended Multivariate Adaptive Regression Splines (EMARS) is applied to characterize the response functions for the nonlinear effect. Last, we find that the intensity uncertainty caused by physical parameters is comparable in magnitude to the uncertainty caused by model structure (e.g. grid) and initial conditions (e.g. sea surface temperature, atmospheric moisture). These findings suggest the importance of using the perturbed physics ensemble (PPE) method to revisit tropical cyclone prediction under climate change scenarios.
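    The two sampling strategies mentioned above can be sketched side by side: one-at-a-time perturbations about a default value, and a Latin-Hypercube sample over the joint parameter ranges. The parameter names and bounds below are invented placeholders, not CAM's actual convection parameters.

    ```python
    # OAT vs Latin-Hypercube sampling over a small, hypothetical parameter space.
    import numpy as np
    from scipy.stats import qmc

    names = ["entrainment_rate", "autoconversion", "cloud_fraction_rh"]
    lower = np.array([0.5e-3, 1.0e-4, 0.80])
    upper = np.array([2.0e-3, 5.0e-4, 0.95])
    default = 0.5 * (lower + upper)

    # OAT: move one parameter to each bound while holding the rest at default.
    oat = []
    for i in range(len(names)):
        for bound in (lower, upper):
            sample = default.copy()
            sample[i] = bound[i]
            oat.append(sample)
    oat = np.array(oat)

    # LHS: space-filling joint sample that can expose parameter interactions.
    lhs = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(n=64), lower, upper)
    print(oat.shape, lhs.shape)     # (6, 3) and (64, 3)
    ```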

  20. Effects of correlated parameters and uncertainty in electronic-structure-based chemical kinetic modelling

    NASA Astrophysics Data System (ADS)

    Sutton, Jonathan E.; Guo, Wei; Katsoulakis, Markos A.; Vlachos, Dionisios G.

    2016-04-01

    Kinetic models based on first principles are becoming commonplace in heterogeneous catalysis because of their ability to interpret experimental data, identify the rate-controlling step, guide experiments and predict novel materials. To overcome the tremendous computational cost of estimating parameters of complex networks on metal catalysts, approximate quantum mechanical calculations are employed that render models potentially inaccurate. Here, by introducing correlative global sensitivity analysis and uncertainty quantification, we show that neglecting correlations in the energies of species and reactions can lead to an incorrect identification of influential parameters and key reaction intermediates and reactions. We rationalize why models often underpredict reaction rates and show that, despite the uncertainty being large, the method can, in conjunction with experimental data, identify influential missing reaction pathways and provide insights into the catalyst active site and the kinetic reliability of a model. The method is demonstrated in ethanol steam reforming for hydrogen production for fuel cells.