Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised by errors introduced at each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained by modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the size of the AM models, i.e. the models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need to modify some of the standard building processes, particularly the segmentation algorithms.
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings focused primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
1992-02-20
SENSIT, MUSIG, COMSEN is a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2001-01-01
The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, effects of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.
Sensitivity analysis of thermodynamic calculations
NASA Astrophysics Data System (ADS)
Irwin, C. L.; Obrien, T. J.
Iterative solution methods and sensitivity analysis for mathematical models of chemical equilibrium are formally similar. For models solved by a Newton-type iterative scheme, such as the NASA-Lewis CEC code or the R-Gibbs unit of ASPEN, it is shown that extensive sensitivity information is available for approximately the cost of one additional Newton iteration. All matrices and vectors required for implementation of first- and second-order sensitivity analysis in the CEC code are given in an appendix. A simple problem for which an analytical solution is possible is presented to illustrate and verify the computer calculations.
[Structural sensitivity analysis].
Carrera-Hueso, F J; Ramón-Barrios, A
2011-05-01
The aim of this study was to perform a structural sensitivity analysis of a decision model and to identify its advantages and limitations. A previously published model of dinoprostone was modified, taking two scenarios into account: eliminating postpartum hemorrhages and including both hemorrhages and uterine hyperstimulation among the adverse effects. The result of the structural sensitivity analysis shows the robustness of the underlying model and confirmed the initial results: the intrauterine device is more cost-effective than intracervical dinoprostone gel. Structural sensitivity analyses should be congruent with the situation studied and clinically validated. Although uncertainty may be only slightly reduced, these analyses provide information and add greater validity and reliability to the model.
RESRAD parameter sensitivity analysis
Cheng, J.J.; Yu, C.; Zielen, A.J.
1991-08-01
Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
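The eigenvector construction behind such matrix-model sensitivity and elasticity analyses is standard and can be sketched in a few lines of Python. This is a generic illustration, not code from the paper; the example matrix is made up.

```python
import numpy as np

def lambda_sensitivity(A):
    """Sensitivity and elasticity matrices of the dominant eigenvalue
    (the finite rate of increase, lambda) of a projection matrix A,
    using the standard construction s_ij = v_i * w_j / <v, w>."""
    vals, W = np.linalg.eig(A)
    k = np.argmax(vals.real)
    lam = vals[k].real
    w = W[:, k].real                 # right eigenvector: stable stage structure
    vals_t, V = np.linalg.eig(A.T)
    kt = np.argmax(vals_t.real)
    v = V[:, kt].real                # left eigenvector: reproductive values
    S = np.outer(v, w) / (v @ w)     # sensitivities: d(lambda)/d(a_ij)
    E = (A / lam) * S                # elasticities: proportional sensitivities
    return lam, S, E

# Hypothetical two-stage projection matrix (juveniles, adults)
A = np.array([[0.0, 1.5],
              [0.5, 0.4]])
lam, S, E = lambda_sensitivity(A)
```

Because λ is homogeneous of degree one in the matrix entries, the elasticities always sum to one, which is exactly the scale-free property (and its pitfalls) that the abstract discusses.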
LISA Telescope Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)
2002-01-01
The Laser Interferometer Space Antenna (LISA) for the detection of gravitational waves is a very long baseline interferometer which will measure changes in the length of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with such large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes maximum likelihood estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the likelihood ratio test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
Sensitivity analysis in computational aerodynamics
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1984-01-01
Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.
Nitrogen Addition Enhances Drought Sensitivity of Young Deciduous Tree Species.
Dziedek, Christoph; Härdtle, Werner; von Oheimb, Goddert; Fichtner, Andreas
2016-01-01
Understanding how trees respond to global change drivers is central to predicting changes in forest structure and functions. Although there is evidence on the mode of nitrogen (N) and drought (D) effects on tree growth, our understanding of the interplay of these factors is still limited. Because mixtures are expected to be less sensitive to global change than monocultures, we aimed to investigate the combined effects of N addition and D on the productivity of three tree species (Fagus sylvatica, Quercus petraea, Pseudotsuga menziesii) in functionally diverse species mixtures, using data from a 4-year field experiment in Northwest Germany. Here we show that species mixing can mitigate the negative effects of combined N fertilization and D events, but the community response is mainly driven by the combination of certain traits rather than the tree species richness of a community. For beech, we found that negative effects of D on growth rates were amplified by N fertilization (i.e., combined treatment effects were non-additive), while for oak and fir, the simultaneous effects of N and D were additive. Beech and oak were identified as most sensitive to combined N+D effects, with a strong size-dependency observed for beech, suggesting that the negative impact of N+D becomes stronger with time as beech grows larger. As a consequence, the net biodiversity effect declined at the community level, which can be mainly assigned to a distinct loss of complementarity in beech-oak mixtures. This pattern, however, was not evident in the other species mixtures, indicating that neighborhood composition (i.e., trait combination), but not tree species richness, mediated the relationship between tree diversity and treatment effects on tree growth. Our findings point to the importance of the qualitative role ('trait portfolio') that biodiversity plays in determining the resistance of diverse tree communities to environmental changes. As such, they provide further
Recent developments in structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Adelman, Howard M.
1988-01-01
Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
Additional EIPC Study Analysis. Final Report
Hadley, Stanton W; Gotham, Douglas J.; Luciani, Ralph L.
2014-12-01
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 14 topics was developed for further analysis. This paper brings together the earlier interim reports of the first 13 topics plus one additional topic into a single final report.
Sensitivity Analysis Using Risk Measures.
Tsanakas, Andreas; Millossovich, Pietro
2016-01-01
In a quantitative model with uncertain inputs, the uncertainty of the output can be summarized by a risk measure. We propose a sensitivity analysis method based on derivatives of the output risk measure, in the direction of model inputs. This produces a global sensitivity measure, explicitly linking sensitivity and uncertainty analyses. We focus on the case of distortion risk measures, defined as weighted averages of output percentiles, and prove a representation of the sensitivity measure that can be evaluated on a Monte Carlo sample, as a weighted average of gradients over the input space. When the analytical model is unknown or hard to work with, nonparametric techniques are used for gradient estimation. This process is demonstrated through the example of a nonlinear insurance loss model. Furthermore, the proposed framework is extended in order to measure sensitivity to constant model parameters, uncertain statistical parameters, and random factors driving dependence between model inputs.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
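The contrast the abstract draws with traditional linear-regression-based sensitivity analysis can be illustrated with a minimal sampling-based sketch (generic NumPy code, not the authors' software): standardized regression coefficients from a linear fit assign almost no importance to an input whose influence is purely nonlinear, which is exactly the gap the smoothing methods address.

```python
import numpy as np

def standardized_regression_coefficients(X, y):
    """Sampling-based sensitivity via standardized regression
    coefficients (SRCs), the traditional linear approach: fit a
    linear model to standardized inputs and output, and read each
    coefficient as an importance measure."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    design = np.column_stack([np.ones(len(ys)), Xs])
    beta, *_ = np.linalg.lstsq(design, ys, rcond=None)
    return beta[1:]                      # drop the intercept

# Toy model: x0 enters linearly, x1 only through its square
rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))
y = X[:, 0] + X[:, 1] ** 2
src = standardized_regression_coefficients(X, y)
```

Here `src[0]` is clearly nonzero while `src[1]` is near zero, even though `x1` strongly drives the output; a nonparametric smoother fit to the same sample would recover that influence.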
Lombardi, D.P.
1992-08-01
The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most likely and worst case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Additional Investigations of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2006-01-01
A second parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this work was to further investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD and appendix C icing conditions. A previous study concluded that it was feasible to use changes in ice shape features (e.g., ice horn angle, ice horn thickness, and ice shape mass) to detect relatively small variations in icing spray condition parameters (LWC, MVD, and temperature). The subject of this current investigation extends the scope of this previous work, by also examining the effect of icing tunnel spray-bar parameter variations (water pressure, air pressure) on ice shape feature changes. The approach was to vary spray-bar water pressure and air pressure, and then evaluate the effects of these parameter changes on the resulting ice shapes. This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results.
Sensitivity analysis for interactions under unmeasured confounding.
Vanderweele, Tyler J; Mukherjee, Bhramar; Chen, Jinbo
2012-09-28
We develop a sensitivity analysis technique to assess the sensitivity of interaction analyses to unmeasured confounding. We give bias formulas for sensitivity analysis for interaction under unmeasured confounding on both additive and multiplicative scales. We provide simplified formulas in the case in which either one of the two factors does not interact with the unmeasured confounder in its effects on the outcome. An interesting consequence of the results is that if the two exposures of interest are independent (e.g., gene-environment independence), even under unmeasured confounding, if the estimate of the interaction is nonzero, then either there is a true interaction between the two factors or there is an interaction between one of the factors and the unmeasured confounder; an interaction must be present in either scenario. We apply the results to two examples drawn from the literature.
Comparative Sensitivity Analysis of Muscle Activation Dynamics.
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
[Sensitivity analysis in health investment projects].
Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C
1994-01-01
This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.
Sensitivity and Uncertainty Analysis Shell
1999-04-20
SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using the Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
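The core idea, estimating how often a planted fault at a location would change the program's output under random testing, can be sketched as follows. The fault-injection interface and the example programs are illustrative assumptions, not the paper's tooling.

```python
import random

def location_sensitivity(program, faulty_program, trials=20000, seed=1):
    """Estimate the 'sensitivity' of a code location: the fraction of
    random inputs for which a planted fault at that location propagates
    to a visibly different program output. Low values flag locations
    where random black box testing is unlikely to expose faults."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = rng.uniform(-100.0, 100.0)
        if program(x) != faulty_program(x):
            changed += 1
    return changed / trials

# Hypothetical location under test: a clamp with a planted wrong threshold
clamp = lambda x: max(x, 0.0)
faulty = lambda x: max(x, 1.0)
s = location_sensitivity(clamp, faulty)
```

For this pair, only inputs below 1.0 reveal the fault, so the estimate lands near 0.5 over the sampled range; a fault that almost never propagated would score near zero and mark the location as hard to test randomly.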
Stiff DAE integrator with sensitivity analysis capabilities
2007-11-26
IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
Cost-Sensitive Boosting: Fitting an Additive Asymmetric Logistic Regression Model
NASA Astrophysics Data System (ADS)
Li, Qiu-Jie; Mao, Yao-Bin; Wang, Zhi-Quan; Xiang, Wen-Bo
Conventional machine learning algorithms like boosting tend to treat all misclassification errors equally, which is not adequate for certain cost-sensitive classification problems such as object detection. Although many cost-sensitive extensions of boosting that directly modify the weighting strategy of the corresponding original algorithms have been proposed and reported, they are heuristic in nature and only proved effective by empirical results, lacking sound theoretical analysis. This paper develops a framework from a statistical insight that can embody almost all existing cost-sensitive boosting algorithms: fitting an additive asymmetric logistic regression model by stage-wise optimization of certain criteria. Four cost-sensitive versions of boosting algorithms are derived, namely CSDA, CSRA, CSGA and CSLB, which respectively correspond to Discrete AdaBoost, Real AdaBoost, Gentle AdaBoost and LogitBoost. Experimental results on the application of face detection have shown the effectiveness of the proposed learning framework in the reduction of the cumulative misclassification cost.
Data fusion qualitative sensitivity analysis
Clayton, E.A.; Lewis, R.E.
1995-09-01
Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables.
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
A review of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental sensitivity technique utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
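The one-at-a-time approach mentioned above can be sketched in a few lines: perturb each parameter by a small relative fraction while holding the others at baseline, and record the normalized response change. The toy model below is a made-up example, not one of the models reviewed.

```python
def oat_sensitivity(model, base, delta=0.01):
    """One-at-a-time (OAT) sensitivity: perturb each parameter by a
    relative fraction `delta`, holding the others at baseline, and
    report the normalized index S_i = (dY / Y) / (dx_i / x_i)."""
    y0 = model(base)
    sens = {}
    for name, x0 in base.items():
        perturbed = dict(base)
        perturbed[name] = x0 * (1 + delta)
        y1 = model(perturbed)
        sens[name] = ((y1 - y0) / y0) / delta
    return sens

def toy_model(p):
    # made-up response: linear in `rate`, quadratic in `time`
    return p["rate"] * p["time"] ** 2

s = oat_sensitivity(toy_model, {"rate": 2.0, "time": 3.0})
# the quadratic parameter shows roughly twice the normalized sensitivity
```

For this model the linear parameter yields an index near 1 and the quadratic one near 2, matching the power-law exponents, which is why normalized OAT indexes are easy to interpret.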
Shape design sensitivity analysis using domain information
NASA Technical Reports Server (NTRS)
Seong, Hwal-Gyeong; Choi, Kyung K.
1985-01-01
A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of structural mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
Sensitivity analysis of a wing aeroelastic response
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.
1991-01-01
A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
Structural sensitivity analysis: Methods, applications and needs
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.
1984-01-01
Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.
Structural sensitivity analysis: Methods, applications, and needs
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.
1984-01-01
Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.
NASA Technical Reports Server (NTRS)
Kirshen, N.; Mill, T.
1973-01-01
The effect of formulation components and the addition of fire retardants on the impact sensitivity of Viton B fluoroelastomer in liquid oxygen was studied with the objective of developing a procedure for reliably reducing this sensitivity. Component evaluation, carried out on more than 40 combinations of components and cure cycles, showed that almost all the standard formulation agents, including carbon, MgO, Diak-3, and PbO2, will sensitize the Viton stock either singly or in combinations, some combinations being much more sensitive than others. Cure and postcure treatments usually reduced the sensitivity of a given formulation, often dramatically, but no formulated Viton was as insensitive as the pure Viton B stock. Coating formulated Viton with a thin layer of pure Viton gave some indication of reduced sensitivity, but additional tests are needed. It is concluded that sensitivity in formulated Viton arises from a variety of sources, some physical and some chemical in origin. Elemental analyses for all the formulated Vitons are reported as are the results of a literature search on the subject of LOX impact sensitivity.
Sensitivity Analysis for Some Water Pollution Problems
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such a sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: (1) identification of unknown parameters, and (2) identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2013-01-01
This paper presents extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of the time and space steps compared with the physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence studies at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.
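The core of the forward sensitivity method is to integrate sensitivity equations alongside the model equations. A minimal sketch for a scalar decay model dy/dt = -p*y is shown below (explicit Euler for brevity; the paper's treatment of time step sensitivity and truncation errors is not reproduced here).

```python
def forward_sensitivity(p, t_end, dt=1e-4):
    """Integrate dy/dt = -p*y (with y(0) = 1) together with its forward
    sensitivity s = dy/dp, which obeys ds/dt = (df/dy)*s + df/dp
    = -p*s - y.  Explicit Euler for brevity; the exact answers are
    y = exp(-p*t) and s = -t*exp(-p*t)."""
    y, s = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        dy = -p * y
        ds = -p * s - y          # sensitivity equation, stepped in lockstep
        y += dt * dy
        s += dt * ds
    return y, s

y_end, s_end = forward_sensitivity(p=0.5, t_end=2.0)
# exact values: y = exp(-1) ~ 0.3679, s = -2*exp(-1) ~ -0.7358
```

Stepping the sensitivity in lockstep with the state is exactly what lets the method accumulate local truncation errors into a global numerical-error estimate.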
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
Sensitivity analysis for large-scale problems
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Whitworth, Sandra L.
1987-01-01
The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
Coal Transportation Rate Sensitivity Analysis
2005-01-01
On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.
A global analysis of soil acidification caused by nitrogen addition
NASA Astrophysics Data System (ADS)
Tian, Dashuan; Niu, Shuli
2015-02-01
Nitrogen (N) deposition-induced soil acidification has become a global problem. However, the response patterns of soil acidification to N addition and the underlying mechanisms remain far from clear. Here, we conducted a meta-analysis of 106 studies to reveal global patterns of soil acidification in response to N addition. We found that N addition significantly reduced soil pH by 0.26 on average globally. However, the responses of soil pH varied with ecosystem type, N addition rate, N fertilization form, and experimental duration. Soil pH decreased most in grasslands, whereas no significant decrease was observed in boreal forests. Soil pH decreased linearly with N addition rate. Addition of urea and NH4NO3 contributed more to soil acidification than NH4-form fertilizers. When the experimental duration was longer than 20 years, the effects of N addition on soil acidification diminished. Environmental factors such as initial soil pH, soil carbon and nitrogen content, precipitation, and temperature all influenced the responses of soil pH. Base cations (Ca2+, Mg2+ and K+) were critically important in buffering against N-induced soil acidification at the early stage. However, N addition has shifted global soils into the Al3+ buffering phase. Overall, this study indicates that acidification in global soils is very sensitive to N deposition, and that this sensitivity is greatly modified by biotic and abiotic factors. Global soils are now at a buffering transition from base cations (Ca2+, Mg2+ and K+) to non-base cations (Mn2+ and Al3+). This calls attention to the limitation of base cations and the toxic impact of non-base cations in terrestrial ecosystems under N deposition.
Acid Rain Analysis by Standard Addition Titration.
ERIC Educational Resources Information Center
Ophardt, Charles E.
1985-01-01
The standard addition titration is a precise and rapid method for the determination of the acidity in rain or snow samples. The method requires use of a standard buret, a pH meter, and Gran's plot to determine the equivalence point. Experimental procedures used and typical results obtained are presented. (JN)
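As a sketch of the Gran-plot step described above: before the equivalence point of a strong-acid/strong-base titration, the Gran function G = (V0 + Vb) * 10^(-pH) falls linearly with titrant volume, so a least-squares line through the measured points extrapolates to the equivalence volume at G = 0. The data below are synthetic, not the article's results.

```python
import math

def gran_equivalence_volume(v0, vb, ph):
    """Gran's plot for a strong-acid titration: G = (V0 + Vb) * 10**(-pH)
    is linear in titrant volume Vb before the equivalence point, and a
    least-squares line through (Vb, G) gives Ve = -intercept / slope."""
    g = [(v0 + v) * 10 ** (-p) for v, p in zip(vb, ph)]
    n = len(vb)
    mx = sum(vb) / n
    my = sum(g) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(vb, g))
             / sum((x - mx) ** 2 for x in vb))
    intercept = my - slope * mx
    return -intercept / slope

# synthetic data: 50 mL of 0.010 M strong acid titrated with 0.100 M base,
# so the true equivalence volume is Ve = 5.0 mL
v0, ca, cb = 50.0, 0.010, 0.100
vb = [0.0, 1.0, 2.0, 3.0, 4.0]
ph = [-math.log10((ca * v0 - cb * v) / (v0 + v)) for v in vb]
ve = gran_equivalence_volume(v0, vb, ph)
```

The attraction of the Gran approach is that the extrapolation uses only points well before equivalence, where the pH electrode response is most reliable.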
Electrophoretic analysis of Allium alien addition lines.
Peffley, E B; Corgan, J N; Horak, K E; Tanksley, S D
1985-12-01
Meiotic pairing in an interspecific triploid of Allium cepa and A. fistulosum, 'Delta Giant', exhibits preferential pairing between the two A. cepa genomes, leaving the A. fistulosum genome as univalents. Multivalent pairing involving A. fistulosum chromosomes occurs at a low level, allowing for recombination between the genomes. Ten trisomies were recovered from the backcross of 'Delta Giant' x A. cepa cv., 'Temprana', representing a minimum of four of the eight possible alien addition lines. The alien addition lines possessed different A. fistulosum enzyme markers. Those markers, Adh-1, Idh-1 and Pgm-1 reside on different A. fistulosum chromosomes, whereas Pgi-1 and Idh-1 may be linked. Diploid, trisomic and hyperploid progeny were recovered that exhibited putative pink root resistance. The use of interspecific plants as a means to introgress A. fistulosum genes into A. cepa appears to be successful at both the trisomic and the diploid levels. If introgression can be accomplished using an interspecific triploid such as 'Delta Giant' to generate fertile alien addition lines and subsequent fertile diploids, or if introgression can be accomplished directly at the diploid level, this will have accomplished gene flow that has not been possible at the interspecific diploid level.
Seo, Sujin; Zhou, Xiangfei; Liu, Gang Logan
2016-07-01
Plasmonic substrates have fixed sensitivity once the geometry of the structure is defined. In order to improve the sensitivity, significant research effort has been focused on designing new plasmonic structures, which involves high fabrication costs; however, a method is reported for improving sensitivity not by redesigning the structure but by simply assembling plasmonic nanoparticles (NPs) near the evanescent field of the underlying 3D plasmonic nanostructure. Here, a nanoscale Lycurgus cup array (nanoLCA) is employed as a base colorimetric plasmonic substrate and an assembly template. Compared to the nanoLCA, the NP assembled nanoLCA (NP-nanoLCA) exhibits much higher sensitivity for both bulk refractive index sensing and biotin-streptavidin binding detection. The limit of detection of the NP-nanoLCA is at least ten times smaller when detecting biotin-streptavidin conjugation. The numerical calculations confirm the importance of the additive plasmon coupling between the NPs and the nanoLCA for a denser and stronger electric field in the same 3D volumetric space. Tunable sensitivity is accomplished by controlling the number of NPs in each nanocup, or the number density of the hot spots. This simple yet scalable and cost-effective method of using additive heterogeneous plasmon coupling effects will benefit various chemical, medical, and environmental plasmon-based sensors.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Adjoint sensitivity analysis of an ultrawideband antenna
Stephanson, M B; White, D A
2011-07-28
The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
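The adjoint method's appeal is that it computes sensitivities of a response J with one extra linear solve, independent of the number of design parameters: for A(p)x = b and J = c.x, solve the adjoint system A^T lam = c and then dJ/dp = -lam.(dA/dp)x. A tiny 2x2 illustration of that recipe (unrelated to the paper's finite element antenna model) is sketched below.

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 system A x = b."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def adjoint_dJdp(p):
    """Sensitivity of J = c.x, where A(p) x = b and only the entry
    A[0][0] = p depends on the parameter: solve the forward system,
    then the adjoint system A^T lam = c, and return
    dJ/dp = -lam.(dA/dp) x."""
    A = [[p, 1.0], [1.0, 2.0]]
    b = [1.0, 0.0]
    c = [1.0, 1.0]
    x = solve2(A, b)                                   # forward solve
    At = [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]
    lam = solve2(At, c)                                # adjoint solve
    dA_dp_x = [x[0], 0.0]                              # (dA/dp) x
    return -(lam[0] * dA_dp_x[0] + lam[1] * dA_dp_x[1])

dJdp = adjoint_dJdp(1.0)
# analytic check: J = 1/(2p - 1), so dJ/dp = -2/(2p - 1)**2 = -2 at p = 1
```

With many parameters, only the cheap (dA/dp)x products change per parameter; the two solves are shared, which is what makes the method attractive for optimization.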
Sensitivity Analysis in the Model Web
NASA Astrophysics Data System (ADS)
Jones, R.; Cornford, D.; Boukouvalas, A.
2012-04-01
The Model Web, and in particular the uncertainty-enabled Model Web being developed in the UncertWeb project, aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular, model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web, users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed, to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular, the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance-based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too quickly for emulation to be necessary, the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that, within the framework, these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator.
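Variance-based sensitivity analysis of the kind such a tool runs (on the emulator or on the model itself) can be illustrated with a pick-freeze Monte Carlo estimator of the first-order Sobol index S_i = Cov(Y, Y')/Var(Y), where Y' reuses input i and resamples the rest. This is a generic sketch under uniform inputs, not the UncertWeb implementation; the model is made up.

```python
import random

def first_order_sobol(model, n_inputs, i, n=20000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol index
    S_i = Cov(Y, Y') / Var(Y), where Y' keeps input i and resamples
    all other inputs (inputs drawn i.i.d. from U(0, 1))."""
    rng = random.Random(seed)
    ya, yb = [], []
    for _ in range(n):
        x = [rng.random() for _ in range(n_inputs)]
        z = [rng.random() for _ in range(n_inputs)]
        z[i] = x[i]                       # freeze coordinate i
        ya.append(model(x))
        yb.append(model(z))
    ma = sum(ya) / n
    mb = sum(yb) / n
    cov = sum((a - ma) * (b - mb) for a, b in zip(ya, yb)) / n
    var = sum((a - ma) ** 2 for a in ya) / n
    return cov / var

def toy_model(x):
    # made-up additive model with analytic S_0 = 9 / (9 + 1) = 0.9
    return 3.0 * x[0] + 1.0 * x[1]

s0 = first_order_sobol(toy_model, n_inputs=2, i=0)
```

Each index costs thousands of model runs, which is exactly why a fast emulator is substituted for the real model whenever one can be trained.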
Sensitivity analysis and application in exploration geophysics
NASA Astrophysics Data System (ADS)
Tang, R.
2013-12-01
In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that is unavoidably contaminated by various noises and is sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. The interpretation of the result is therefore intractable. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also named the Jacobian matrix or the sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In a practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by the application of a perturbation approach and the reciprocity theory. We obtained a visualized sensitivity plot by calculating the sensitivity matrix; the solution is thereby under scrutiny, in that the less-resolved parts are indicated and should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is hereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to find. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of survey sensitivity with respect to the
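The perturbation approach to building the sensitivity (Jacobian) matrix amounts to re-running the forward model with one parameter nudged at a time. A minimal finite-difference sketch, with a made-up two-parameter forward model standing in for the MT response:

```python
def sensitivity_matrix(forward, m, eps=1e-6):
    """Finite-difference Jacobian J[i][j] = d(data_i) / d(model_j),
    built by perturbing one model parameter at a time and re-running
    the forward modeling."""
    d0 = forward(m)
    cols = []
    for j in range(len(m)):
        mp = list(m)
        mp[j] += eps
        dj = forward(mp)
        cols.append([(a - b) / eps for a, b in zip(dj, d0)])
    # transpose so rows index data points, columns index parameters
    return [list(row) for row in zip(*cols)]

def forward(m):
    # made-up two-parameter forward model: data = (m0 + m1, m0 * m1)
    return [m[0] + m[1], m[0] * m[1]]

J = sensitivity_matrix(forward, [2.0, 3.0])
# analytic Jacobian for this toy model: [[1, 1], [3, 2]]
```

Columns with uniformly small magnitudes mark parameters the data barely constrain, which is what the visualized sensitivity plot exposes.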
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background: A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension introduced by the time-dependent input. Classical dynamic sensitivity analysis does not cover this case for the dynamic log gains. Results: We present an algorithm with adaptive step size control that can be used to compute the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and the dynamic sensitivities with moderate accuracy, even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it was implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of the algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent inputs.
Sensitivity analysis of retrovirus HTLV-1 transactivation.
Corradin, Alberto; Di Camillo, Barbara; Ciminale, Vincenzo; Toffolo, Gianna; Cobelli, Claudio
2011-02-01
Human T-cell leukemia virus type 1 (HTLV-1) is a human retrovirus endemic in many areas of the world. Although many studies have indicated a key role of the viral protein Tax in the control of viral transcription, the mechanisms controlling HTLV-1 expression and its persistence in vivo are still poorly understood. To assess Tax effects on viral kinetics, we developed an HTLV-1 model. Two parameters that capture both its deterministic and stochastic behavior were quantified: the Tax signal-to-noise ratio (SNR), which measures the effect of stochastic phenomena on Tax expression as the ratio between the protein steady-state level and the variance of the noise causing fluctuations around this value; and t(1/2), a parameter representative of the duration of transient Tax expression pulses, that is, of Tax bursts due to stochastic phenomena. Sensitivity analysis indicates that the major determinant of the Tax SNR is the transactivation constant, the system parameter weighting the enhancement of retrovirus transcription due to transactivation. In contrast, t(1/2) is strongly influenced by the degradation rate of the mRNA. In addition to shedding light on the mechanism of Tax transactivation, the obtained results are of potential interest for novel drug development strategies, since the two parameters most affecting Tax transactivation can be experimentally tuned, e.g. by perturbing protein phosphorylation and by RNA interference.
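An SNR of the kind defined above (steady-state level divided by the variance of the fluctuations around it) can be estimated from any simulated trajectory. The sketch below uses a generic linear Langevin birth-death model, not the authors' HTLV-1 model; the parameter names k_prod, k_deg and sigma are illustrative. For dX = (k_prod - k_deg*X)dt + sigma*dW, the stationary mean is k_prod/k_deg and the stationary variance is approximately sigma^2/(2*k_deg).

```python
import random

def langevin_snr(k_prod, k_deg, sigma, steps=200_000, dt=0.01, seed=1):
    """Euler-Maruyama simulation of dX = (k_prod - k_deg*X)dt + sigma*dW,
    returning SNR = steady-state mean / variance of the fluctuations."""
    rng = random.Random(seed)
    x = k_prod / k_deg            # start at the deterministic steady state
    xs = []
    for _ in range(steps):
        x += (k_prod - k_deg * x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
        xs.append(x)
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return mean / var

snr = langevin_snr(k_prod=10.0, k_deg=1.0, sigma=1.0)
# stationary theory for these parameters: mean 10, variance ~0.5, SNR ~20
```

Re-running such a simulation across perturbed parameter values is one simple way to reproduce the kind of sensitivity ranking the abstract reports.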
Sensitive chiral analysis by capillary electrophoresis.
García-Ruiz, Carmen; Marina, María Luisa
2006-01-01
In this review, an updated view of the different strategies used up to now to enhance the sensitivity of detection in chiral analysis by CE will be provided to the readers. With this aim, it will include a brief description of the fundamentals and most of the recent applications performed in sensitive chiral analysis by CE using offline and online sample treatment techniques (SPE, liquid-liquid extraction, microdialysis, etc.), on-column preconcentration techniques based on electrophoretic principles (ITP, stacking, and sweeping), and alternative detection systems (spectroscopic, spectrometric, and electrochemical) to the widely used UV-Vis absorption detection.
The Effect of Gaseous Additives on Dynamic Pressure Output and Ignition Sensitivity of Nanothermites
NASA Astrophysics Data System (ADS)
Puszynski, Jan; Doorenbos, Zac; Walters, Ian; Redner, Paul; Kapoor, Deepak; Swiatkiewicz, Jacek
2011-06-01
This contribution addresses important combustion characteristics of nanothermite systems. In this research the following nanothermites were investigated: a) Al-Bi2O3, b) Al-Fe2O3, and c) Al-Bi2O3-Fe2O3. The effect of various gasifying additives (such as nitrocellulose (NC) and cellulose acetate butyrate (CAB)), as well as reactant stoichiometry and reactant particle size and shape, on processability, ignition delay time, and dynamic pressure outputs at different locations in a combustion chamber will be presented. In addition, this contribution will report electrostatic and friction sensitivities of standard and modified nanothermites.
A numerical comparison of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequences of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare the parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
Pediatric Pain, Predictive Inference, and Sensitivity Analysis.
ERIC Educational Resources Information Center
Weiss, Robert
1994-01-01
Coping style and effects of counseling intervention on pain tolerance was studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…
Geothermal well cost sensitivity analysis: current status
Carson, C.C.; Lin, Y.T.
1980-01-01
The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent but that in specific wells the advances considered can result in significant cost reductions.
NIR sensitivity analysis with the VANE
NASA Astrophysics Data System (ADS)
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with environment and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Wideband sensitivity analysis of plasmonic structures
NASA Astrophysics Data System (ADS)
Ahmed, Osman S.; Bakr, Mohamed H.; Li, Xun; Nomura, Tsuyoshi
2013-03-01
We propose an adjoint variable method (AVM) for efficient wideband sensitivity analysis of dispersive plasmonic structures. Transmission Line Modeling (TLM) is exploited for calculation of the structure sensitivities. The theory is developed for general dispersive materials modeled by the Drude or Lorentz model. Utilizing the dispersive AVM, sensitivities are calculated with respect to all the designable parameters, regardless of their number, using at most one extra simulation. This is significantly more efficient than the regular finite difference approaches, whose computational overhead scales linearly with the number of design parameters. A Z-domain formulation is utilized to allow for the extension of the theory to a general material model. The theory has been successfully applied to a structure with a teeth-shaped plasmonic resonator. The design variables are the shape parameters (widths and thicknesses) of these teeth. The results are compared to the accurate yet expensive finite difference approach and good agreement is achieved.
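The efficiency claim above (one extra simulation regardless of parameter count) is the hallmark of adjoint methods generally. A minimal sketch on a toy 2x2 linear system (not the paper's TLM formulation) shows why: for A(p) x = b and objective J = c·x, a single adjoint solve Aᵀλ = c yields dJ/dp_k = -λ·(dA/dp_k)x for every parameter k, whereas finite differences need one extra forward solve per parameter.

```python
# Toy adjoint sensitivity demo; the 2x2 system and parameter values are invented.
def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

p1, p2 = 3.0, 2.0
A = [[p1, 1.0], [1.0, p2]]         # symmetric here, so A^T = A
b, c = [1.0, 0.0], [1.0, 1.0]

x = solve2(A, b)                   # one forward solve
lam = solve2(A, c)                 # one adjoint solve, shared by all parameters

# dA/dp1 has a 1 only at entry (0,0); dA/dp2 only at (1,1)
dJ_dp1 = -lam[0] * x[0]
dJ_dp2 = -lam[1] * x[1]
print(dJ_dp1, dJ_dp2)
```

Adding a third, tenth, or hundredth parameter changes only the cheap dot products at the end, not the number of solves, which is the scaling advantage the abstract describes.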
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing-sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcers, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS
WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.
2012-01-01
Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
Additives and salts for dye-sensitized solar cells electrolytes: what is the best choice?
NASA Astrophysics Data System (ADS)
Bella, Federico; Sacco, Adriano; Pugliese, Diego; Laurenti, Marco; Bianco, Stefano
2014-10-01
A multivariate chemometric approach is proposed for the first time for performance optimization of I-/I3- liquid electrolytes for dye-sensitized solar cells (DSSCs). Over the years the system composed by iodide/triiodide redox shuttle dissolved in organic solvent has been enriched with the addition of different specific cations and chemical compounds to improve the photoelectrochemical behavior of the cell. However, usually such additives act favorably with respect to some of the cell parameters and negatively to others. Moreover, the combined action of different compounds often yields contradictory results, and from the literature it is not possible to identify an optimal recipe. We report here a systematic work, based on a multivariate experimental design, to statistically and quantitatively evaluate the effect of different additives on the photovoltaic performances of the device. The effect of cation size in iodine salts, the iodine/iodide ratio in the electrolyte and the effect of type and concentration of additives are mutually evaluated by means of a Design of Experiment (DoE) approach. Through this statistical method, the optimization of the overall parameters is demonstrated with a limited number of experimental trials. A 25% improvement on the photovoltaic conversion efficiency compared with that obtained with a commercial electrolyte is demonstrated.
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
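The "sensitivity curve" construction described above can be illustrated with a hypothetical two-factor head model (invented here for illustration; not the calibrated Albany model): apply a multiplier to each hydrological factor, recompute heads, and record the sum-of-squares of residuals against the calibrated solution. The steeper curve identifies the more influential factor.

```python
# Toy steady-state head model at three observation wells (illustrative only).
def heads(transmissivity, recharge):
    return [recharge / transmissivity * w for w in (10.0, 20.0, 30.0)]

calibrated = {"transmissivity": 500.0, "recharge": 2.0}
observed = heads(**calibrated)          # stand-in for field observations

def ssr(factor, multiplier):
    p = dict(calibrated)
    p[factor] *= multiplier
    return sum((h - o) ** 2 for h, o in zip(heads(**p), observed))

# One sensitivity curve per factor: SSR against the multiplier
multipliers = (0.5, 0.75, 1.0, 1.25, 1.5)
curves = {f: [ssr(f, m) for m in multipliers] for f in calibrated}
print(curves)
```

Each list is one sensitivity curve: zero at the calibrated value (multiplier 1.0) and rising on either side, with the curve shape indicating how strongly the simulated flow system responds to that factor.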
Efficient sensitivity analysis method for chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Liao, Haitao
2016-05-01
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Additive interaction in survival analysis: use of the additive hazards model.
Rod, Naja Hulvej; Lange, Theis; Andersen, Ingelise; Marott, Jacob Louis; Diderichsen, Finn
2012-09-01
It is a widely held belief in public health and clinical decision-making that interventions or preventive strategies should be aimed at patients or population subgroups where most cases could potentially be prevented. To identify such subgroups, deviation from additivity of absolute effects is the relevant measure of interest. Multiplicative survival models, such as the Cox proportional hazards model, are often used to estimate the association between exposure and risk of disease in prospective studies. In Cox models, deviations from additivity have usually been assessed by surrogate measures of additive interaction derived from multiplicative models-an approach that is both counter-intuitive and sometimes invalid. This paper presents a straightforward and intuitive way of assessing deviation from additivity of effects in survival analysis by use of the additive hazards model. The model directly estimates the absolute size of the deviation from additivity and provides confidence intervals. In addition, the model can accommodate both continuous and categorical exposures and models both exposures and potential confounders on the same underlying scale. To illustrate the approach, we present an empirical example of interaction between education and smoking on risk of lung cancer. We argue that deviations from additivity of effects are important for public health interventions and clinical decision-making, and such estimations should be encouraged in prospective studies on health. A detailed implementation guide of the additive hazards model is provided in the appendix.
Bayesian sensitivity analysis of a nonlinear finite element model
NASA Astrophysics Data System (ADS)
Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.
2012-10-01
A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
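The emulator workflow described above can be sketched in a few lines. The code below is a minimal GP regression with a fixed RBF lengthscale and an invented stand-in for the expensive model (it is not the paper's airship FE model, and a practical GP would also fit hyperparameters and track predictive variance):

```python
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x):            # pretend each call is a costly FE run
    return np.sin(3 * x[..., 0]) + 0.5 * x[..., 1]

X = rng.uniform(0, 1, size=(30, 2))          # small training design
y = expensive_model(X)

def k(A, B, ell=0.3):                        # RBF covariance function
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

K = k(X, X) + 1e-6 * np.eye(len(X))          # jitter for conditioning
alpha = np.linalg.solve(K, y)

def emulate(Xs):                             # cheap GP posterior mean
    return k(Xs, X) @ alpha

# Crude one-at-a-time sensitivity screen using the emulator instead of
# the expensive model: sweep one input, hold the other at its midpoint.
grid = np.linspace(0, 1, 200)
for i in range(2):
    Xs = np.full((200, 2), 0.5)
    Xs[:, i] = grid
    print(f"input {i}: sweep variance {emulate(Xs).var():.4f}")
```

The 200-point sweeps cost only matrix-vector products against the 30 training points, which is the computational saving the abstract describes; analytical sensitivity measures for GPs refine this crude screen.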
The Theoretical Foundation of Sensitivity Analysis for GPS
NASA Astrophysics Data System (ADS)
Shikoska, U.; Davchev, D.; Shikoski, J.
2008-10-01
In this paper the equations of sensitivity analysis are derived and their theoretical underpinnings established. The paper propounds land-vehicle navigation concepts and a definition of sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate their use. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations are re-derived for a linearized Kalman filter.
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled us to assign confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
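A success-probability screen in the spirit described above can be sketched as follows. The toy touchdown model and the variable names are invented for illustration (this is not the CFT's actual model or algorithm): bin each dispersed input, compute the requirement-satisfaction rate per bin, and flag inputs whose success rate varies strongly across bins.

```python
import random

rng = random.Random(42)
N = 20000
samples = []
for _ in range(N):
    x = {"wind": rng.gauss(0, 5),          # dispersed inputs (illustrative)
         "mass_err": rng.gauss(0, 0.02),
         "thrust_err": rng.gauss(0, 0.01)}
    miss = abs(2.0 * x["wind"] + 50.0 * x["mass_err"]) + abs(10.0 * x["thrust_err"])
    samples.append((x, miss < 12.0))       # requirement: land within 12 m

def success_spread(name, bins=10):
    ordered = sorted(samples, key=lambda s: s[0][name])
    size = N // bins
    rates = [sum(ok for _, ok in ordered[i * size:(i + 1) * size]) / size
             for i in range(bins)]
    return max(rates) - min(rates)         # large spread => influential factor

ranking = sorted(["wind", "mass_err", "thrust_err"],
                 key=success_spread, reverse=True)
print(ranking)
```

Extending the binning to pairs of inputs gives a crude version of the pairwise-interaction screen the paper describes, at the cost of needing many more Monte Carlo runs per 2-D bin.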
Bayesian sensitivity analysis of bifurcating nonlinear models
NASA Astrophysics Data System (ADS)
Becker, W.; Worden, K.; Rowson, J.
2013-01-01
Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
NASA Astrophysics Data System (ADS)
Kaiser, C.; Solaiman, Z. M.; Kilburn, M. R.; Clode, P. L.; Fuchslueger, L.; Koranda, M.; Murphy, D. V.
2012-04-01
The release of carbon through plant roots to the soil has been recognized as a governing factor for soil microbial community composition and decomposition processes, constituting an important control for ecosystem biogeochemical cycles. Moreover, there is increasing awareness that the flux of recently assimilated carbon from plants to the soil may regulate ecosystem response to environmental change, as the rate of the plant-soil carbon transfer will likely be affected by increased plant C assimilation caused by increasing atmospheric CO2 levels. What has received less attention so far is how sensitive the plant-soil C transfer would be to possible regulations coming from belowground, such as soil N addition or microbial community changes resulting from anthropogenic inputs such as biochar amendments. In this study we investigated the size, rate and sensitivity of the transfer of recently assimilated plant C through the root-soil-mycorrhiza-microbial continuum. Wheat plants associated with arbuscular mycorrhizal fungi were grown in split-boxes which were filled either with soil or a soil-biochar mixture. Each split-box consisted of two compartments separated by a membrane which was penetrable for mycorrhizal hyphae but not for roots. Wheat plants were only grown in one compartment while the other compartment served as an extended soil volume which was only accessible by mycorrhizal hyphae associated with the plant roots. After plants were grown for four weeks we used a double-labeling approach with 13C and 15N in order to investigate interactions between C and N flows in the plant-soil-microorganism system. Plants were subjected to an enriched 13CO2 atmosphere for 8 hours during which 15NH4 was added to a subset of split-boxes to either the root-containing or the root-free compartment. Both, 13C and 15N fluxes through the plant-soil continuum were monitored over 24 hours by stable isotope methods (13C phospho-lipid fatty acids by GC-IRMS, 15N/13C in bulk plant
A Post-Monte-Carlo Sensitivity Analysis Code
2000-04-04
SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e. with lower variance. The code identifies a group of sensitive variables, ranks them in the order of importance and also quantifies the relative importance among the sensitive variables.
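A post-Monte-Carlo variance-contribution ranking of this kind can be approximated by binning each input and measuring how much the conditional mean of the output moves across bins (a crude estimate of Var(E[Y|Xᵢ])/Var(Y), the first-order sensitivity index). The model and names below are invented for illustration; this is not SATOOL's implementation:

```python
import random
import statistics

rng = random.Random(7)
N = 10000
data = []
for _ in range(N):
    a, b, c = (rng.uniform(-1, 1) for _ in range(3))
    y = 4 * a + b + 0.1 * c            # toy model behind the Monte Carlo runs
    data.append(((a, b, c), y))

var_y = statistics.pvariance([y for _, y in data])

def first_order(i, bins=20):
    # variance of the binned conditional means approximates Var(E[Y|X_i])
    ordered = sorted(data, key=lambda d: d[0][i])
    size = N // bins
    means = [statistics.fmean(y for _, y in ordered[j * size:(j + 1) * size])
             for j in range(bins)]
    return statistics.pvariance(means) / var_y

indices = {name: first_order(i) for i, name in enumerate("abc")}
print(indices)
```

The indices both rank the inputs and quantify their relative importance, mirroring the two outputs the abstract attributes to the code; the estimate works entirely from the existing Monte Carlo sample, with no extra model runs.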
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Additive toxicity of herbicide mixtures and comparative sensitivity of tropical benthic microalgae.
Magnusson, Marie; Heimann, Kirsten; Quayle, Pamela; Negri, Andrew P
2010-11-01
Natural waters often contain complex mixtures of unknown contaminants potentially posing a threat to marine communities through chemical interactions. Here, acute effects of the photosystem II-inhibiting herbicides diuron, tebuthiuron, atrazine, simazine, and hexazinone, herbicide breakdown products (desethyl-atrazine (DEA) and 3,4-dichloroaniline (3,4-DCA)) and binary mixtures, were investigated using three tropical benthic microalgae; Navicula sp. and Cylindrotheca closterium (Ochrophyta) and Nephroselmis pyriformis (Chlorophyta), and one standard test species, Phaeodactylum tricornutum (Ochrophyta), in a high-throughput Maxi-Imaging-PAM bioassay (Maxi-IPAM). The order of toxicity was; diuron > hexazinone > tebuthiuron > atrazine > simazine > DEA > 3,4-DCA for all species. The tropical green alga N. pyriformis was up to 10-fold more sensitive than the diatoms tested here and reported for coral symbionts, and is recommended as a standard tropical test species for future research. All binary mixtures exhibited additive toxicity, and the use of herbicide equivalents (HEq) is therefore recommended in order to incorporate total-maximum-load measures for environmental regulatory purposes.
Cedergreen, Nina; Nørhave, Nils Jakob; Svendsen, Claus; Spurgeon, David J
2016-01-01
A wealth of studies has investigated how chemical sensitivity is affected by temperature, however, almost always under different constant rather than more realistic fluctuating regimes. Here we compared how the nematode Caenorhabditis elegans responds to copper at constant temperatures (8-24°C) and under fluctuation conditions of low (±4°C) and high (±8°C) amplitude (averages of 12, 16, 20°C and 16°C respectively). The DEBkiss model was used to interpret effects on energy budgets. Increasing constant temperature from 12-24°C reduced time to first egg, life-span and population growth rates consistent with temperature driven metabolic rate change. Responses at 8°C did not, however, accord with this pattern (including a deviation from the Temperature Size Rule), identifying a cold stress effect. High amplitude variation and low amplitude variation around a mean temperature of 12°C impacted reproduction and body size compared to nematodes kept at the matching average constant temperatures. Copper exposure affected reproduction, body size and life-span and consequently population growth. Sensitivity to copper (EC50 values), was similar at intermediate temperatures (12, 16, 20°C) and higher at 24°C and especially the innately stressful 8°C condition. Temperature variation did not increase copper sensitivity. Indeed under variable conditions including time at the stressful 8°C condition, sensitivity was reduced. DEBkiss identified increased maintenance costs and increased assimilation as possible mechanisms for cold and higher copper concentration effects. Model analysis of combined variable temperature effects, however, demonstrated no additional joint stressor response. Hence, concerns that exposure to temperature fluctuations may sensitise species to co-stressor effects seem unfounded in this case.
Sensitivity analysis of transport modeling in a fractured gneiss aquifer
NASA Astrophysics Data System (ADS)
Abdelaziz, Ramadan; Merkel, Broder J.
2015-03-01
Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool to investigate fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells. At the site, a tracer test with NaCl was performed under natural gradient conditions. Observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics are employed for sensitivity analysis. Sensitivity analysis results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve. The results proved that the calibrated model fits the observed data well.
Stormwater quality models: performance and sensitivity analysis.
Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W
2010-01-01
The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
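The simplified Monte Carlo Markov Chain assessment described above can be sketched in miniature. The one-coefficient rainfall-washoff model, the synthetic observations, and the fixed noise level below are illustrative assumptions, not the models tested in the paper:

```python
import math
import random

random.seed(0)

# Hypothetical one-parameter washoff model: load = coeff * rainfall.
# Synthetic "observations" generated around a true coefficient of ~2.0.
rain = [1.0, 2.0, 3.0, 4.0, 5.0]
obs = [2.1, 3.9, 6.2, 8.1, 9.8]

def log_post(coeff, sigma=0.5):
    # Gaussian likelihood with fixed noise sigma; flat prior on coeff > 0.
    if coeff <= 0:
        return -math.inf
    sse = sum((o - coeff * r) ** 2 for r, o in zip(rain, obs))
    return -sse / (2 * sigma ** 2)

# Metropolis random-walk sampler over the single model parameter.
samples, coeff = [], 1.0
for _ in range(5000):
    prop = coeff + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_post(prop) - log_post(coeff):
        coeff = prop
    samples.append(coeff)

burned = samples[1000:]  # discard burn-in
posterior_mean = sum(burned) / len(burned)
posterior_sd = (sum((s - posterior_mean) ** 2 for s in burned) / len(burned)) ** 0.5
```

The posterior spread (here `posterior_sd`) is what carries the parameter-sensitivity and correlation information the authors report.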
Sensitivity to food additives, vaso-active amines and salicylates: a review of the evidence.
Skypala, Isabel J; Williams, M; Reeves, L; Meyer, R; Venter, C
2015-01-01
Although there is considerable literature pertaining to IgE and non IgE-mediated food allergy, there is a paucity of information on non-immune mediated reactions to foods, other than metabolic disorders such as lactose intolerance. Food additives and naturally occurring 'food chemicals' have long been reported as having the potential to provoke symptoms in those who are more sensitive to their effects. Diets low in 'food chemicals' gained prominence in the 1970s and 1980s, and their popularity remains, although the evidence of their efficacy is very limited. This review focuses on the available evidence for the role and likely adverse effects of both added and natural 'food chemicals' including benzoate, sulphite, monosodium glutamate, vaso-active or biogenic amines and salicylate. Studies assessing the efficacy of the restriction of these substances in the diet have mainly been undertaken in adults, but the paper will also touch on the use of such diets in children. The difficulty of reviewing the available evidence is that few of the studies have been controlled and, for many, considerable time has elapsed since their publication. Meanwhile dietary patterns and habits have changed hugely in the interim, so the conclusions may not be relevant for our current dietary norms. The conclusion of the review is that there may be some benefit in the removal of an additive or a group of foods high in natural food chemicals from the diet for a limited period for certain individuals, providing the diagnostic pathway is followed and the foods are reintroduced back into the diet to assess for the efficacy of removal. However diets involving the removal of multiple additives and food chemicals have the very great potential to lead to nutritional deficiency especially in the paediatric population. Any dietary intervention, whether for the purposes of diagnosis or management of food allergy or food intolerance, should be adapted to the individual's dietary habits and a suitably
Phase sensitivity analysis of circadian rhythm entrainment.
Gunawan, Rudiyanto; Doyle, Francis J
2007-04-01
As a biological clock, circadian rhythms evolve to accomplish a stable (robust) entrainment to environmental cycles, of which light is the most obvious. The mechanism of photic entrainment is not known, but two models of entrainment have been proposed based on whether light has a continuous (parametric) or discrete (nonparametric) effect on the circadian pacemaker. A novel sensitivity analysis is developed to study the circadian entrainment in silico based on a limit cycle approach and applied to a model of Drosophila circadian rhythm. The comparative analyses of complete and skeleton photoperiods suggest a trade-off between the contribution of period modulation (parametric effect) and phase shift (nonparametric effect) in Drosophila circadian entrainment. The results also give suggestions for an experimental study to (in)validate the two models of entrainment.
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g. Camacho et al., 2011, Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
Additional EIPC Study Analysis: Interim Report on High Priority Topics
Hadley, Stanton W
2013-11-01
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 comprised a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.
The Effect of Additives on the Behavior of Phase Sensitive In Situ Forming Implants.
Solorio, Luis; Sundarapandiyan, Divya; Olear, Alex; Exner, Agata A
2015-10-01
Phase-sensitive in situ forming implants (ISFI) are a promising platform for the controlled release of therapeutic agents. The simple manufacturing, ease of placement, and diverse payload capacity make these implants an appealing delivery system for a wide range of applications. Tailoring the release profile is paramount for effective treatment of disease. In this study, three innovative formulation modifications were used to control drug release. Specifically, water, 1,1'-dioctadecyl-3,3,3',3'-tetramethylindocarbocyanine perchlorate (DiI), and bovine serum albumin (BSA) were incorporated into an ISFI solution containing the small molecular weight mock drug, sodium fluorescein. The effects of these additives on drug release, swelling, phase inversion, erosion, and implant microstructure were evaluated. Diagnostic ultrasound was used to monitor changes in swelling and phase inversion over time noninvasively. Water, DiI, and the combination of BSA/DiI reduced burst release by 47.6%, 76.6%, and 59.0%, respectively. Incorporation of water into the casting solution also enhanced the release of drug during the diffusion period of release by 165.2% relative to the excipient-free control. Incorporation of BSA into the polymer solution did not significantly alter the burst release (p < 0.05); however, the onset of degradation-facilitated release was delayed relative to the excipient-free control by 5 days. This study demonstrates that the use of excipients provides a facile method to tailor the release profile and degradation rate of implants without changing the polymer or solvent used in the implant formulation, providing fine control of drug dissolution during distinct phases of release. PMID:26175342
Longitudinal Genetic Analysis of Anxiety Sensitivity
ERIC Educational Resources Information Center
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
Objective analysis of the ARM IOP data: method and sensitivity
Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H
1999-04-01
Motivated by the need to obtain accurate objective analysis of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
Sensitivity Analysis of Wing Aeroelastic Responses
NASA Technical Reports Server (NTRS)
Issac, Jason Cherian
1995-01-01
Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, flutter characteristic of a typical section in subsonic compressible flow is examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins could be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight
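The flutter-speed derivatives discussed above are, at bottom, sensitivities of a critical eigenvalue to structural parameters. A minimal sketch, using a hypothetical 2x2 stiffness matrix in place of the paper's state-space aeroelastic model, computes such a derivative by central finite differences and checks it against the closed form:

```python
import math

# Hypothetical 2-DOF system: K(p) = [[2+p, 1], [1, 3]]; the lowest eigenvalue
# stands in for a critical-speed parameter. Closed-form 2x2 eigenvalue:
def lowest_eig(p):
    a, b, c = 2.0 + p, 1.0, 3.0
    tr, det = a + c, a * c - b * b
    return (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0

# Central finite-difference sensitivity d(lambda_min)/dp at p = 0.5
p, h = 0.5, 1e-6
dlam_dp = (lowest_eig(p + h) - lowest_eig(p - h)) / (2.0 * h)

# Analytic check: lambda_min = ((5+p) - sqrt((p-1)**2 + 4)) / 2
dlam_exact = (1.0 - (p - 1.0) / math.sqrt((p - 1.0) ** 2 + 4.0)) / 2.0
```

In a gradient-based optimization with aeroelastic constraints, `dlam_dp` is the quantity fed to the optimizer; tools such as ADIFOR (mentioned in the abstract) produce it by automatic differentiation instead of differencing.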
Climate sensitivity: Analysis of feedback mechanisms
NASA Astrophysics Data System (ADS)
Hansen, J.; Lacis, A.; Rind, D.; Russell, G.; Stone, P.; Fung, I.; Ruedy, R.; Lerner, J.
, vegetation) to the total cooling at 18K. The temperature increase believed to have occurred in the past 130 years (approximately 0.5°C) is also found to imply a climate sensitivity of 2.5-5°C for doubled CO2 (f = 2-4), if (1) the temperature increase is due to the added greenhouse gases, (2) the 1850 CO2 abundance was 270±10 ppm, and (3) the heat perturbation is mixed like a passive tracer in the ocean with vertical mixing coefficient k ~ 1 cm2 s-1. These analyses indicate that f is substantially greater than unity on all time scales. Our best estimate for the current climate due to processes operating on the 10-100 year time scale is f = 2-4, corresponding to a climate sensitivity of 2.5-5°C for doubled CO2. The physical process contributing the greatest uncertainty to f on this time scale appears to be the cloud feedback. We show that the ocean's thermal relaxation time depends strongly on f. The e-folding time constant for response of the isolated ocean mixed layer is about 15 years, for the estimated value of f. This time is sufficiently long to allow substantial heat exchange between the mixed layer and deeper layers. For f = 3-4 the response time of the surface temperature to a heating perturbation is of order 100 years, if the perturbation is sufficiently small that it does not alter the rate of heat exchange with the deeper ocean. The climate sensitivity we have inferred is larger than that stated in the Carbon Dioxide Assessment Committee report (CDAC, 1983). Their result is based on the empirical temperature increase in the past 130 years, but their analysis did not account for the dependence of the ocean response time on climate sensitivity. Their choice of a fixed 15 year response time biased their result to low sensitivities. We infer that, because of recent increases in atmospheric CO2 and trace gases, there is a large, rapidly growing gap between current climate and the equilibrium climate for current atmospheric composition. Based on the climate
Additivity in the Analysis and Design of HIV Protease Inhibitors
Jorissen, Robert N.; Kiran Kumar Reddy, G. S.; Ali, Akbar; Altman, Michael D.; Chellappan, Sripriya; Anjum, Saima G.; Tidor, Bruce; Schiffer, Celia A.; Rana, Tariq M.; Gilson, Michael K.
2009-01-01
We explore the applicability of an additive treatment of substituent effects to the analysis and design of HIV protease inhibitors. Affinity data for a set of inhibitors with a common chemical framework were analyzed to provide estimates of the free energy contribution of each chemical substituent. These estimates were then used to design new inhibitors, whose high affinities were confirmed by synthesis and experimental testing. Derivations of additive models by least-squares and ridge-regression methods were found to yield statistically similar results. The additivity approach was also compared with standard molecular descriptor-based QSAR; the latter was not found to provide superior predictions. Crystallographic studies of HIV protease-inhibitor complexes help explain the perhaps surprisingly high degree of substituent additivity in this system, and allow some of the additivity coefficients to be rationalized on a structural basis. PMID:19193159
Kade H. Poper; Eric S. Collins; Michelle L. Pantoya; Michael Daniels
2014-10-01
Powder energetic materials are highly sensitive to electrostatic discharge (ESD) ignition. This study shows that small concentrations of carbon nanotubes (CNT) added to the highly reactive mixture of aluminum and copper oxide (Al + CuO) significantly reduce ESD ignition sensitivity. CNT act as a conduit for electric energy, bypassing energy buildup and desensitizing the mixture to ESD ignition. The lowest CNT concentration needed to desensitize ignition is 3.8 vol.%, the percolation threshold, corresponding to an electrical conductivity of 0.04 S/cm. Conversely, added CNT increased Al + CuO thermal ignition sensitivity to a hot-wire igniter.
Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.
ERIC Educational Resources Information Center
Raymond, Margaret; And Others
1983-01-01
Describes an experiment on the simultaneous determination of chromium and magnesium by spectophotometry modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
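The Generalized Standard Addition Method resolves overlapping responses by estimating the full sensitivity matrix from known additions, then inverting it. A two-analyte, two-channel sketch with hypothetical sensitivities (the true matrix is used only to simulate the measurements an analyst would record):

```python
# Two analytes measured on two channels with cross-sensitivity. The true
# sensitivity matrix K_true and initial concentrations c0 are hypothetical
# and serve only to simulate the recorded responses.
K_true = [[1.0, 0.3],
          [0.2, 0.8]]
c0 = [2.0, 1.0]

def respond(c):
    return [sum(K_true[i][j] * c[j] for j in range(2)) for i in range(2)]

r0 = respond(c0)  # responses of the unknown sample

# One unit standard addition per analyte; the response changes give K directly.
dr = []
for j in range(2):
    added = [c0[0] + (j == 0), c0[1] + (j == 1)]
    dr.append([ri - r0i for ri, r0i in zip(respond(added), r0)])
K_est = [[dr[j][i] for j in range(2)] for i in range(2)]

# Invert the estimated 2x2 sensitivity matrix to recover the concentrations.
det = K_est[0][0] * K_est[1][1] - K_est[0][1] * K_est[1][0]
c_est = [(K_est[1][1] * r0[0] - K_est[0][1] * r0[1]) / det,
         (K_est[0][0] * r0[1] - K_est[1][0] * r0[0]) / det]
```

Because the sensitivity matrix is estimated in the sample matrix itself, interference and matrix effects cancel, which is the method's advantage over separate calibration curves.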
Tilt-Sensitivity Analysis for Space Telescopes
NASA Technical Reports Server (NTRS)
Papalexandris, Miltiadis; Waluschka, Eugene
2003-01-01
A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
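The Monte Carlo methodology described, generating random imperfections and statistically analyzing the resulting performance loss, can be illustrated with random beam tilts. The Strehl-ratio approximation and every number below are illustrative assumptions, not values from the report:

```python
import math
import random

random.seed(2)

# Monte Carlo over random pointing (tilt) errors of a beam on a circular
# aperture. For a pure tilt theta, the RMS optical path error over an
# aperture of radius r is theta*r/2; the small-aberration Strehl estimate
# is exp(-(2*pi*rms/lam)**2). All numbers here are illustrative assumptions.
lam = 1.064e-6       # wavelength, m
r = 0.15             # aperture radius, m
sigma_tilt = 7.1e-7  # rms tilt fluctuation, rad

strehls = []
for _ in range(2000):
    theta = random.gauss(0.0, sigma_tilt)
    rms_opd = abs(theta) * r / 2.0
    strehls.append(math.exp(-(2.0 * math.pi * rms_opd / lam) ** 2))

mean_strehl = sum(strehls) / len(strehls)
```

The statistics of `strehls` (mean and variance of the performance loss) are the kind of functional relation between distortion size and performance that sets pointing tolerances.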
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously. The goal was also to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
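The wear-out study above can be sketched as a Weibull Monte Carlo simulation: the shape parameter beta > 1 models wear-out, and the output is the probability that the spares on hand suffice. All population numbers below are hypothetical, not ISS data:

```python
import random

random.seed(1)

# Hypothetical ORU population: time to failure ~ Weibull(scale, beta).
# beta = 1 is the intrinsic random-failure case; beta > 1 models wear-out.
scale_years = 10.0
n_units, n_spares, mission_years = 20, 15, 12.0
n_trials = 20000

def prob_sufficiency(beta):
    """Fraction of Monte Carlo trials in which failures do not exceed spares."""
    ok = 0
    for _ in range(n_trials):
        failures = sum(
            1 for _ in range(n_units)
            if random.weibullvariate(scale_years, beta) < mission_years)
        ok += failures <= n_spares
    return ok / n_trials

p_intrinsic = prob_sufficiency(1.0)  # no wear-out
p_wearout = prob_sufficiency(3.0)    # strong wear-out characteristic
```

Sweeping beta from the intrinsic value upward and watching the probability of sufficiency fall is the shift the project measured across ORU populations.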
Sensitivity analysis of volume scattering phase functions.
Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael
2016-08-01
To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m^{-3}. PMID:27505819
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
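The decoupled direct method integrates sensitivity equations alongside the state equations. A minimal forward-sensitivity sketch for a single first-order reaction (not LSENS itself, which handles stiff multi-reaction systems via LSODE; explicit Euler is used here only for brevity):

```python
import math

# State: y' = -k*y (first-order reaction). The sensitivity s = dy/dk obeys
# the companion ODE s' = -k*s - y with s(0) = 0, integrated alongside the
# state -- the idea behind direct-method sensitivity coefficients.
k, y0, dt, t_end = 2.0, 1.0, 1e-4, 1.0
y, s, t = y0, 0.0, 0.0
while t < t_end - 1e-12:
    # explicit Euler step on the augmented (state + sensitivity) system
    y, s = y + dt * (-k * y), s + dt * (-k * s - y)
    t += dt

# Analytic check: y = y0*exp(-k*t), so dy/dk = -t*y0*exp(-k*t)
y_exact = y0 * math.exp(-k * t_end)
s_exact = -t_end * y0 * math.exp(-k * t_end)
```

For stiff kinetics the same augmented system would be handed to an implicit solver; the sensitivity right-hand side reuses the state Jacobian, which is why the direct method is cheap.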
[Kinetic analysis of additive effect on desulfurization activity].
Han, Kui-hua; Zhao, Jian-li; Lu, Chun-mei; Wang, Yong-zheng; Zhao, Gai-ju; Cheng, Shi-qing
2006-02-01
The additive effects of Al2O3, Fe2O3 and MnCO3 on CaO sulfation kinetics were investigated by thermogravimetric analysis and a modified grain model. The activation energy (Ea) and the pre-exponential factor (k0) of the surface reaction, and the activation energy (Ep) and the pre-exponential factor (D0) of the product layer diffusion reaction, were calculated according to the model. Addition of MnCO3 can enhance the initial reaction rate, product layer diffusion and the final CaO conversion of sorbents; its effect mechanism is similar to that of Fe2O3. The method based on the isokinetic temperature Ts and activation energy cannot estimate the contribution of an additive to the sulfation reactivity; the rate constant of the surface reaction (k) and the effective diffusivity of the reactant in the product layer (Ds) under certain experimental conditions can reflect the effect of additives on the activation. Non-stoichiometric metal oxide may catalyze the surface reaction and promote the diffusivity of the reactant in the product layer through crystal defects and distinct diffusion of cations and anions. According to the mechanism and effect of an additive on sulfation, the effective temperature and the stoichiometric relation of the reaction, it is possible to improve the utilization of the sorbent by compounding more additives into the calcium-based sorbent.
Derivative based sensitivity analysis of gamma index.
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve - it looks at only the minimum gamma value, and if it is <1, then the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points on it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously compare poorly with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first- and second-order derivatives of the DDs (δD', δD") between these two curves were derived and used as the boundary values
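A one-dimensional gamma computation over an error-function penumbra, like the reference profile described above, can be sketched as a brute-force search over evaluated points; the 1%/1 mm criteria match the pass criteria quoted in the abstract, while the specific profile spacing and shift are illustrative:

```python
import math

def gamma_1d(ref, evalp, dta=1.0, dd=0.01):
    """1-D gamma index. ref and evalp are lists of (position_mm, dose);
    dta is in mm and dd is a fraction of the reference maximum dose."""
    dmax = max(d for _, d in ref)
    gammas = []
    for xr, dr in ref:
        # minimum over all evaluated points of the combined DD/DTA distance
        g = min(math.sqrt(((xe - xr) / dta) ** 2 +
                          ((de - dr) / (dd * dmax)) ** 2)
                for xe, de in evalp)
        gammas.append(g)
    return gammas

# Error-function penumbra as the reference profile (positions in mm)
ref = [(i * 0.1, 0.5 * (1.0 + math.erf(i * 0.1))) for i in range(-50, 51)]
# Evaluated profile: the same shape shifted by 0.5 mm
ev = [(x + 0.5, d) for x, d in ref]

gammas = gamma_1d(ref, ev)  # 1% / 1 mm criteria
pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
```

A 0.5 mm shift passes everywhere under a 1 mm DTA, which is exactly the insensitivity to the evaluated curve's shape (smooth versus sawtooth) that the authors' derivative-based check targets.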
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
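The step-size selection issue listed first in this review can be illustrated with a generic central difference: too large a step incurs truncation error, too small a step amplifies round-off. This is a textbook sketch, not code from the review.

```python
import math

def central_difference(f, x, h):
    """Central finite-difference estimate of f'(x). Truncation error
    grows as O(h^2) while round-off error grows as O(eps/h), so an
    intermediate step size minimizes the total error."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

For a smooth function such as sin, a moderate step like 1e-5 gives a far smaller error than a coarse step like 0.3, which is precisely the trade-off that finite-difference sensitivity methods must manage.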
ANALYSIS OF MPC ACCESS REQUIREMENTS FOR ADDITION OF FILLER MATERIALS
W. Wallin
1996-09-03
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) in response to a request received via a QAP-3-12 Design Input Data Request (Ref. 5.1) from WAST Design (formerly MRSMPC Design). The request is to provide: Specific MPC access requirements for the addition of filler materials at the MGDS (i.e., location and size of access required). The objective of this analysis is to provide a response to the foregoing request. The purpose of this analysis is to provide a documented record of the basis for the response. The response is stated in Section 8 herein. The response is based upon requirements from an MGDS perspective.
Extended forward sensitivity analysis of one-dimensional isothermal flow
Johnson, M.; Zhao, H.
2013-07-01
Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities for 1-D fluid flow equations typical of those found in system-level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the sensitivity to the time step relative to that of other physical parameters, the simulation can be run at optimized time steps without affecting confidence in the physical-parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification, at much lower computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
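The forward sensitivity idea (augmenting the state equations with equations for the derivatives with respect to a parameter) can be shown on a toy decay equation rather than the 1-D flow equations of the paper; the model, integrator, and step counts below are illustrative assumptions.

```python
def forward_sensitivity(p, u0, t_end, n_steps):
    """Forward sensitivity for the toy model du/dt = -p*u: integrate the
    state u and its sensitivity s = du/dp side by side, where the
    sensitivity equation ds/dt = -p*s - u is obtained by differentiating
    the state equation with respect to p. Explicit Euler time stepping."""
    dt = t_end / n_steps
    u, s = u0, 0.0
    for _ in range(n_steps):
        # tuple assignment: both updates use the values from the old step
        u, s = u + dt * (-p * u), s + dt * (-p * s - u)
    return u, s
```

The exact solution is u = u0*exp(-p*t) with sensitivity du/dp = -t*u0*exp(-p*t), so the computed pair can be checked against closed forms; in a system code the same augmentation is carried through the discretized flow equations.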
Attainability analysis in the stochastic sensitivity control
NASA Astrophysics Data System (ADS)
Bashkirtseva, Irina
2015-02-01
For nonlinear dynamic stochastic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity functions technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.
Implementation of efficient sensitivity analysis for optimization of large structures
NASA Technical Reports Server (NTRS)
Umaretiya, J. R.; Kamil, H.
1990-01-01
The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Frey, H Christopher
2002-06-01
This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11-12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to learn across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food-safety process risk models. The workshop included presentations regarding the Hazard Analysis and Critical Control Points (HACCP) framework used in food-safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Based on the sharing of interdisciplinary information represented by the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food-safety risk assessment? What sensitivity analysis methods are most promising for application to food-safety risk assessment? And what are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.
Trends in sensitivity analysis practice in the last decade.
Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano
2016-10-15
The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis of Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Although OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with a preference for regression and variance-based techniques, respectively. Even after adjusting for the growth of publications in the modelling field alone, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on only the articles published in chemical modelling, a field historically proficient in the use of SA methods. PMID:26934843
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Sensitivity analysis of Stirling engine design parameters
Naso, V.; Dong, W.; Lucentini, M.; Capata, R.
1998-07-01
In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle and dead volume ratio) have to be assumed; in practice it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations in these parameters.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Towards More Efficient and Effective Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin
2014-05-01
Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
Sensitivity Analysis of Situational Awareness Measures
NASA Technical Reports Server (NTRS)
Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)
2000-01-01
A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine the sensitivity not only within a measure, but also between the measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames
Analysis of the measurement sensitivity of multidimensional vibrating microprobes
NASA Astrophysics Data System (ADS)
van Riel, M. C. J. M.; Bos, E. J. C.; Homburg, F. G. A.
2014-07-01
A comparison is made between tactile and vibrating microprobes regarding the measurement of typical high aspect ratio microfeatures. It is found that vibrating probes enable the use of styli with higher aspect ratios than tactile probes and are still capable of measuring with high sensitivity. In addition to the one dimensional sensitivity, the directional measurement sensitivity of a vibrating probe is investigated. A vibrating microprobe can perform measurements with high sensitivity in a space spanned by its mode shapes. If the natural frequencies that correspond to these mode shapes are different, the probe shows anisotropic and sub-optimal measurement sensitivity. It is shown that the closer the natural frequencies of the probe are, the better its performance is when regarding optimal and isotropic measurement sensitivity. A novel proof-of-principle setup of a vibrating probe with two nearly equal natural frequencies is realized. This system is able to perform measurements with high and isotropic sensitivity.
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
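A minimal sampling-based estimator of a first-order sensitivity index (the fraction of output variance explained by one input) can illustrate what these methods compute. The binning estimator below is a crude stand-in for the Morris and variance-based estimators used in the study; the function name and bin count are assumptions.

```python
import numpy as np

def first_order_si(x, y, n_bins=20):
    """Crude first-order sensitivity index: bin the input x into
    quantile bins, compute the conditional mean of y in each bin, and
    return Var(E[y|x]) / Var(y), the fraction of output variance
    explained by x alone."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.bincount(idx, minlength=n_bins)
    var_cond = np.average((cond_means - y.mean()) ** 2, weights=counts)
    return var_cond / y.var()
```

For the additive model y = 4*x1 + x2 with independent uniform inputs, the exact index for x1 is 16/17; running the estimator at several sample sizes (or bootstrapping the sample, as the study does) shows how the estimate converges and how a screening threshold behaves.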
Modifying structure-sensitive reactions by addition of Zn to Pd
Childers, David J.; Schweitzer, Neil M.; Kamali Shahari, Seyed Mehdi; Rioux, Robert M.; Miller, Jeffrey T.; Meyer, Randall J.
2014-10-01
Silica-supported Pd and PdZn nanoparticles of a similar size were evaluated for neopentane hydrogenolysis/isomerization and propane hydrogenolysis/dehydrogenation. Monometallic Pd showed high neopentane hydrogenolysis selectivity. Addition of small amounts of Zn to Pd led to Pd–Zn scattering in the EXAFS spectrum and an increase in linearly bonded CO by IR. In addition, the neopentane turnover rate decreased by nearly 10 times with little change in the selectivity. Increasing amounts of Zn led to greater Pd–Zn interactions, higher linear-to-bridging CO ratios by IR, and complete loss of neopentane conversion. Pd NPs also had high selectivity for propane hydrogenolysis and thus were poorly selective for propylene. The PdZn bimetallic catalysts, however, were able to preferentially catalyze dehydrogenation, were not active for propane hydrogenolysis, and thus were highly selective for propylene formation. The decrease in hydrogenolysis selectivity was attributed to the isolation of active Pd atoms by inactive metallic Zn, demonstrating that hydrogenolysis requires a particular reactive ensemble whereas propane dehydrogenation does not.
Sensitivity analysis and optimization of thin-film thermoelectric coolers
NASA Astrophysics Data System (ADS)
Harsha Choday, Sri; Roy, Kaushik
2013-06-01
The cooling performance of a thermoelectric (TE) material is dependent on the figure-of-merit (ZT = S²σT/κ), where S is the Seebeck coefficient, and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter on the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than that of the highest ZT. We also establish the level of contact parasitics below which their impact on TE cooling is negligible.
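The figure of merit defined above is a one-line computation. The Bi2Te3-like parameter values in the check below are representative round numbers for illustration only, not values from the paper.

```python
def figure_of_merit(seebeck, sigma, kappa, temperature):
    """ZT = S^2 * sigma * T / kappa, with S in V/K, sigma in S/m,
    kappa in W/(m*K), and T in K; the combination is dimensionless."""
    return seebeck ** 2 * sigma * temperature / kappa
```

The relative sensitivities follow directly from the formula: a 1% change in S moves ZT by about 2% (it enters squared), while 1% changes in σ or κ move it by about 1%, which is the kind of asymmetry the paper's sensitivity analysis quantifies.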
NASA Astrophysics Data System (ADS)
Hedgpeth, A.; Beilman, D.; Crow, S. E.
2014-12-01
Arctic soil organic matter (SOM) mineralization processes are fundamental to the functioning of high-latitude soils in relation to nutrients, stability, and feedbacks to atmospheric CO2 and climate. The arctic permafrost zone covers 25% of the northern hemisphere and contains 1672 Pg of soil carbon (C); 88% of this C currently resides in frozen soils that are vulnerable to environmental change. For instance, arctic growing seasons may be lengthened, resulting in an increase in plant productivity and in the rate of below-ground labile C inputs as root exudates. Understanding controls on Arctic SOM dynamics requires recognition that labile C inputs have the potential to significantly affect mineralization of previously stable SOM, also known as 'priming effects'. We conducted a substrate-addition incubation experiment to quantify and compare respiration in highly organic (42-48 %C) permafrost soils along a north-south transect in western Canada. Near-surface soils (10-20 cm) were collected from permafrost peatland sites in the Mackenzie River Basin from 69.2-62.6°N. The surface soils are fairly young (Δ14C values > -140.0) and can be assumed to contain relatively reactive soil carbon. To assess whether addition of labile substrate alters SOM decomposition dynamics, 4.77-11.75 g of permafrost soil were spiked with 0.5 mg D-glucose g-1 soil and incubated at 5°C. A mass balance approach was used to determine substrate-induced respiration, and preliminary results suggest a potential for positive priming in these C-rich soils. Baseline respiration rates from the three sites were similar (0.067-0.263 mg CO2 g-1 soil C) yet show some site-specific trends. The rate at which added substrate was utilized within these soils suggests that other factors besides temperature and soil C content are controlling substrate consumption and its effect on SOM decomposition. Microbial activity can be stimulated by substrate addition to such an extent that SOM turnover is enhanced, suggesting that
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
Spectroscopic analysis and DFT calculations of a food additive Carmoisine
NASA Astrophysics Data System (ADS)
Snehalatha, M.; Ravikumar, C.; Hubert Joe, I.; Sekar, N.; Jayakumar, V. S.
2009-04-01
FT-IR and Raman techniques were employed for the vibrational characterization of the food additive Carmoisine (E122). The equilibrium geometry, various bonding features, and harmonic vibrational wavenumbers have been investigated with the help of density functional theory (DFT) calculations. A good correlation was found between the computed and experimental wavenumbers. Azo stretching wavenumbers have been lowered due to conjugation and π-electron delocalization. Predicted electronic absorption spectra from TD-DFT calculations have been analysed by comparison with the UV-vis spectrum. The first hyperpolarizability of the molecule is calculated. Intramolecular charge transfer (ICT), responsible for the optical nonlinearity of the dye molecule, has been discussed theoretically and experimentally. Stability of the molecule arising from hyperconjugative interactions, charge delocalization, and C-H⋯O improper blue-shifted hydrogen bonds has been analysed using natural bond orbital (NBO) analysis.
[Analysis of constituents in urushi wax, a natural food additive].
Jin, Zhe-Long; Tada, Atsuko; Sugimoto, Naoki; Sato, Kyoko; Masuda, Aino; Yamagata, Kazuo; Yamazaki, Takeshi; Tanamoto, Kenichi
2006-08-01
Urushi wax is a natural gum base used as a food additive. In order to evaluate the quality of urushi wax as a food additive and to obtain information useful for setting official standards, we investigated the constituents and their concentrations in urushi wax, using the same sample as scheduled for toxicity testing. After methanolysis of urushi wax, the composition of fatty acids was analyzed by GC/MS. The results indicated that the main fatty acids were palmitic acid, oleic acid and stearic acid. LC/MS analysis of urushi wax provided molecular-related ions of the main constituents. The main constituents were identified as triglycerides, namely glyceryl tripalmitate (30.7%), glyceryl dipalmitate monooleate (21.2%), glyceryl dioleate monopalmitate (2.1%), glyceryl monooleate monopalmitate monostearate (2.6%), glyceryl dipalmitate monostearate (5.6%), glyceryl distearate monopalmitate (1.4%). Glyceryl dipalmitate monooleate isomers differing in the binding sites of each constituent fatty acid could be separately determined by LC/MS/MS. PMID:16984037
Decreasing Cloudiness Over China: An Updated Analysis Examining Additional Variables
Kaiser, D.P.
2000-01-14
As preparation of the IPCC's Third Assessment Report takes place, one of the many observed climate variables of key interest is cloud amount. For several nations of the world, there exist records of surface-observed cloud amount dating back to the middle of the 20th Century or earlier, offering valuable information on variations and trends. Studies using such databases include Sun and Groisman (1999) and Kaiser and Razuvaev (1995) for the former Soviet Union, Angell et al. (1984) for the United States, Henderson-Sellers (1986) for Europe, Jones and Henderson-Sellers (1992) for Australia, and Kaiser (1998) for China. The findings of Kaiser (1998) differ from the other studies in that much of China appears to have experienced decreased cloudiness over recent decades (1954-1994), whereas the other land regions for the most part show evidence of increasing cloud cover. This paper expands on Kaiser (1998) by analyzing trends in additional meteorological variables for China [station pressure (p), water vapor pressure (e), and relative humidity (rh)] and extending the total cloud amount (N) analysis an additional two years (through 1996).
Partial Differential Algebraic Sensitivity Analysis Code
1995-05-15
PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed, with members of order 0 or 1 in t and order 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.
Chen, Chungwen; Bajpai, Lakshmikant; Mollova, Nevena; Leung, Kwan
2009-04-01
CVT-6883, a novel selective A(2B) adenosine receptor antagonist currently under clinical development, is highly lipophilic and exhibits high affinity for non-specific binding to container surfaces, resulting in very low recovery in urine assays. Our study showed that the use of sodium dodecylbenzenesulfonate (SDBS), a low-cost additive, eliminated non-specific binding problems in the analysis of CVT-6883 in human urine without compromising sensitivity. A new sensitive and selective LC-MS/MS method for quantitation of CVT-6883 in the range of 0.200-80.0 ng/mL using the SDBS additive was therefore developed and validated for the analysis of human urine samples. The recoveries during sample collection, handling and extraction for the analyte and internal standard (d(5)-CVT-6883) were higher than 87%. CVT-6883 was found stable under the following conditions: in extract, at ambient temperature for 3 days and under refrigeration (5 degrees C) for 6 days; in human urine (containing 4 mM SDBS), after three freeze/thaw cycles, at ambient temperature for 26 h, under refrigeration (5 degrees C) for 94 h, and in a freezer set to -20 degrees C for at least 2 months. The results demonstrated that the validated method is sufficiently sensitive, specific, and cost-effective for the analysis of CVT-6883 in human urine and will provide a powerful tool to support the clinical programs for CVT-6883.
Sensitivity analysis of limit cycles with application to the Brusselator
Larter, R.; Rabitz, H.; Kramer, M.
1984-05-01
Sensitivity analysis, by which it is possible to determine the dependence of the solution of a system of differential equations to variations in the parameters, is applied to systems which have a limit cycle solution in some region of parameter space. The resulting expressions for the sensitivity coefficients, which are the gradients of the limit cycle solution in parameter space, are analyzed by a Fourier series approach; the sensitivity coefficients are found to contain information on the sensitivity of the period and other features of the limit cycle. The intimate relationship between Lyapounov stability analysis and sensitivity analysis is discussed. The results of our general derivation are applied to two limit cycle oscillators: (1) an exactly soluble two-species oscillator and (2) the Brusselator.
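The sensitivity coefficients described above are obtained by differentiating the state equations with respect to the parameters and integrating the resulting system alongside the states. A minimal sketch of this forward-sensitivity idea, using a simple decay ODE rather than the Brusselator and a hand-rolled RK4 integrator (both illustrative assumptions, not from the paper):

```python
# Forward sensitivity analysis of an ODE: differentiate the state
# equation with respect to the parameter k to get an ODE for the
# sensitivity coefficient s = dy/dk, and integrate both together.
# Toy problem (assumed, simpler than the Brusselator):
#   dy/dt = -k*y, y(0) = 1   =>   ds/dt = -k*s - y, s(0) = 0
def rhs(t, z, k):
    y, s = z
    return (-k * y, -k * s - y)

def rk4(f, z0, t0, t1, n, k):
    # Classic 4th-order Runge-Kutta on the augmented (state, sensitivity) system.
    h = (t1 - t0) / n
    t, z = t0, list(z0)
    for _ in range(n):
        k1 = f(t, z, k)
        k2 = f(t + h / 2, [z[i] + h / 2 * k1[i] for i in range(2)], k)
        k3 = f(t + h / 2, [z[i] + h / 2 * k2[i] for i in range(2)], k)
        k4 = f(t + h, [z[i] + h * k3[i] for i in range(2)], k)
        z = [z[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]
        t += h
    return z

k, t_end = 0.5, 2.0
y_end, s_end = rk4(rhs, (1.0, 0.0), 0.0, t_end, 2000, k)
# Analytic check available here: y = exp(-k*t), so dy/dk = -t*exp(-k*t).
```

The same augmentation extends to limit-cycle systems, where the secular growth of the sensitivities carries the period information analyzed in the paper.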
Aero-Structural Interaction, Analysis, and Shape Sensitivity
NASA Technical Reports Server (NTRS)
Newman, James C., III
1999-01-01
A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
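The step-size independent technique referred to above is the complex-variable (complex-step) derivative: evaluating the function at a complex argument avoids the subtractive cancellation that plagues finite differences. A minimal sketch, with an assumed test function not taken from the paper:

```python
import cmath
import math

# Complex-step sensitivity derivative: f'(x) ~ Im(f(x + i*h)) / h.
# No subtraction of nearly equal quantities occurs, so h can be made
# extremely small (1e-30) with no loss of accuracy -- the "step-size
# independent" property exploited in the work above.
def complex_step(f, x, h=1e-30):
    return f(x + 1j * h).imag / h

f = lambda x: cmath.exp(x) * cmath.sin(x)            # assumed test function
exact = math.exp(1.2) * (math.sin(1.2) + math.cos(1.2))

d_cs = complex_step(f, 1.2)                          # accurate to machine precision
d_fd = (f(1.2 + 1e-8) - f(1.2)).real / 1e-8          # forward difference, for contrast
```

The forward-difference result carries both truncation and round-off error, while the complex-step result is accurate to machine precision regardless of h.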
Sensitivity Analysis of Boundary Value Problems: Application to Nonlinear Reaction-Diffusion Systems
NASA Astrophysics Data System (ADS)
Reuven, Yakir; Smooke, Mitchell D.; Rabitz, Herschel
1986-05-01
A direct and very efficient approach for obtaining sensitivities of two-point boundary value problems solved by Newton's method is studied. The link between the solution method and the sensitivity equations is investigated together with matters of numerical accuracy and efficiency. This approach is employed in the analysis of a model three species, unimolecular, steady-state, premixed laminar flame. The numerical accuracy of the sensitivities is verified and their values are utilized for interpretation of the model results. It is found that parameters associated directly with the temperature play a dominant role. The system's Green's functions relating dependent variables are also controlled strongly by the temperature. In addition, flame speed sensitivities are calculated and shown to be a special class of derived sensitivity coefficients. Finally, some suggestions for the physical interpretation of sensitivities in model analysis are given.
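The efficiency of the approach comes from reusing the Newton Jacobian: once F(u; p) = 0 is solved, the sensitivities du/dp follow from one extra linear solve with the already-assembled Jacobian, J (du/dp) = -dF/dp. A toy two-equation system (assumed purely for illustration) sketches the idea:

```python
import numpy as np

# After Newton's method converges on F(u; p) = 0, the same Jacobian
# J = dF/du yields parametric sensitivities from the linear solve
#   J @ (du/dp) = -dF/dp,
# so sensitivities cost one back-substitution, not a new nonlinear solve.
def solve_with_sensitivity(F, J, dFdp, u0, p, tol=1e-12):
    u = np.array(u0, dtype=float)
    for _ in range(50):                          # Newton iteration
        du = np.linalg.solve(J(u, p), -F(u, p))
        u = u + du
        if np.linalg.norm(du) < tol:
            break
    s = np.linalg.solve(J(u, p), -dFdp(u, p))    # sensitivity du/dp
    return u, s

# Toy system (assumed): u1^2 - p = 0,  u2 - u1 = 0.
F = lambda u, p: np.array([u[0] ** 2 - p, u[1] - u[0]])
J = lambda u, p: np.array([[2 * u[0], 0.0], [-1.0, 1.0]])
dFdp = lambda u, p: np.array([-1.0, 0.0])

u, s = solve_with_sensitivity(F, J, dFdp, [1.0, 1.0], p=4.0)
# Analytically u1 = sqrt(p) = 2, so du1/dp = 1/(2*sqrt(p)) = 0.25.
```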
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-10-02
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
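The NRMSE metric used above is the root-mean-squared error normalized by a reference scale; normalization conventions differ (observed range, mean, or plant capacity), so the range-based choice below is an assumption:

```python
import numpy as np

# NRMSE: root-mean-squared error normalized by a reference scale.
# Here the observed range is used; other conventions normalize by the
# mean or by plant capacity, so this choice is an assumption.
def nrmse(forecast, observed):
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / (observed.max() - observed.min())

obs = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # illustrative observations
fc  = np.array([1.0, 11.0, 19.0, 29.0, 41.0])   # illustrative forecasts
value = nrmse(fc, obs)                           # RMSE 1.0 over a range of 40
```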
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how variations in the inputs of a mathematical model influence the variability of its output. In this paper, we review the principles of global and local sensitivity analyses of a complex black-box system. A simulated application case is given at the end of this paper to compare both approaches.
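The distinction reviewed above can be sketched numerically: a local analysis differentiates the model at a single nominal point, while a global analysis apportions output variance over the whole input domain. The toy model and the binning estimator below are illustrative assumptions:

```python
import numpy as np

# Toy model (assumed): nearly all output variance comes from x2.
f = lambda x1, x2: x1 + 10.0 * x2 ** 2

# Local sensitivity: partial derivatives at a nominal point, by
# central finite differences.
h, x0 = 1e-6, (0.5, 0.5)
local_x1 = (f(x0[0] + h, x0[1]) - f(x0[0] - h, x0[1])) / (2 * h)
local_x2 = (f(x0[0], x0[1] + h) - f(x0[0], x0[1] - h)) / (2 * h)

# Global sensitivity: first-order variance-based indices
# Si = Var(E[Y|Xi]) / Var(Y), estimated by binning a Monte Carlo sample.
rng = np.random.default_rng(0)
x1 = rng.uniform(0.0, 1.0, 200_000)
x2 = rng.uniform(0.0, 1.0, 200_000)
y = f(x1, x2)

def first_order_index(x, y, bins=50):
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

S1, S2 = first_order_index(x1, y), first_order_index(x2, y)
```

Here the local derivatives (1 and 10) and the global indices agree that x2 dominates, but only the global indices quantify its share of the output variance over the full input range.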
Advanced Fuel Cycle Economic Sensitivity Analysis
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison. The analysis used the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. It developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Moghtaderi, Mozhgan; Hejrati, Zinatosadat; Dehghani, Zahra; Dehghani, Faranak; Kolahi, Niloofar
2016-06-01
There has been a great increase in the consumption of various food additives in recent years. The purpose of this study was to identify the incidence of sensitization to food additives by using skin prick test in patients with allergy and to determine the concordance rate between positive skin tests and oral challenge in hypersensitivity to additives. This cross-sectional study included 125 (female 71, male 54) patients aged 2-76 years with allergy and 100 healthy individuals. Skin tests were performed in both patient and control groups with 25 fresh food additives. Among patients with allergy, 22.4% showed positive skin test at least to one of the applied materials. Skin test was negative to all tested food additives in control group. Oral food challenge was done in 28 patients with positive skin test, in whom 9 patients showed reaction to culprit (Concordance rate=32.1%). The present study suggested that about one-third of allergic patients with positive reaction to food additives showed positive oral challenge; it may be considered the potential utility of skin test to identify the role of food additives in patients with allergy.
NASA Astrophysics Data System (ADS)
Afrooz, Malihe; Dehghani, Hossein
2014-09-01
In this study, we report the influence of a phosphate additive on the performance of dye-sensitized solar cells (DSSCs) based on 2-cyano-3-(4-(diphenylamino)phenyl)acrylic acid (TPA) as the sensitizer. DSSCs fabricated with tributyl phosphate (TBPP) as an additive in the electrolyte attain an efficiency of about 3.03% under standard air mass 1.5 global (AM 1.5G) simulated sunlight, a 35% increase compared to the standard liquid electrolyte. An improvement in both open-circuit voltage (Voc) and short-circuit current (Jsc) is obtained by adjusting the concentration of TBPP in the electrolyte; this is attributed to an enlarged energy difference between the Fermi level (EF) of TiO2 and the redox potential of the electrolyte, and to suppression of charge recombination from the conduction band (CB) of TiO2 to the oxidized ions in the redox electrolyte. Electrochemical impedance spectroscopy (EIS) analyses reveal a dramatic increase in the charge-transfer resistance at the dyed-TiO2/electrolyte interface and in the electron density in the CB of TiO2, indicating that the improvement in photoelectric conversion efficiency (η) with the TBPP additive results from efficient inhibition of recombination processes. This striking result suggests that a family of electron-donor compounds could be used as highly efficient additives.
Sensitivity Analysis in Complex Plasma Chemistry Models
NASA Astrophysics Data System (ADS)
Turner, Miles
2015-09-01
The purpose of a plasma chemistry model is prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large, a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: Which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure, the so-called Morris method, to identify sensitive rate constants. We investigate the topical example of the helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focusing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 ``Biomedical Applications of Atmospheric Pressure Plasmas.''
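The Morris screening procedure mentioned above ranks inputs by the mean absolute "elementary effect" of one-at-a-time perturbations from random base points. A minimal sketch with a toy three-parameter model (assumed, not the helium-oxygen chemistry):

```python
import numpy as np

# Morris screening (sketch): for each randomly chosen base point, perturb
# one input at a time by delta and record the "elementary effect".
# Inputs with a large mean |EE| (the mu* statistic) are the influential
# ones to refine first.
def morris_screen(f, dim, n_traj=200, delta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, dim))
    for r in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, dim)   # keep x + delta inside [0, 1]
        y0 = f(x)
        for i in range(dim):
            xp = x.copy()
            xp[i] += delta
            effects[r, i] = (f(xp) - y0) / delta
    return np.abs(effects).mean(axis=0)          # mu* per input

# Toy "rate constant" model (assumed): only the first two inputs matter.
f = lambda x: 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.01 * x[2]
mu_star = morris_screen(f, dim=3)
```

With only a few model evaluations per parameter, the screen separates the two influential inputs from the negligible one, mirroring how fifty rate constants were isolated from four hundred reactions.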
Selecting step sizes in sensitivity analysis by finite differences
NASA Technical Reports Server (NTRS)
Iott, J.; Haftka, R. T.; Adelman, H. M.
1985-01-01
This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
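The trade-off that makes step-size selection hard is visible even for a scalar function: truncation error falls as the step h shrinks, while round-off error grows. A small illustration of this trade-off (not the FD algorithm of the paper):

```python
import math

# Forward-difference derivative error vs. step size: truncation error
# scales like h, round-off error like eps/h, so a near-optimal step for
# forward differences is h ~ sqrt(machine epsilon).
f, dfdx = math.exp, math.exp           # f'(x) = f(x) for exp
x, eps = 1.0, 2.2e-16

errors = {}
for h in (1e-2, 1e-8, 1e-13):
    fd = (f(x + h) - f(x)) / h
    errors[h] = abs(fd - dfdx(x))

h_opt = math.sqrt(eps)                 # ~1.5e-8, the classic rule of thumb
```

Too large a step (1e-2) is dominated by truncation error and too small a step (1e-13) by round-off, which is why automated step selection pays off when derivatives must be computed routinely.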
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
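The "at most one extra simulation" property of adjoint methods can be sketched on a generic discrete linear model: a single adjoint solve yields sensitivities with respect to any number of parameters. The small system below is an illustrative assumption, not an FDTD model:

```python
import numpy as np

# Adjoint sensitivity sketch for a discrete model A(p) u = b with
# response J = c^T u: one adjoint solve with A^T gives dJ/dp for every
# parameter p, mirroring the "one extra simulation" cost above.
def adjoint_sensitivities(A, dA_dp_list, b, c):
    u = np.linalg.solve(A, b)              # forward solve (the "simulation")
    lam = np.linalg.solve(A.T, c)          # single adjoint solve
    # dJ/dp_k = -lam^T (dA/dp_k) u  (b and c assumed parameter-independent)
    return np.array([-lam @ (dA @ u) for dA in dA_dp_list])

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                 # illustrative system matrix
b = np.array([1.0, 1.0])
c = np.array([1.0, 0.0])                   # response J = u[0]
dA_dp = [np.array([[1.0, 0.0],
                   [0.0, 0.0]])]           # A depends on p via A[0,0] = p

grad = adjoint_sensitivities(A, dA_dp, b, c)
# Here u[0] = (2/3)/p, so dJ/dp at p = 2 is -(2/3)/p^2 = -1/6.
```

Adding more parameters lengthens only the cheap final loop, not the number of solves, which is the advantage over central finite differences noted in the abstract.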
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Mathematical Modeling and Sensitivity Analysis of Acid Deposition
NASA Astrophysics Data System (ADS)
Cho, Seog-Yeon
Atmospheric processes influencing acid deposition are investigated using a mathematical model and sensitivity analysis. Sensitivity analysis techniques, including Green's function analysis, constraint sensitivities, and lumped sensitivities, are applied to temporal problems describing gas- and liquid-phase chemistry and to space-time problems describing pollutant transport and deposition. The sensitivity analysis techniques are used to (1) investigate the chemical and physical processes related to acid deposition and (2) evaluate the linearity hypothesis and source-receptor relationships. Results from analysis of the chemistry processes show that the relationship between SO2 concentration and the amount of sulfate produced is linear in the gas phase, but may be nonlinear in the liquid phase when there is an excess of SO2 relative to H2O2. Under the simulated conditions, the deviation from linearity between the ambient sulfur present and the amount of sulfur deposited after 2 hours is less than 10% in a convective storm situation when the liquid-phase chemistry, gas-phase chemistry, and cloud processes are considered simultaneously. Efficient methods for the sensitivity analysis of time-space problems are also developed and used to evaluate source-receptor relationships in an Eulerian transport, chemistry, and removal model.
Kim, JaeHwang; Hayashi, Minoru; Kobayashi, Equo; Sato, Tatsuo
2016-02-01
Age-hardening is enhanced at high cooling rates because more vacancies are formed during quenching, whereas slow cooling just after solid-solution treatment forms the stable beta phase, resulting in a smaller hardness increase during aging. Meanwhile, nanoclusters are formed during natural aging in Al-Mg-Si alloys, and their formation is enhanced with increasing Si content. High quench sensitivity, assessed from mechanical property changes, was confirmed with increasing Si content. Moreover, more of the nano-sized beta'' phase, the main hardening phase, is formed with Si addition, enhancing age-hardening. The quench sensitivity and the formation behavior of precipitates are discussed in terms of these age-hardening phenomena. PMID:27433677
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M
2013-11-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.
Nonparametric survival analysis using Bayesian Additive Regression Trees (BART).
Sparapani, Rodney A; Logan, Brent R; McCulloch, Robert E; Laud, Purushottam W
2016-07-20
Bayesian additive regression trees (BART) provide a framework for flexible nonparametric modeling of relationships of covariates to outcomes. Recently, BART models have been shown to provide excellent predictive performance for both continuous and binary outcomes, exceeding that of competing methods. Software is also readily available for such outcomes. In this article, we introduce modeling that extends the usefulness of BART in medical applications by addressing needs arising in survival analysis. Simulation studies of one-sample and two-sample scenarios, in comparison with long-standing traditional methods, establish face validity of the new approach. We then demonstrate the model's ability to accommodate data from complex regression models with a simulation study of a nonproportional hazards scenario with crossing survival functions, and survival function estimation in a scenario where hazards are multiplicatively modified by a highly nonlinear function of the covariates. Using data from a recently published study of patients undergoing hematopoietic stem cell transplantation, we illustrate the use and some advantages of the proposed method in medical investigations. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26854022
Seismoelectric beamforming imaging: a sensitivity analysis
NASA Astrophysics Data System (ADS)
El Khoury, P.; Revil, A.; Sava, P.
2015-06-01
The electrical current density generated by the propagation of a seismic wave at the interface characterized by a drop in electrical, hydraulic or mechanical properties produces an electrical field of electrokinetic nature. This field can be measured remotely with a signal-to-noise ratio depending on the background noise and signal attenuation. The seismoelectric beamforming approach is an emerging imaging technique based on scanning a porous material using appropriately delayed seismic sources. The idea is to focus the hydromechanical energy on a regular spatial grid and measure the converted electric field remotely at each focus time. This method can be used to image heterogeneities with a high definition and to provide structural information to classical geophysical methods. A numerical experiment is performed to investigate the resolution of the seismoelectric beamforming approach with respect to the main wavelength of the seismic waves. The 2-D model consists of a fictitious water-filled bucket in which a cylindrical sandstone core sample is set up vertically. The hydrophones/seismic sources are located on a 50-cm diameter circle in the bucket and the seismic energy is focused on the grid points in order to scan the medium and determine the geometry of the porous plug using the output electric potential image. We observe that the resolution of the method is given by a density of eight scanning points per wavelength. Additional numerical tests were also performed to see the impact of a wrong velocity model upon the seismoelectric map displaying the heterogeneities of the material.
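The focusing step of such beamforming reduces to delay computation: each source fires early enough that all wavefronts coincide at the scan point, and the converted electric field is read off at that focus time. The homogeneous velocity and circular source layout below are illustrative assumptions:

```python
import math

# Beamforming focusing (sketch): fire each source with a delay chosen so
# that all wavefronts arrive at the scan point simultaneously. v is an
# assumed homogeneous velocity; a wrong velocity model defocuses the image,
# as explored in the abstract's final tests.
def focusing_delays(sources, focus, v):
    travel = [math.dist(s, focus) / v for s in sources]
    t_max = max(travel)
    return [t_max - t for t in travel]   # earliest-firing source has delay 0

# Sources on a 50 cm diameter circle, as in the numerical experiment.
n, r, v = 8, 0.25, 1500.0                # v in m/s is an assumed value
sources = [(r * math.cos(2 * math.pi * i / n),
            r * math.sin(2 * math.pi * i / n)) for i in range(n)]
focus = (0.05, 0.0)                      # one grid point being scanned
delays = focusing_delays(sources, focus, v)

# Delayed arrival times should coincide at the focus for every source.
arrivals = [d + math.dist(s, focus) / v for d, s in zip(delays, sources)]
```

Scanning the grid point by point with such delays, and recording the electric potential at each focus time, builds the seismoelectric image whose resolution is studied above.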
FOCUS - An experimental environment for fault sensitivity analysis
NASA Technical Reports Server (NTRS)
Choi, Gwan S.; Iyer, Ravishankar K.
1992-01-01
FOCUS, a simulation environment for conducting fault-sensitivity analysis of chip-level designs, is described. The environment can be used to evaluate alternative design tactics at an early design stage. A range of user specified faults is automatically injected at runtime, and their propagation to the chip I/O pins is measured through the gate and higher levels. A number of techniques for fault-sensitivity analysis are proposed and implemented in the FOCUS environment. These include transient impact assessment on latch, pin and functional errors, external pin error distribution due to in-chip transients, charge-level sensitivity analysis, and error propagation models to depict the dynamic behavior of latch errors. A case study of the impact of transient faults on a microprocessor-based jet-engine controller is used to identify the critical fault propagation paths, the module most sensitive to fault propagation, and the module with the highest potential for causing external errors.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
A near-infrared spectroscopic study of young field ultracool dwarfs: additional analysis
NASA Astrophysics Data System (ADS)
Allers, K. N.; Liu, M. C.
We present additional analysis of the classification system presented in Allers & Liu (2013), to which we refer the reader for a detailed discussion of our near-IR spectral type and gravity classification system. Here, we address questions and comments from participants of the Brown Dwarfs Come of Age meeting. In particular, we examine the effects of binarity and metallicity on our classification system. We also present our classification of Pleiades brown dwarfs using published spectra. Lastly, we determine spectral types (SpTs) and calculate gravity-sensitive indices for the BT-Settl atmospheric models and compare them to observations.
Precessing rotating flows with additional shear: Stability analysis
NASA Astrophysics Data System (ADS)
Salhi, A.; Cambon, C.
2009-03-01
We consider unbounded precessing rotating flows in which vertical or horizontal shear is induced by the interaction between the solid-body rotation (with angular velocity Ω0 ) and the additional “precessing” Coriolis force (with angular velocity -ɛΩ0 ), normal to it. A “weak” shear flow, with rate 2ɛ of the same order of the Poincaré “small” ratio ɛ , is needed for balancing the gyroscopic torque, so that the whole flow satisfies Euler’s equations in the precessing frame (the so-called admissibility conditions). The base flow case with vertical shear (its cross-gradient direction is aligned with the main angular velocity) corresponds to Mahalov’s [Phys. Fluids A 5, 891 (1993)] precessing infinite cylinder base flow (ignoring boundary conditions), while the base flow case with horizontal shear (its cross-gradient direction is normal to both main and precessing angular velocities) corresponds to the unbounded precessing rotating shear flow considered by Kerswell [Geophys. Astrophys. Fluid Dyn. 72, 107 (1993)]. We show that both these base flows satisfy the admissibility conditions and can support disturbances in terms of advected Fourier modes. Because the admissibility conditions cannot select one case with respect to the other, a more physical derivation is sought: Both flows are deduced from Poincaré’s [Bull. Astron. 27, 321 (1910)] basic state of a precessing spheroidal container, in the limit of small ɛ . A Rapid distortion theory (RDT) type of stability analysis is then performed for the previously mentioned disturbances, for both base flows. The stability analysis of the Kerswell base flow, using Floquet’s theory, is recovered, and its counterpart for the Mahalov base flow is presented. Typical growth rates are found to be the same for both flows at very small ɛ , but significant differences are obtained regarding growth rates and widths of instability bands, if larger ɛ values, up to 0.2, are considered. Finally, both flow cases
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational-method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As a stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution is possible only if the computational domain of the costate equations is transformed to take into account their reverse-flow nature. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
Robust global sensitivity analysis of a river management model
NASA Astrophysics Data System (ADS)
Peeters, L. J. M.; Podger, G. M.; Smith, T.; Pickett, T.; Bark, R.; Cuddy, S. M.
2014-03-01
The simulation of routing and distribution of water through a regulated river system with a river management model quickly results in complex and non-linear model behaviour. A robust sensitivity analysis increases the transparency of the model and provides both the modeller and the system manager with better understanding and insight into how the model simulates reality and management operations. In this study, a robust, density-based sensitivity analysis, developed by Plischke et al. (2013), is applied to an eWater Source river management model. The sensitivity analysis is extended to account not only for main effects but also for interaction effects, and is able to identify major linear effects as well as subtle minor and non-linear effects. The case study is an idealised river management model representing typical conditions of the Southern Murray-Darling Basin in Australia, for which the sensitivity of a variety of model outcomes to variations in the driving forces (inflow to the system, rainfall and potential evapotranspiration) is examined. The model outcomes are most sensitive to the inflow to the system, but the sensitivity analysis also identified minor effects of potential evapotranspiration as well as non-linear interaction effects between inflow and potential evapotranspiration.
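A density-based sensitivity index of the kind applied above can be sketched as follows. This is a simplified illustration (equal-count binning of one input, Kolmogorov distance between conditional and unconditional output distributions), not the exact estimator of Plischke et al. or the eWater Source model; the function name and toy model are invented for the example.

```python
import numpy as np

def density_sensitivity(x, y, bins=10):
    """Density-based sensitivity sketch: average Kolmogorov distance
    between the conditional output ECDF (given x in a quantile bin)
    and the unconditional output ECDF. Larger = more influential x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    grid = np.sort(y)
    F = np.arange(1, len(y) + 1) / len(y)  # unconditional ECDF on grid
    dist = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = y[(x >= lo) & (x <= hi)]
        if len(sel) == 0:
            continue
        Fc = np.searchsorted(np.sort(sel), grid, side="right") / len(sel)
        dist.append(np.max(np.abs(Fc - F)))
    return float(np.mean(dist))

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(size=(2, 2000))
y = x1 ** 2  # output depends only on x1
print(density_sensitivity(x1, y) > density_sensitivity(x2, y))
```

Because the index compares whole distributions rather than variances, it also flags non-linear and interaction-driven influence, which is what makes this family of methods "robust" in the sense used above.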
Sensitivity and Uncertainty Analysis of the keff for VHTR fuel
NASA Astrophysics Data System (ADS)
Han, Tae Young; Lee, Hyun Chul; Noh, Jae Man
2014-06-01
For the uncertainty and sensitivity analysis of PMR200, designed as a VHTR at KAERI, MUSAD was implemented based on the deterministic method in connection with the DeCART/CAPP code system. The sensitivity of the multiplication factor was derived using classical perturbation theory, and the sensitivity coefficients for the individual cross sections were obtained by the adjoint method within the framework of the transport equation. The uncertainty of the multiplication factor was then calculated from the product of the covariance matrix and the sensitivity. To verify the implemented code, uncertainty analyses of the GODIVA benchmark and a PMR200 pin-cell problem were carried out and the results were compared with those of the reference codes, TSUNAMI and McCARD. The results are in good agreement, except for the uncertainty due to the scattering cross section, which was calculated using different scattering moments.
Hybrid Additive Manufacturing Technologies - An Analysis Regarding Potentials and Applications
NASA Astrophysics Data System (ADS)
Merklein, Marion; Junker, Daniel; Schaub, Adam; Neubauer, Franziska
With the industrial trend toward mass customization of lightweight construction, conventional manufacturing processes such as forming and machining are pushed to their economic limits. More flexible processes are needed, and additive manufacturing technology provides them. This toolless production principle offers high geometrical freedom and optimized material utilization, so load-adapted lightweight components can be produced economically in small lot sizes. To compensate for disadvantages such as inadequate accuracy and surface roughness, hybrid machines combining additive and subtractive manufacturing are being developed. This paper summarizes the principles of the most widely used additive manufacturing processes for metals and their suitability for integration into a hybrid production machine. The integration of deposition processes into a CNC milling center, in particular, holds high potential for manufacturing larger parts with high accuracy. Furthermore, the combination of additive and subtractive manufacturing allows the production of ready-to-use products within a single machine. Current research on integrating additive manufacturing processes into the production chain is also analyzed. Given the long manufacturing times of additive processes, combination with conventional manufacturing processes such as sheet or bulk metal forming appears to be an effective solution: large volumes can be produced conventionally, and active elements can then be applied by additive manufacturing in an additional production step. This principle is also being investigated for tool production, to reduce machining of the high-strength material used for forming tools. The aim is the addition of active elements onto a geometrically simple base using Laser Metal Deposition, a process that allows the utilization of several powder materials during one process
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The "partial" part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
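The PRCC computation described above (rank-transform inputs and output, adjust each input for the others via linear regression on the ranks, then correlate the residuals) can be sketched as follows. The function name and toy model below are illustrative only; they are not the IMM inputs or outputs.

```python
import numpy as np

def prcc(X, y):
    """Partial Rank Correlation Coefficient of each column of X with y,
    adjusting for the linear (rank) effects of the remaining columns."""
    n, k = X.shape
    # Rank-transform inputs and output (0..n-1 ranks).
    R = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    r = np.argsort(np.argsort(y)).astype(float)
    out = np.empty(k)
    for j in range(k):
        # Regress out the other inputs' rank effects (with intercept).
        others = np.column_stack([np.ones(n), np.delete(R, j, axis=1)])
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = r - others @ np.linalg.lstsq(others, r, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 5 * X[:, 0] + X[:, 1] ** 2  # column 2 is irrelevant
print(np.round(prcc(X, y), 2))
```

Because only ranks enter the calculation, the monotone-but-nonlinear effect of the second input is detected just as readily as the linear first one, which is the property the abstract relies on for nonlinear models.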
Design sensitivity analysis of mechanical systems in frequency domain
NASA Astrophysics Data System (ADS)
Nalecz, A. G.; Wicher, J.
1988-02-01
A procedure for determining the sensitivity functions of mechanical systems in the frequency domain by use of a vector-matrix approach is presented. Two examples, one for a ground vehicle passive front suspension, and the second for a vehicle active suspension, illustrate the practical applications of parametric sensitivity analysis for redesign and modification of mechanical systems. The sensitivity functions depend on the frequency of the system's oscillations. They can be easily related to the system's frequency characteristics which describe the dynamic properties of the system.
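A frequency-domain sensitivity function of the kind described above can be illustrated on a single-degree-of-freedom oscillator. The finite-difference perturbation and parameter values below are assumptions for illustration, not the paper's vector-matrix formulation for suspension models.

```python
import numpy as np

def frf(m, c, k, w):
    """Complex frequency response of a force-excited mass-spring-damper."""
    return 1.0 / (-m * w**2 + 1j * c * w + k)

def sensitivity_fn(param, w, m=1.0, c=0.2, k=1.0, h=1e-6):
    """Finite-difference sensitivity function d|H(jw)|/d(param),
    evaluated across the whole frequency axis."""
    base = {"m": m, "c": c, "k": k}
    bumped = dict(base)
    bumped[param] += h
    return (np.abs(frf(w=w, **bumped)) - np.abs(frf(w=w, **base))) / h

w = np.linspace(0.1, 3.0, 300)
s = sensitivity_fn("c", w)
# Damping sensitivity is largest (and negative) near resonance, w ~ 1 rad/s,
# mirroring the paper's point that sensitivities track frequency characteristics.
print(round(float(w[np.argmax(np.abs(s))]), 1))
```

The sign and location of the peak tell the redesign story directly: adding damping mainly lowers the resonance peak and has little effect far from it.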
Wang, Youping; Sonntag, Karin; Rudloff, Eicke; Wehling, Peter; Snowdon, Rod J
2006-02-01
Two Brassica napus-Crambe abyssinica monosomic addition lines (2n=39, AACC plus a single chromosome from C. abyssinica) were obtained from the F(2) progeny of the asymmetric somatic hybrid. The alien chromosome from C. abyssinica in the addition line was clearly distinguished by genomic in situ hybridization (GISH). Twenty-seven microspore-derived plants from the addition lines were obtained. Fourteen seedlings were determined to be diploid plants (2n=38) arising from spontaneous chromosome doubling, while 13 seedlings were confirmed as haploid plants. Doubled haploid plants produced after treatment with colchicine and two disomic chromosome addition lines (2n=40, AACC plus a single pair of homologous chromosomes from C. abyssinica) could again be identified by GISH analysis. The lines are potentially useful for molecular genetic analysis of novel C. abyssinica genes or alleles contributing to traits relevant for oilseed rape (B. napus) breeding.
He, Feng-Peng; Wang, Wei
2016-01-01
The response of microbial respiration from soil organic carbon (SOC) decomposition to environmental changes plays a key role in predicting future trends of atmospheric CO2 concentration. However, it remains uncertain whether there is a universal trend in the response of microbial respiration to increased temperature and nutrient addition among different vegetation types. In this study, soils were sampled in spring, summer, autumn and winter from five dominant vegetation types, including pine, larch and birch forest, shrubland, and grassland, in the Saihanba area of northern China. Soil samples from each season were incubated at 1, 10, and 20°C for 5 to 7 days. Nitrogen (N; 0.035 mM as NH4NO3) and phosphorus (P; 0.03 mM as P2O5) were added to soil samples, and the responses of soil microbial respiration to increased temperature and nutrient addition were determined. We found a universal trend that soil microbial respiration increased with increased temperature regardless of sampling season or vegetation type. The temperature sensitivity (indicated by Q10, the increase in respiration rate with a 10°C increase in temperature) of microbial respiration was higher in spring and autumn than in summer and winter, irrespective of vegetation type. The Q10 was significantly positively correlated with microbial biomass and the fungal: bacterial ratio. Microbial respiration (or Q10) did not significantly respond to N or P addition. Our results suggest that short-term nutrient input might not change the SOC decomposition rate or its temperature sensitivity, whereas increased temperature might significantly enhance SOC decomposition in spring and autumn, compared with winter and summer. PMID:27070782
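The Q10 coefficient defined above (the factor by which respiration increases for a 10°C rise in temperature) reduces to a one-line formula. The rates and temperatures below are illustrative values, not data from the study.

```python
def q10(rate_low, rate_high, t_low, t_high):
    """Temperature sensitivity (Q10): the factor by which the rate
    would increase per 10 degree C rise, inferred from two
    incubation temperatures."""
    return (rate_high / rate_low) ** (10.0 / (t_high - t_low))

# Illustrative: if respiration doubles between 10 and 20 C, Q10 = 2.
print(q10(1.0, 2.0, 10.0, 20.0))  # -> 2.0
```

With the study's three incubation temperatures (1, 10 and 20°C), the same formula can be applied to any pair, which is why Q10 can be compared across seasons and vegetation types.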
Imaging system sensitivity analysis with NV-IPM
NASA Astrophysics Data System (ADS)
Fanning, Jonathan; Teaney, Brian
2014-05-01
This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems whose multiple objectives often conflict. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ε-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
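The Morris screening step used above can be sketched with one-at-a-time elementary effects: perturb each factor in turn, record the scaled response change, and summarize the mean absolute effect (mu*) and its spread (sigma). The test function and the simplified trajectory scheme below are illustrations, not the MOBIDIC setup.

```python
import numpy as np

def morris_screening(f, k, r=20, delta=0.1, seed=0):
    """One-at-a-time elementary effects (Morris method sketch).
    mu_star (mean |effect|) flags influential factors;
    sigma flags nonlinearity and interactions."""
    rng = np.random.default_rng(seed)
    ee = np.empty((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # keep x + delta in [0, 1]
        fx = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Illustrative model: linear in x0, nonlinear in x1, independent of x2.
mu_star, sigma = morris_screening(lambda x: 3 * x[0] + x[1] ** 2, k=3)
print(np.round(mu_star, 2))
```

Factors with mu* near zero (like the third one here) are the candidates for exclusion from calibration, exactly the role the screening plays in the study above before the expensive MOO step.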
Nonparametric Bounds and Sensitivity Analysis of Treatment Effects
Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.
2015-01-01
This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
Sensitivity analysis approach to multibody systems described by natural coordinates
NASA Astrophysics Data System (ADS)
Li, Xiufeng; Wang, Yabin
2014-03-01
The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders automated modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements, as well as the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first-order direct sensitivity analysis and the related solving strategy are provided based on the modeling system described above. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful in reducing the complexity of sensitivity analysis, providing a practical and effective way to obtain sensitivities for the optimization of multibody systems.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities by executing global sensitivity analyses using multi-parametric sensitivity analysis, the partial rank correlation coefficient, Sobol's method, and the weighted average of local sensitivity analyses, in addition to handling systems with discontinuous events and providing an intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
Sensitivity analysis of dynamic biological systems with time-delays
2010-01-01
Background Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Conclusions By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex
Sensitivity analysis for handling uncertainty in an economic evaluation.
Limwattananon, Supon
2014-05-01
To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which are best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
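The variogram view of sensitivity underlying VARS can be sketched as a directional variogram of the model response: half the expected squared response change for a perturbation of size h along one factor. The sampling below is a brute-force simplification, not the STAR-VARS star-based scheme, and the test function is invented for the example.

```python
import numpy as np

def directional_variogram(f, dim, axis, lags, n=2000, seed=0):
    """Directional variogram of a model response (VARS-style sketch):
    gamma(h) = 0.5 * E[(f(x + h * e_axis) - f(x))^2],
    estimated by Monte Carlo over the unit hypercube."""
    rng = np.random.default_rng(seed)
    gamma = []
    for h in lags:
        x = rng.uniform(0.0, 1.0 - h, size=(n, dim))  # keep x + h in [0, 1]
        xp = x.copy()
        xp[:, axis] += h
        d = np.apply_along_axis(f, 1, xp) - np.apply_along_axis(f, 1, x)
        gamma.append(0.5 * float(np.mean(d ** 2)))
    return np.array(gamma)

# Illustrative response: linear in x0, oscillatory in x1.
f = lambda x: 4 * x[0] + np.sin(2 * np.pi * x[1])
print(np.round(directional_variogram(f, dim=2, axis=0, lags=[0.1, 0.3]), 2))
```

Reading gamma as a function of the lag h is what distinguishes this framework: the small-h slope recovers derivative-based (Morris-like) information, while the large-h level recovers variance-based (Sobol-like) information, consistent with the special-case result stated in the abstract.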
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
Martinuzzo, M.; Barrera, L.; Altuna, D.; Baña, F. Tisi; Bieti, J.; Amigo, Q.; D’Adamo, M.; López, M.S.; Oyhamburu, J.; Otaso, J.C.
2016-01-01
Background Homozygous or double heterozygous factor XIII (FXIII) deficiency is characterized by soft tissue hematomas, intracranial and delayed spontaneous bleeding. Alterations of thromboelastography (TEG) parameters in these patients have been reported. The aim of the study was to show results of TEG, TEG Lysis (Lys 60) induced by subthreshold concentrations of streptokinase (SK), and to compare them to the clot solubility studies results in samples of a 1-year-old girl with homozygous or double heterozygous FXIII deficiency. Case A one-year-old girl with a history of bleeding from the umbilical cord. During her first year of life, several hematomas appeared in soft upper limb tissue after punctures for vaccination, along with a gluteal hematoma. One additional sample of a heterozygous patient and three samples of acquired FXIII deficiency were also evaluated. Materials and Methods Clotting tests, von Willebrand factor (vWF) antigen and activity, and plasma FXIII-A subunit (pFXIII-A) were measured by an immunoturbidimetric assay in a photo-optical coagulometer. Solubility tests were performed with Ca2+-5 M urea and thrombin-2% acetic acid. Basal and post-FXIII concentrate infusion samples were studied. TEG was performed with CaCl2 or CaCl2 + SK (3.2 U/mL) in a Thromboelastograph. Results Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, fibrinogen, factor VIIIc, vWF, and platelet aggregation were normal. Antigenic pFXIII-A subunit was < 2%. TEG, evaluated at diagnosis and post FXIII concentrate infusion (pFXIII-A = 37%), presented a normal reaction time (R), 8 min, a prolonged k (14 and 11 min, respectively), a low Maximum Amplitude (MA) (39 and 52 mm, respectively), and slightly increased Clot Lysis (Lys60) (23 and 30%, respectively). In the sample at diagnosis, clot solubility was abnormal, 50 and 45 min with Ca-Urea and thrombin-acetic acid, respectively, but normal (>16 hours) 1 day post-FXIII infusion. Analysis of FXIII deficient and normal
Martinuzzo, M.; Barrera, L.; Altuna, D.; Baña, F. Tisi; Bieti, J.; Amigo, Q.; D’Adamo, M.; López, M.S.; Oyhamburu, J.; Otaso, J.C.
2016-01-01
Background: Homozygous or double heterozygous factor XIII (FXIII) deficiency is characterized by soft tissue hematomas, intracranial and delayed spontaneous bleeding. Alterations of thromboelastography (TEG) parameters in these patients have been reported. The aim of the study was to show results of TEG, TEG lysis (Lys 60) induced by subthreshold concentrations of streptokinase (SK), and to compare them to clot solubility study results in samples from a 1-year-old girl with homozygous or double heterozygous FXIII deficiency. Case: A 1-year-old girl with a history of bleeding from the umbilical cord. During her first year of life, several hematomas appeared in soft upper limb tissue after punctures for vaccination, as well as a gluteal hematoma. One additional sample from a heterozygous patient and three samples of acquired FXIII deficiency were also evaluated. Materials and Methods: Clotting tests, von Willebrand factor (vWF) antigen and activity, and plasma FXIII-A subunit (pFXIII-A) were measured by an immunoturbidimetric assay in a photo-optical coagulometer. Solubility tests were performed with Ca2+-5 M urea and thrombin-2% acetic acid. Basal and post-FXIII concentrate infusion samples were studied. TEG was performed with CaCl2 or CaCl2 + SK (3.2 U/mL) in a Thromboelastograph. Results: Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, fibrinogen, factor VIIIc, vWF, and platelet aggregation were normal. The antigenic pFXIII-A subunit was < 2%. TEG, evaluated at diagnosis and post-FXIII concentrate infusion (pFXIII-A = 37%), presented a normal reaction time (R) of 8 min, a prolonged k (14 and 11 min, respectively), a low maximum amplitude (MA) (39 and 52 mm, respectively), and slightly increased clot lysis (Lys60) (23 and 30%, respectively). In the sample at diagnosis, clot solubility was abnormal, 50 and 45 min with Ca-urea and thrombin-acetic acid, respectively, but normal (>16 hours) 1 day post-FXIII infusion. Analysis of FXIII deficient and normal
LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1994-01-01
LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
Frosch, Peter J; Pirker, Claudia; Rastogi, Suresh C; Andersen, Klaus E; Bruze, Magnus; Svedman, Cecilia; Goossens, An; White, Ian R; Uter, Wolfgang; Arnau, Elena Giménez; Lepoittevin, Jean-Pierre; Menné, Torkil; Johansen, Jeanne Duus
2005-04-01
The currently used 8% fragrance mix (FM I) does not identify all patients with a positive history of adverse reactions to fragrances. A new FM II with 6 frequently used chemicals was evaluated in 1701 consecutive patients patch tested in 6 dermatological centres in Europe. FM II was tested in 3 concentrations - 28% FM II contained 5% hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral), 2% citral, 5% farnesol, 5% coumarin, 1% citronellol and 10% alpha-hexyl-cinnamic aldehyde; in 14% FM II, the single constituents' concentration was lowered to 50% and in 2.8% FM II to 10%. Each patient was classified regarding a history of adverse reactions to fragrances: certain, probable, questionable, none. Positive reactions to FM I occurred in 6.5% of the patients. Positive reactions to FM II were dose-dependent and increased from 1.3% (2.8% FM II), through 2.9% (14% FM II) to 4.1% (28% FM II). Reactions classified as doubtful or irritant varied considerably between the 6 centres, with a mean value of 7.2% for FM I and means ranging from 1.8% to 10.6% for FM II. 8.7% of the tested patients had a certain fragrance history. Of these, 25.2% were positive to FM I; reactivity to FM II was again dose-dependent and ranged from 8.1% to 17.6% in this subgroup. Comparing 2 groups of history - certain and none - values for sensitivity and specificity were calculated: sensitivity: FM I, 25.2%; 2.8% FM II, 8.1%; 14% FM II, 13.5%; 28% FM II, 17.6%; specificity: FM I, 96.5%; 2.8% FM II, 99.5%; 14% FM II, 98.8%; 28% FM II, 98.1%. 31/70 patients (44.3%) positive to 28% FM II were negative to FM I, with 14% FM II this proportion being 16/50 (32%). In the group of patients with a certain history, a total of 7 patients were found reacting to FM II only. Conversely, in the group of patients without any fragrance history, there were significantly more positive reactions to FM I than to any concentration of FM II. In conclusion, the new FM II detects additional patients sensitive to fragrances missed
Proteomic analysis on zoxamide-induced sensitivity changes in Phytophthora cactorum.
Mei, Xinyue; Yang, Min; Jiang, Bingbing; Ding, Xupo; Deng, Weiping; Dong, Yumei; Chen, Lei; Liu, Xili; Zhu, Shusheng
2015-09-01
Zoxamide is an important fungicide for oomycete disease management. In this study, we established the baseline sensitivity of Phytophthora cactorum to zoxamide and assessed the risk of developing resistance to zoxamide using ultraviolet irradiation and fungicide taming methods. All 73 studied isolates were sensitive to zoxamide, with effective concentrations for 50% inhibition of mycelial growth ranging from 0.04 to 0.29 mg/L and a mean of 0.15 mg/L. Stable zoxamide-resistant mutants of P. cactorum were not obtained from four arbitrarily selected isolates by either treating mycelial cultures with ultraviolet irradiation or adapting mycelial cultures to increasing zoxamide concentrations. However, the sensitivity of the isolates to zoxamide could be easily reduced by successive zoxamide treatments. In addition to displaying decreased sensitivity to zoxamide, these isolates also showed decreased sensitivity to the fungicides flumorph and cymoxanil. Proteomic analysis indicated that some proteins involved in antioxidant detoxification, ATP-dependent multidrug resistance, and anti-apoptosis activity are likely responsible for the induced decrease in the sensitivity of P. cactorum to zoxamide compared to controls. Further semi-quantitative PCR analysis demonstrated that the gene expression profiles of most of the above proteins were consistent with the proteomic analysis. Based on the above results, P. cactorum shows a low resistance risk to zoxamide; however, the fungicidal effect of zoxamide might be decreased due to induced resistance when this fungicide is continuously applied.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
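The tolerance syntax described above lends itself to a compact implementation. The sketch below assumes a hypothetical `+/-` field format and an invented input deck (the actual LAURA/HARA/FIAT file formats are not shown in the abstract); it replaces each annotated value with a uniform random draw to produce one Monte Carlo realization of the input file:

```python
import random
import re

# Hypothetical "+/-" tolerance field, modeled on the abstract's example
# "5.25 +/- 0.01"; the real LAURA/HARA/FIAT input formats are not shown here.
TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def blur(text, rng=random):
    """Replace each 'nominal +/- tol' field with a uniform random draw."""
    def sample(match):
        nominal, tol = float(match.group(1)), float(match.group(2))
        return repr(rng.uniform(nominal - tol, nominal + tol))
    return TOL.sub(sample, text)

deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89\n"
print(blur(deck))  # annotated field is sampled; plain fields pass through
```

Because the substitution works on raw text, it does not depend on parsing any particular input-file grammar, which is the robustness property the abstract emphasizes.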
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus that causes the Lassa fever is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed. This is done in order to determine the relative importance of the model parameters to the disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is the human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment and education to reduce person-to-person contact.
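The parameter ranking above is typically done with normalized forward sensitivity indices. The sketch below uses a generic reproduction number as a stand-in (the paper's actual five-compartment R0 expression is not reproduced here) and estimates S_p = (p/R0)·(dR0/dp) by central differences:

```python
# Illustrative only: r0 below is a hypothetical reproduction number,
# transmission * contact / (recovery + mortality), not the paper's formula.

def r0(beta, gamma, mu, contact):
    return beta * contact / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index S_p = (p / f) * df/dp."""
    base = f(**params)
    p = params[name]
    up = dict(params, **{name: p * (1 + h)})
    dn = dict(params, **{name: p * (1 - h)})
    dfdp = (f(**up) - f(**dn)) / (2 * h * p)  # central difference
    return p / base * dfdp

params = {"beta": 0.3, "gamma": 0.1, "mu": 0.02, "contact": 5.0}
for name in params:
    print(name, round(sensitivity_index(r0, params, name), 3))
```

For this multiplicative form the indices of beta and contact are exactly +1, so a 10% change in either produces a 10% change in R0; the recovery and mortality indices are negative.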
Sensitivity analysis of the critical speed in railway vehicle dynamics
NASA Astrophysics Data System (ADS)
Bigoni, D.; True, H.; Engsig-Karup, A. P.
2014-05-01
We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
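Variance-based total sensitivity indices of the kind used above can be estimated by plain Monte Carlo. This sketch applies Jansen's estimator to a toy critical-speed surrogate (the real model would be the bogie dynamics solver, which is not available here), expressing each parameter's importance as the fraction of output variance ascribable to it:

```python
import random

# Toy surrogate: "critical speed" as a function of two normalized
# suspension parameters on [0, 1]. Invented for illustration only.
def model(x):
    return 120.0 - 30.0 * x[0] - 10.0 * x[1] ** 2

def total_indices(f, dim, n=20000, seed=1):
    """Total-effect Sobol indices via Jansen's Monte Carlo estimator."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(a) for a in A]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    St = []
    for i in range(dim):
        # A with column i taken from B: isolates parameter i's effect
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        fABi = [f(x) for x in ABi]
        St.append(sum((ya - yb) ** 2 for ya, yb in zip(fA, fABi)) / (2 * n * var))
    return St

print(total_indices(model, 2))  # the linear term x[0] dominates the variance
```

No derivative information or access to the solver internals is needed, which matches the abstract's point that the approach is independent of the particular dynamical-system solver.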
Sensitive analysis of a finite element model of orthogonal cutting
NASA Astrophysics Data System (ADS)
Brocail, J.; Watremez, M.; Dubar, L.
2011-01-01
This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. This numerical model of orthogonal cutting is validated by comparing these process variables to the experimental and numerical results obtained by Filice et al. [1]. The model can be considered reliable enough for qualitative analysis of the entry parameters related to the cutting process and friction models. A sensitivity analysis is conducted with the finite element model on the main entry parameters (coefficients of the Johnson-Cook law, and contact parameters), using two levels for each factor. This sensitivity analysis allowed the significant entry parameters and their margins to be identified.
Multi-Scale Distributed Sensitivity Analysis of Radiative Transfer Model
NASA Astrophysics Data System (ADS)
Neelam, M.; Mohanty, B.
2015-12-01
Amidst nature's great variability and complexity, the Soil Moisture Active Passive (SMAP) mission aims to provide high-resolution soil moisture products for earth science applications. One of the biggest challenges still faced by the remote sensing community is the uncertainty, heterogeneity and scaling exhibited by soil, land cover, topography, precipitation, etc. At each spatial scale there are different levels of uncertainty and heterogeneity. Also, each land surface variable derived from the various satellite missions comes with its own error margins. As such, soil moisture retrieval accuracy is affected as radiative model sensitivity changes with space, time, and scale. In this paper, we explore the distributed sensitivity analysis of a radiative model under different hydro-climates and spatial scales: 1.5 km, 3 km, 9 km and 39 km. This analysis is conducted in three different regions: Iowa, USA (SMEX02), Arizona, USA (SMEX04) and Winnipeg, Canada (SMAPVEX12). Distributed variables such as soil moisture, soil texture, vegetation and temperature are assumed to be uncertain and are conditionally simulated to obtain uncertainty maps, whereas roughness data, which are spatially limited, are assigned a probability distribution. The relative contribution of the uncertain model inputs to the aggregated model output is also studied using various aggregation techniques. We use global sensitivity analysis (GSA) to conduct this analysis across spatio-temporal scales. Keywords: Soil moisture, radiative transfer, remote sensing, sensitivity, SMEX02, SMAPVEX12.
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
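The contrast drawn above between linear and nonlinear models can be made concrete. For an invented nonlinear measurement model Y = X1·X2² with independent Gaussian inputs (all numbers below are illustrative, not from the article), the GUM's law of propagation of uncertainties and a full Monte Carlo variance estimate give slightly different standard uncertainties:

```python
import math
import random

# Invented nonlinear measurement model: Y = X1 * X2**2, with independent
# inputs. Estimates and standard uncertainties are illustrative only.
x1, u1 = 2.0, 0.1
x2, u2 = 3.0, 0.3

# GUM law of propagation: u_y^2 = (dY/dX1)^2 u1^2 + (dY/dX2)^2 u2^2
c1 = x2 ** 2       # sensitivity coefficient dY/dX1
c2 = 2 * x1 * x2   # sensitivity coefficient dY/dX2
u_lpu = math.sqrt((c1 * u1) ** 2 + (c2 * u2) ** 2)

# Monte Carlo: propagate Gaussian inputs through the full nonlinear model
rng = random.Random(0)
ys = [rng.gauss(x1, u1) * rng.gauss(x2, u2) ** 2 for _ in range(200_000)]
m = sum(ys) / len(ys)
u_mc = math.sqrt(sum((y - m) ** 2 for y in ys) / (len(ys) - 1))

print(f"LPU u(y) = {u_lpu:.3f},  Monte Carlo u(y) = {u_mc:.3f}")
```

The two estimates agree to first order; the small excess in the Monte Carlo value is the nonlinear contribution that the linearized law of propagation omits, which is exactly where the abstract argues variance-based analysis becomes advantageous.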
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
NASA Astrophysics Data System (ADS)
Kim, Mi-Jeong; Park, Nam-Gyu
2012-09-01
The photovoltaic performance of a redox electrolyte containing 0.05 M urea is compared to that of one containing 0.05 M guanidinium thiocyanate (GSCN) in a dye-sensitized solar cell. No significant difference in initial photovoltaic performance is observed, which indicates that the role of the urea additive is similar to that of GSCN. The initial solar-to-electrical conversion efficiency of the device containing GSCN is 7%, which diminishes to 5.8% after 40 days, whereas the device containing urea exhibits stable photovoltaic performance: its initial efficiency of 7.2% remains almost unchanged after 40 days (7.1%). The lowered efficiency of the GSCN-containing device is mainly due to the decreased photocurrent density, which is ascribed to the formation of needle-shaped crystals on the TiO2 layer. An infrared spectroscopic study confirms that the crystals are a dye analogue, indicative of dye desorption in the presence of GSCN. On the other hand, no crystals form in the urea-containing electrolyte, which implies that dye desorption is negligible. The urea additive is thus found to be less reactive in dye desorption than GSCN, leading to long-term stability.
Porosity Measurements and Analysis for Metal Additive Manufacturing Process Control.
Slotwinski, John A; Garboczi, Edward J; Hebenstreit, Keith M
2014-01-01
Additive manufacturing techniques can produce complex, high-value metal parts, with potential applications as critical metal components such as those found in aerospace engines and as customized biomedical implants. Material porosity in these parts is undesirable for aerospace parts - since porosity could lead to premature failure - and desirable for some biomedical implants - since surface-breaking pores allows for better integration with biological tissue. Changes in a part's porosity during an additive manufacturing build may also be an indication of an undesired change in the build process. Here, we present efforts to develop an ultrasonic sensor for monitoring changes in the porosity in metal parts during fabrication on a metal powder bed fusion system. The development of well-characterized reference samples, measurements of the porosity of these samples with multiple techniques, and correlation of ultrasonic measurements with the degree of porosity are presented. A proposed sensor design, measurement strategy, and future experimental plans on a metal powder bed fusion system are also presented.
Lv, Jungang; Feng, Jimin; Zhang, Wen; Shi, Rongguang; Liu, Yong; Wang, Zhaohong; Zhao, Meng
2013-01-01
Pressure-sensitive tape is often used to bind explosive devices. It can become important trace evidence in many cases. Three types of calcium carbonate (heavy, light, and active CaCO(3)), which were widely used as additives in pressure-sensitive tape substrate, were analyzed with Fourier transform infrared spectroscopy (FTIR) in this study. A Spectrum GX 2000 system with a diamond anvil cell and a deuterated triglycine sulfate detector was employed for IR observation. Background was subtracted for every measurement, and triplicate tests were performed. Differences in positions of main peaks and the corresponding functional groups were investigated. Heavy CaCO(3) could be identified from the two absorptions near 873 and 855/cm, while light CaCO(3) only has one peak near 873/cm because of the low content of aragonite. Active CaCO(3) could be identified from the absorptions in the 2800-2900/cm region because of the existence of organic compounds. Tiny but indicative changes in the 878-853/cm region were found in the spectra of CaCO(3) with different content of aragonite and calcite. CaCO(3) in pressure-sensitive tape, which cannot be differentiated by scanning electron microscope/energy dispersive X-ray spectrometer and thermal analysis, can be easily identified using FTIR. The findings were successfully applied to three specific explosive cases and would be helpful in finding the possible source of explosive devices in future cases. PMID:22724657
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are of limited accuracy and reference value because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and no experimental verification is conducted. Therefore, in view of the deficiencies above, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, the simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is sufficiently accurate, as shown by comparing the characteristic curves of the experimental and simulated step responses under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained from each state vector time-history curve of the step response. The maximum displacement variation percentage and the sum of absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and their change rules are analyzed. Then the sensitivity
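The two sensitivity indexes named in the abstract above (maximum displacement variation percentage, and the sum of absolute variations over the sampling time) reduce to a few lines once baseline and perturbed displacement histories are available. The histories below are invented for illustration; the real ones would come from the Simulink sensitivity-equation simulation:

```python
# Hypothetical sampled displacement histories (mm): a baseline step response
# and the response with one model parameter perturbed.

def sensitivity_indexes(baseline, perturbed):
    """Return (max variation percentage, sum of absolute variations)."""
    pct = [abs(p - b) / abs(b) * 100 for b, p in zip(baseline, perturbed) if b]
    abs_sum = sum(abs(p - b) for b, p in zip(baseline, perturbed))
    return max(pct), abs_sum

base = [2.0, 2.1, 2.05, 2.0]   # baseline step response samples
pert = [2.0, 2.3, 2.15, 2.02]  # response with a perturbed parameter
max_pct, abs_sum = sensitivity_indexes(base, pert)
print(max_pct, abs_sum)
```

Ranking parameters by either index over a common perturbation size is what produces the histograms the abstract describes.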
Disclosure of hydraulic fracturing fluid chemical additives: analysis of regulations.
Maule, Alexis L; Makey, Colleen M; Benson, Eugene B; Burrows, Isaac J; Scammell, Madeleine K
2013-01-01
Hydraulic fracturing is used to extract natural gas from shale formations. The process involves injecting into the ground fracturing fluids that contain thousands of gallons of chemical additives. Companies are not mandated by federal regulations to disclose the identities or quantities of chemicals used during hydraulic fracturing operations on private or public lands. States have begun to regulate hydraulic fracturing fluids by mandating chemical disclosure. These laws have shortcomings including nondisclosure of proprietary or "trade secret" mixtures, insufficient penalties for reporting inaccurate or incomplete information, and timelines that allow for after-the-fact reporting. These limitations leave lawmakers, regulators, public safety officers, and the public uninformed and ill-prepared to anticipate and respond to possible environmental and human health hazards associated with hydraulic fracturing fluids. We explore hydraulic fracturing exemptions from federal regulations, as well as current and future efforts to mandate chemical disclosure at the federal and state level.
Risk analysis of sulfites used as food additives in China.
Zhang, Jian Bo; Zhang, Hong; Wang, Hua Li; Zhang, Ji Yue; Luo, Peng Jie; Zhu, Lei; Wang, Zhu Tian
2014-02-01
This study analyzed the risk of sulfites in food consumed by the Chinese population and assessed the health protection capability of the maximum permitted level (MPL) of sulfites in GB 2760-2011. Sulfites as food additives are overused or abused in many food categories. When the MPL in GB 2760-2011 was used as the sulfite content of food, the intake of sulfites in most surveyed populations was lower than the acceptable daily intake (ADI). Excess intake of sulfites was found in all surveyed groups when a high percentile of sulfite content in food was ingested. Moreover, children aged 1-6 years are at high risk of excess sulfite intake. The primary cause of excess sulfite intake in the Chinese population is the overuse and abuse of sulfites by the food industry. The current MPL of sulfites in GB 2760-2011 protects the health of most of the population.
Double Precision Differential/Algebraic Sensitivity Analysis Code
1995-06-02
DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.
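DDASAC itself is a Fortran code, but the core idea of concurrent parametric sensitivity analysis can be sketched on a stiff scalar test problem: differentiate the ODE with respect to a parameter and integrate the resulting sensitivity equation alongside the state, here with the stiffly stable backward Euler method:

```python
# Model: y' = -k*y, so the sensitivity s = dy/dk obeys s' = -y - k*s, s(0) = 0.
# Backward (implicit) Euler stays stable for stiff k; each step is a linear solve.

def integrate_with_sensitivity(k, t_end, dt):
    y, s = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        y = y / (1.0 + k * dt)             # backward Euler for the state
        s = (s - dt * y) / (1.0 + k * dt)  # backward Euler for the sensitivity
    return y, s

# Stiff decay (k = 50); exact values are y = exp(-k*t), s = -t*exp(-k*t)
y, s = integrate_with_sensitivity(50.0, 0.1, 1e-5)
print(y, s)  # close to exp(-5) and -0.1*exp(-5)
```

Solving the state and sensitivity systems concurrently, as DDASAC does, lets both share the same Jacobian factorizations and step-size control.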
A sensitivity analysis for subverting randomization in controlled trials.
Marcus, S M
2001-02-28
In some randomized controlled trials, subjects with a better prognosis may be diverted into the treatment group. This subverting of randomization results in an unobserved non-compliance with the originally intended treatment assignment. Consequently, the estimate of treatment effect from these trials may be biased. This paper clarifies the determinants of the magnitude of the bias and gives a sensitivity analysis that associates the amount that randomization is subverted and the resulting bias in treatment effect estimation. The methods are illustrated with a randomized controlled trial that evaluates the efficacy of a culturally sensitive AIDS education video.
Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris
2015-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
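The breakdown described above is easy to reproduce: finite-difference estimates of the gradient of a long-time average over a chaotic system do not settle as the perturbation shrinks. The sketch below uses the Lorenz system with a crude forward Euler integrator (LSS itself is not implemented here):

```python
# Long-time average of z in the Lorenz system, via crude forward Euler.
def lorenz_mean_z(rho, steps=50000, dt=0.001, sigma=10.0, beta=8.0 / 3.0):
    x, y, z = 1.0, 1.0, 1.0
    total = 0.0
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        total += z
    return total / steps

# Finite-difference "gradients" of the chaotic average with respect to rho
# fluctuate instead of converging as the step shrinks: the Butterfly Effect.
for drho in (1.0, 0.1, 0.01):
    g = (lorenz_mean_z(28.0 + drho) - lorenz_mean_z(28.0)) / drho
    print(f"d<z>/drho, step {drho}: {g:.2f}")
```

Over a finite averaging window the estimates are contaminated by the exponential divergence of nearby trajectories; LSS avoids this by differentiating along a shadow trajectory instead.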
Efficient sensitivity analysis and optimization of a helicopter rotor
NASA Technical Reports Server (NTRS)
Lim, Joon W.; Chopra, Inderjit
1989-01-01
Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.
NASA Astrophysics Data System (ADS)
Sharvani, S.; Upadhayaya, Kishor; Kumari, Gayatri; Narayana, Chandrabhas; Shivaprasad, S. M.
2015-11-01
The GaN nanowall network, formed by opening the screw dislocations by kinetically controlled MBE growth, possesses a large surface and high conductivity. Sharp apexed nanowalls show higher surface electron concentration in the band-tail states, in comparison to blunt apexed nanowalls. Uncapped silver nanoparticles are vapor deposited on the blunt and sharp GaN nanowall networks to study the morphological dependence of band-edge plasmon-coupling. Surface enhanced Raman spectroscopy studies performed with a rhodamine 6G analyte on these two configurations clearly show that the sharp nanowall morphology with smaller Ag nanoparticles shows higher enhancement of the Raman signal. A very large enhancement factor of 2.8 × 10⁷ and a very low limit of detection of 10⁻¹⁰ M is observed, which is attributed to the surface plasmon resonance owing to the high surface electron concentration on the GaN nanowall in addition to that of the Ag nanoparticles. The significantly higher sensitivity with same-sized Ag nanoparticles confirms the unconventional role of morphology-dependent surface charge carrier concentration of GaN nanowalls in the enhancement of Raman signals.
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivative relative to the quantity itself.
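The global-versus-local trade-off described above can be seen in a one-dimensional toy sketch: a global Chebyshev fit captures a smooth pressure-like field to near machine precision, but a field with a steep local feature resists a global polynomial, which is the difficulty that motivated the panel-based interpolations. The sample fields below are illustrative inventions, not data from the study.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 200)
smooth = np.exp(-x) * np.cos(3 * x)             # smooth "pressure" field
sharp = smooth + 0.5 * np.tanh(50 * (x - 0.3))  # field with a steep local feature

deg = 20
# Maximum fit error of a degree-20 global Chebyshev least-squares fit.
err_smooth = np.abs(C.chebval(x, C.chebfit(x, smooth, deg)) - smooth).max()
err_sharp = np.abs(C.chebval(x, C.chebfit(x, sharp, deg)) - sharp).max()
```

The smooth field fits to roundoff level, while the near-discontinuity leaves a visible residual at any moderate degree, the behaviour that local panel interpolation avoids.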
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2016-01-01
This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next generation method. The biological sense of the threshold condition is investigated and discussed in detail. A sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented, taking into account the cost associated with the control strategies, and it is shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of exposed humans, infectious humans and the vector population becoming infected. Numerical simulations are carried out with a fourth-order Runge-Kutta procedure. PMID:27505634
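A minimal sketch of the next generation method mentioned above: R0 is the spectral radius of F V⁻¹, where F holds new-infection rates and V the transitions between infected compartments, and the normalized sensitivity index (elasticity) of R0 with respect to a parameter can be checked by finite differences. The two-compartment structure and parameter values here are illustrative stand-ins, not the Leishmania model of the paper.

```python
import numpy as np

def R0(beta, sigma, gamma):
    # F: rates of new infections; V: transitions between infected compartments
    F = np.array([[0.0, beta],
                  [0.0, 0.0]])
    V = np.array([[sigma, 0.0],
                  [-sigma, gamma]])
    return max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))

def elasticity(f, args, i, h=1e-6):
    # Normalized sensitivity index (dR0/dp) * (p / R0), by central differences.
    lo, hi = list(args), list(args)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h) * args[i] / f(*args)

beta, sigma, gamma = 0.4, 0.2, 0.1
r0 = R0(beta, sigma, gamma)                        # equals beta/gamma here
e_beta = elasticity(R0, (beta, sigma, gamma), 0)   # +1: R0 is linear in beta
e_gamma = elasticity(R0, (beta, sigma, gamma), 2)  # -1: R0 scales as 1/gamma
```

Parameters with elasticity magnitude near 1 are the "most sensitive" ones that such an analysis would flag for control strategies.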
Sensitivity analysis for improving nanomechanical photonic transducers biosensors
NASA Astrophysics Data System (ADS)
Fariña, D.; Álvarez, M.; Márquez, S.; Dominguez, C.; Lechuga, L. M.
2015-08-01
The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry), based on two butt-coupled misaligned waveguides, displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would both enhance microcantilever bending during the biorecognition process and raise the optical sensitivity to 4.8 × 10⁻² nm⁻¹, an order of magnitude higher than that of similar opto-mechanical devices. Moreover, the analysis shows that single-mode behaviour of the propagating radiation is required to avoid modal interference that could misinterpret the readout signal.
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that a loss of prediction performance should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf-life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different outputs or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
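A small sketch of the LSS idea on a one-dimensional chaotic map (the logistic map, a common stand-in, not a system from the paper): the conventional tangent solution of the ill-conditioned initial value problem grows exponentially, while the minimum-norm tangent satisfying the same linearized constraints stays bounded and yields a usable derivative of the long-time average.

```python
import numpy as np

# Logistic map x_{n+1} = r x_n (1 - x_n); objective J = long-time average of x.
r, N = 3.8, 400

# Chaotic reference trajectory, after discarding a transient.
x = 0.3
for _ in range(1000):
    x = r * x * (1 - x)
traj = np.empty(N + 1)
traj[0] = x
for n in range(N):
    traj[n + 1] = r * traj[n] * (1 - traj[n])

dfdx = r * (1 - 2 * traj[:-1])       # Jacobian f_x along the trajectory
dfdr = traj[:-1] * (1 - traj[:-1])   # parameter derivative f_r

# Conventional tangent v_{n+1} = f_x v_n + f_r: grows exponentially.
v = np.zeros(N + 1)
for n in range(N):
    v[n + 1] = dfdx[n] * v[n] + dfdr[n]
tangent_blowup = np.abs(v).max()

# LSS sketch: the minimum-norm tangent satisfying the same linear constraints
# v_{n+1} - f_x v_n = f_r, written as A @ v = b and solved by least squares.
A = np.zeros((N, N + 1))
for n in range(N):
    A[n, n] = -dfdx[n]
    A[n, n + 1] = 1.0
v_lss = np.linalg.lstsq(A, dfdr, rcond=None)[0]  # minimum-norm solution

dJdr_lss = v_lss.mean()  # sensitivity of the time-averaged objective
```

`np.linalg.lstsq` returns the minimum-norm solution of the underdetermined system, which plays the role of the shadowing direction; the conventional tangent is unusable for averaged quantities because its magnitude explodes with trajectory length.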
Control of a mechanical aeration process via topological sensitivity analysis
NASA Astrophysics Data System (ADS)
Abdelwahed, M.; Hassine, M.; Masmoudi, M.
2009-06-01
The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.
Sensitivity analysis techniques for models of human behavior.
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
Kinetic analysis of microbial respiratory response to substrate addition
NASA Astrophysics Data System (ADS)
Blagodatskaya, Evgenia; Blagodatsky, Sergey; Yuyukina, Tatayna; Kuzyakov, Yakov
2010-05-01
The heterotrophic component of CO2 emitted from soil is mainly due to the respiratory activity of soil microorganisms. Field measurements of microbial respiration can be used to estimate the soil C budget, while laboratory estimation of respiration kinetics allows the elucidation of mechanisms of soil C sequestration. Physiological approaches based on (1) time-dependent or (2) substrate-dependent respiratory responses of soil microorganisms decomposing organic substrates make it possible to relate the functional properties of the soil microbial community to decomposition rates of soil organic matter (SOM). We used a novel methodology combining (i) microbial growth kinetics and (ii) enzyme affinity to the substrate to show the shift in functional properties of the soil microbial community after amendments with substrates of contrasting availability. We combined the application of 14C-labeled glucose as an easily available C source with natural isotope labeling of old and young SOM. The possible contributions of two processes, isotopic fractionation and preferential substrate utilization, to the shifts in δ13C during SOM decomposition after a C3-C4 vegetation change were evaluated. The specific growth rate (µ) of soil microorganisms was estimated by fitting the parameters of the equation v(t) = A + B·exp(µt) to the measured CO2 evolution rate v(t) after glucose addition, where A is the initial rate of non-growth respiration and B the initial rate of the growing fraction of total respiration. The maximal mineralization rate (Vmax), substrate affinity of microbial enzymes (Ks) and substrate availability (Sn) were determined by Michaelis-Menten kinetics. To study the effect of plant-derived C on the δ13C signature of SOM, we compared the changes in isotopic composition of different C pools in a C3 soil under grassland with a C3-C4 soil where the C4 plant Miscanthus giganteus was grown for 12 years on a plot after grassland. The shift in δ13C caused by planting of M. giganteus
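The growth-rate fit described above can be sketched as follows: with A treated as known (for example, the pre-addition respiration rate), log(v(t) − A) is linear in t, so µ and B follow from a least-squares line. The synthetic data and parameter values are illustrative, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
A_true, B_true, mu_true = 0.5, 0.1, 0.12      # A: non-growth respiration rate

t = np.linspace(0.0, 20.0, 40)                # hours after glucose addition
v = A_true + B_true * np.exp(mu_true * t)     # CO2 evolution rate v(t)
v *= 1 + 0.01 * rng.standard_normal(t.size)   # 1% multiplicative noise

# With A known, log(v - A) = log(B) + mu * t: a straight line in t.
y = np.log(v - A_true)
mu_est, logB_est = np.polyfit(t, y, 1)
B_est = np.exp(logB_est)
```

In practice A is estimated jointly with B and µ by nonlinear fitting; the linearized version above is the simplest self-contained illustration of how the respiratory response separates growth from non-growth respiration.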
Development and application of optimum sensitivity analysis of structures
NASA Technical Reports Server (NTRS)
Barthelemy, J. F. M.; Hallauer, W. L., Jr.
1984-01-01
The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single level solutions was completed and tested out.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
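The analytic eigenvalue sensitivity central to such methods, dλ/dp = yᵀ(dA/dp)x / (yᵀx) with x and y the right and left eigenvectors of the tracked eigenvalue, can be sketched and checked against a finite difference. The matrix below is a made-up example, not one from the paper.

```python
import numpy as np

def A(p):
    # Made-up parameter-dependent non-symmetric matrix.
    return np.array([[2.0 + p, 1.0, 0.0],
                     [0.5, 1.0, p],
                     [0.0, 0.3 * p, -1.0]])

dA = np.array([[1.0, 0.0, 0.0],   # elementwise dA/dp
               [0.0, 0.0, 1.0],
               [0.0, 0.3, 0.0]])

p = 0.7
w, X = np.linalg.eig(A(p))
k = np.argmax(w.real)             # track the rightmost eigenvalue
x = X[:, k]                       # right eigenvector
wl, Y = np.linalg.eig(A(p).T)     # left eigenvectors = eigenvectors of A^T
y = Y[:, np.argmin(abs(wl - w[k]))]  # match left/right pair by eigenvalue

# Analytic sensitivity: dlam/dp = y^T (dA/dp) x / (y^T x).
dlam_analytic = (y @ dA @ x) / (y @ x)

# Central finite difference on the tracked eigenvalue, for verification.
h = 1e-6
dlam_fd = (np.max(np.linalg.eigvals(A(p + h)).real)
           - np.max(np.linalg.eigvals(A(p - h)).real)) / (2 * h)
```

The analytic formula needs only one eigendecomposition regardless of the number of parameters, which is why it beats repeated reanalysis when many design variables are present.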
Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy
Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker
2015-01-01
The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989
Goh, Eng Giap; Noborio, Kosuke
2015-01-01
A FORTRAN code for liquid water flow in unsaturated soil under isothermal conditions was developed to simulate water infiltration into Yolo light clay. The governing equation, Richards' equation, was approximated by the finite-difference method. A normalized sensitivity coefficient was used in the sensitivity analysis of Richards' equation, calculated by the one-at-a-time (OAT) method and the elementary effects (EE) method based on hydraulic functions for matric suction and hydraulic conductivity. Results from the EE method provided additional insight into model input parameters, such as input parameter linearity and oscillating sign effects. Boundary volumetric water content (θL (upper bound)) and saturated volumetric water content (θs) were consistently found to be the most sensitive parameters, corresponding to positive and negative relations as given by the hydraulic functions. In addition, although initial volumetric water content (θL (initial cond)) and time-step size (Δt) possessed a large sensitivity coefficient and uncertainty value, respectively, they did not influence the model output as strongly as the spatial discretization size (Δz). The product of a parameter's sensitivity coefficient and its uncertainty value was found to affect the outcome of the model simulation, and the parameter with the highest product was Δz. PMID:27347550
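A minimal sketch of the normalized (one-at-a-time) sensitivity coefficient used above, S_i = (∂Y/∂X_i)·(X_i/Y), estimated by central differences with a small relative perturbation. The toy response function stands in for the Richards'-equation solver and is chosen so that the coefficients are known analytically.

```python
import numpy as np

def model(x):
    # Toy response standing in for the infiltration model: Y = x1^2 * x2 / (1 + x3).
    return x[0] ** 2 * x[1] / (1.0 + x[2])

def normalized_sensitivity(f, x, delta=1e-4):
    """OAT normalized sensitivity S_i = (dY/dX_i) * (X_i / Y), central differences."""
    x = np.asarray(x, dtype=float)
    y = f(x)
    S = np.empty(x.size)
    for i in range(x.size):
        h = delta * x[i]                # relative perturbation of one input at a time
        hi, lo = x.copy(), x.copy()
        hi[i] += h
        lo[i] -= h
        S[i] = (f(hi) - f(lo)) / (2 * h) * x[i] / y
    return S

S = normalized_sensitivity(model, [2.0, 3.0, 0.5])
# Analytically S = [2, 1, -x3/(1+x3)] = [2, 1, -1/3] for this toy model.
```

The sign of S_i gives the positive or negative relation noted in the abstract, and its magnitude the relative influence of each input.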
Sensitivity analysis of fine sediment models using heterogeneous data
NASA Astrophysics Data System (ADS)
Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.
2012-04-01
Sediments play an important role in many aquatic systems. Their transport and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transport in time and space is therefore important for designing interventions and making management decisions. This research concerns fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through construction, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends on the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast, built with the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the
Floquet theoretic approach to sensitivity analysis for periodic systems
NASA Astrophysics Data System (ADS)
Larter, Raima
1986-12-01
The mathematical relationship between sensitivity analysis and Floquet theory is explored. The former technique has been used in recent years to study the parameter sensitivity of numerical models in chemical kinetics, scattering theory, and other problems in chemistry. In the present work, we derive analytical expressions for the sensitivity coefficients for models of oscillating chemical reactions. These reactions have been the subject of increased interest in recent years because of their relationship to fundamental biological problems, such as development, and because of their similarity to related phenomena in fields such as hydrodynamics, plasma physics, meteorology, geology, etc. The analytical form of the sensitivity coefficients derived here can be used to determine the explicit time dependence of the initial transient and any secular term. The method is applicable to unstable as well as stable oscillations and is illustrated by application to the Brusselator and to a three variable model due to Hassard, Kazarinoff, and Wan. It is shown that our results reduce to those previously derived by Edelson, Rabitz, and others in certain limits. The range of validity of these formerly derived expressions is thus elucidated.
Species sensitivity analysis of heavy metals to freshwater organisms.
Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou
2015-10-01
Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distributions (SSD) curves of vertebrate and invertebrate were constructed by log-logistic model separately. The comprehensive comparisons of the sensitivities of different trophic species to six typical heavy metals were performed. The results indicated invertebrate taxa to each heavy metal exhibited higher sensitivity than vertebrates. However, with respect to the same taxa species, Cu had the most adverse effect on vertebrate, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrate and fish were complicated as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curves of Pb constructed by total species no longer crossed with others. The hazardous concentrations for 5 % of the species (HC5) affected were derived to determine the concentration protecting 95 % of species. The HC5 values of the six heavy metals were in the descending order: Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in opposite order. Moreover, potential affected fractions were calculated to assess the ecological risks of different heavy metals at certain concentrations of the selected heavy metals. Evaluations of sensitivities of the species at various trophic levels and toxicity analysis of heavy metals are necessary prior to derivation of water quality criteria and the further environmental protection.
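An SSD sketch under simplifying assumptions: toxicity values are drawn from a log-logistic distribution, the distribution is fitted by linearizing logit(p) = β(ln c − ln α) against plotting positions, and HC5 is read off as the fitted 5th percentile. The data are synthetic placeholders, not the paper's heavy-metal datasets.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true, beta_true = 50.0, 2.0   # log-logistic scale (median) and shape

# Synthetic acute-toxicity values drawn from the log-logistic distribution.
u = rng.uniform(size=200)
conc = alpha_true * (u / (1 - u)) ** (1 / beta_true)

# Linearized fit: logit(p) = beta * (ln c - ln alpha), Hazen plotting positions.
c = np.sort(conc)
p = (np.arange(1, c.size + 1) - 0.5) / c.size
logit = np.log(p / (1 - p))
beta_est, intercept = np.polyfit(np.log(c), logit, 1)
alpha_est = np.exp(-intercept / beta_est)

# HC5: concentration hazardous to 5% of species (5th percentile of the SSD).
hc5 = alpha_est * (0.05 / 0.95) ** (1 / beta_est)
```

Once per-metal SSDs are fitted this way, comparing HC5 values (or evaluating the fitted CDF at a given environmental concentration to get a potentially affected fraction) reproduces the kind of ranking reported above.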
Optimizing human activity patterns using global sensitivity analysis
Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2014-01-01
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
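The regularity statistic at the heart of the method is straightforward to sketch: SampEn(m, r) = −ln(A/B), with B the count of length-m template matches (self-matches excluded) and A the corresponding count at length m + 1, so a regular signal scores lower than an irregular one. The signals and defaults below are illustrative, not schedules from DASim.

```python
import numpy as np

def sampen(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A / B), self-matches excluded."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()            # common default tolerance
    def matches(mm):
        # All overlapping templates of length mm; Chebyshev distance between pairs.
        templ = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
        return (d <= r).sum() - len(templ)   # drop the diagonal self-matches
    B = matches(m)
    A = matches(m + 1)
    return -np.log(A / B)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 400))   # periodic "schedule"
irregular = rng.standard_normal(400)                # unstructured noise
```

Tuning an activity's regularity then amounts to adjusting schedule parameters until SampEn hits a target value, which is the optimization problem the paper attacks with sensitivity analysis and harmony search.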
Sensitivity-analysis techniques: self-teaching curriculum
Iman, R.L.; Conover, W.J.
1982-06-01
This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
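The Latin hypercube sampling procedure taught in part (1) can be sketched as follows. This is a generic stratified sampler on the unit hypercube, not the SAND79-1473 program itself:

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Draw one point per equal-probability stratum for each variable,
    with strata randomly paired across variables. Returns points in
    the unit hypercube; scale each column to real parameter ranges."""
    rng = random.Random(seed)
    samples = [[0.0] * n_vars for _ in range(n_samples)]
    for v in range(n_vars):
        strata = list(range(n_samples))
        rng.shuffle(strata)          # random pairing across variables
        for i, s in enumerate(strata):
            # one uniform draw inside stratum s of width 1/n_samples
            samples[i][v] = (s + rng.random()) / n_samples
    return samples
```

The defining property, and a useful unit test, is that each variable's marginal sample occupies every stratum exactly once.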
Analysis of frequency characteristics and sensitivity of compliant mechanisms
NASA Astrophysics Data System (ADS)
Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua
2016-07-01
Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of large-deformation compliant mechanisms are studied. First, the pseudo-rigid-body model is modified under static and kinetic conditions to make it more suitable for dynamic analysis of compliant mechanisms. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of an ordinary compliant four-bar mechanism are established using analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of compliant mechanisms are analyzed, taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. The simulation results show that the dynamic characteristics of compliant mechanisms are relatively sensitive to structure size, cross-section parameters, and material characteristic parameters. These results have theoretical significance and application value for the structural optimization of compliant mechanisms, the improvement of their dynamic properties, and the expansion of their application range.
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
Sensitivity Analysis of Hardwired Parameters in GALE Codes
Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.
2008-12-01
The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.
A sensitivity analysis of regional and small watershed hydrologic models
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.
1975-01-01
Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.
High derivatives for fast sensitivity analysis in linear magnetodynamics
Petin, P.; Coulomb, J.L.; Conraux, P.
1997-03-01
In this article, the authors present a method of sensitivity analysis using high-order derivatives and Taylor expansion. The principle is to find a polynomial approximation of the finite element solution with respect to the sensitivity parameters. While presenting the method, they explain why it is applicable only to particular parameters. They applied it to a magnetodynamic problem simple enough that the analytical solution could be found with a formal calculus tool. They then present the implementation and the good results obtained with the polynomial, first by comparing the derivatives themselves, then by comparing the approximate solution with the theoretical one. After this validation, the authors present results on a real 2D application and underline the possibilities of reuse in other fields of physics.
SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL
Crawford, C.; Edwards, T.; Wilmarth, B.
2006-08-01
A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values of the influential factors was conducted. These intervals bound the levels of the factors expected during Tank 50 aggregations. The results of the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. These factor settings are therefore considered to yield the "worst-case" scenario for the TPB degradation rate during Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be to investigate the introduction of Tank 48 material for aggregation in Tank 50 and to bound TPB degradation rates for such aggregations.
Simulation of the global contrail radiative forcing: A sensitivity analysis
NASA Astrophysics Data System (ADS)
Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.
2012-12-01
The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The most-up-to-date Community Atmospheric Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 is used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m-2. Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.
Biosphere dose conversion Factor Importance and Sensitivity Analysis
M. Wasiolek
2004-10-15
This report presents an importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model: biosphere dose conversion factors (BDCFs) for the groundwater and volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
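The Spearman coefficient mentioned above is the Pearson correlation applied to ranks; a minimal sketch (without tie handling, which a real analysis would need) looks like this:

```python
def _ranks(values):
    """Rank positions of each element (0 = smallest); ties not handled."""
    order = sorted(range(len(values)), key=values.__getitem__)
    ranks = [0] * len(values)
    for r, idx in enumerate(order):
        ranks[idx] = r
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Because it operates on ranks, it captures any monotone input-output relationship, which is why it is favored over Pearson for nonlinear fuels-performance responses.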
Sensitivity analysis of the GNSS derived Victoria plate motion
NASA Astrophysics Data System (ADS)
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region, and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the minimal threshold considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (longer data spans for some stations): Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations have become available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al., 2013), which can use different noise models and estimate offsets and seasonal signals simultaneously. Seasonal signal estimation is another important element, since the time-series are rather short or have large data gaps at some stations, which implies that the seasonal signals can still have some effect on the estimated trends, as shown by Blewitt and Lavallée (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they can, detected and undetected, influence the estimated plate motion. The times of offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analysis employs a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.
Sensitivity analysis of discrete structural systems: A survey
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.
1984-01-01
Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
Path-sensitive analysis for reducing rollback overheads
O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong
2014-07-22
A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.
Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations
NASA Astrophysics Data System (ADS)
Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej
2010-06-01
Modeling of blood flow with respect to rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris Design method was performed to identify the rheological parameters and model outputs that control the blood flow to a significant extent. This paper is part of work on the identification of parameters controlling the clotting process.
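The Morris Design (elementary effects) method used above perturbs one input at a time along randomized trajectories. A single-step sketch, assuming a generic scalar model `f` (the full method averages |EE| over many trajectories to rank parameters), is:

```python
def elementary_effects(f, x, delta=0.1):
    """Elementary effect of each input at base point x:
    EE_i = (f(x + delta * e_i) - f(x)) / delta."""
    base = f(x)
    effects = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += delta          # perturb only input i
        effects.append((f(xp) - base) / delta)
    return effects
```

For a model linear in an input, the elementary effect recovers that input's coefficient exactly; for nonlinear inputs it varies with the base point, which is what the Morris statistics summarize.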
Al Okab, Riyad Ahmed
2013-02-15
Green analytical methods using Cisapride (CPE) as a green analytical reagent were investigated in this work. Rapid, simple, and sensitive spectrophotometric methods for the determination of bromate in water samples, bread and flour additives were developed. The proposed methods are based on the oxidative coupling between phenoxazine and Cisapride in the presence of bromate to form a red colored product with λmax at 520 nm. Phenoxazine, Cisapride and their reaction products were found to be environmentally friendly under the optimum experimental conditions. The method obeys Beer's law in the concentration range 0.11-4.00 µg ml(-1) with a molar absorptivity of 1.41 × 10(4) L mol(-1) cm(-1). All variables were optimized, and the presented reaction sequences were applied to the analysis of bromate in water, bread and flour additive samples. The performance of these methods was evaluated in terms of Student's t-test and the variance ratio F-test to establish the significance of the proposed methods relative to the reference method. The combination of pharmaceutical drug reagents at low concentration creates some unique green chemical analyses.
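The quantitation step implied by Beer's law in this abstract is a one-line calculation; the absorbance value below is hypothetical, and the molar absorptivity is the one reported in the abstract:

```python
def concentration_from_absorbance(absorbance, molar_absorptivity, path_cm=1.0):
    """Beer's law A = epsilon * l * c, solved for molar concentration c
    (epsilon in L mol^-1 cm^-1, path length l in cm)."""
    return absorbance / (molar_absorptivity * path_cm)

# Hypothetical measured absorbance of 0.5 at 520 nm, 1 cm cell
c_molar = concentration_from_absorbance(0.5, 1.41e4)
```

Multiplying the result by the analyte's molar mass converts it back to a mass concentration for comparison with the stated linear range.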
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
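The abstract's approach of estimating an output standard deviation from numerical derivatives with respect to each input is first-order (delta-method) uncertainty propagation. A generic sketch, assuming independent inputs and a smooth scalar model `f` (not the finite-element foam model itself), is:

```python
def propagate_std(f, x, sigmas, h=1e-6):
    """First-order standard deviation of f(x): central finite differences
    give each partial derivative, whose squared products with the input
    standard deviations sum to the output variance."""
    var = 0.0
    for i, sigma in enumerate(sigmas):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        partial = (f(xp) - f(xm)) / (2.0 * h)
        var += (partial * sigma) ** 2
    return var ** 0.5
```

The abstract's warning applies directly here: if `f` is itself computed as a derivative (like the burn velocity), the step size `h` and the model's numerical noise must be controlled carefully or the propagated variance is dominated by noise.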
Sensitivity analysis for texture models applied to rust steel classification
NASA Astrophysics Data System (ADS)
Trujillo, Maite; Sadki, Mustapha
2004-05-01
The exposure of metallic structures to rust degradation during their operational life is a known problem affecting storage tanks, steel bridges, ships, etc. In order to prevent this degradation and the potential related catastrophes, the surfaces have to be assessed, and the appropriate surface treatment and coating need to be applied according to the corrosion time of the steel. We previously investigated the potential of image processing techniques to tackle this problem. Several mathematical methods were analyzed and evaluated on a database of 500 images. In this paper, we extend our previous research and provide a further analysis of textural mathematical methods for the automatic detection of steel rusting time. Statistical descriptors are provided to evaluate the sensitivity of the results, as well as the advantages and limitations of the different methods. Finally, a selector of classifier algorithms is introduced, and the ratio between the sensitivity of the results and the time response (execution time) is analyzed to balance good classification results (high sensitivity) against an acceptable time response for the automation of the system.
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, no assessments of data sensitivity to model parameters performed at the global scale are known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the actual crop yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5x5 arc minute resolution and improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted on one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
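The one-at-a-time sensitivity index defined above (relative change of VWC over relative change of an input) can be sketched generically; the toy `vwc` model and parameter names below are illustrative only, not the study's soil water balance:

```python
def sensitivity_index(model, params, name, rel_change=0.01):
    """Ratio of the relative change in model output to the relative
    change in one input, all other inputs held at reference values."""
    reference = model(params)
    perturbed = dict(params)
    perturbed[name] *= 1.0 + rel_change
    return ((model(perturbed) - reference) / reference) / rel_change

# Toy VWC model: seasonal evapotranspiration (mm) over yield (t/ha)
vwc = lambda p: p["evapotranspiration"] / p["yield"]
base = {"evapotranspiration": 500.0, "yield": 4.0}
```

For this ratio-form model the index is +1 for evapotranspiration and about -1 for yield, matching the intuition that VWC scales directly with water use and inversely with yield.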
Analysis of Transition-Sensitized Turbulent Transport Equations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.
2005-01-01
The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.
Comparative Analysis of State Fish Consumption Advisories Targeting Sensitive Populations
Scherer, Alison C.; Tsuchiya, Ami; Younglove, Lisa R.; Burbacher, Thomas M.; Faustman, Elaine M.
2008-01-01
Objective: Fish consumption advisories are issued to warn the public of possible toxicological threats from consuming certain fish species. Although developing fetuses and children are particularly susceptible to toxicants in fish, fish also contain valuable nutrients. Hence, formulating advice for sensitive populations poses challenges. We conducted a comparative analysis of advisory Web sites issued by states to assess health messages that sensitive populations might access. Data sources: We evaluated state advisories accessed via the National Listing of Fish Advisories issued by the U.S. Environmental Protection Agency. Data extraction: We created criteria to evaluate advisory attributes such as risk and benefit message clarity. Data synthesis: All 48 state advisories issued at the time of this analysis targeted children, 90% (43) targeted pregnant women, and 58% (28) targeted women of childbearing age. Only six advisories addressed single contaminants, while the remainder based advice on 2-12 contaminants. Results revealed that advisories associated a dozen contaminants with specific adverse health effects. Beneficial health effects of any kind were specifically associated only with omega-3 fatty acids found in fish. Conclusions: These findings highlight the complexity of assessing and communicating information about multiple-contaminant exposure from fish consumption. Communication regarding potential health benefits conferred by specific fish nutrients was minimal and focused primarily on omega-3 fatty acids. This overview suggests some lessons learned and highlights a lack of both clarity and consistency in providing the breadth of information that sensitive populations such as pregnant women need to make public health decisions about fish consumption during pregnancy. PMID:19079708
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (the "Critical Factors Tool," or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis
NASA Technical Reports Server (NTRS)
Burgreen, Gregory W.
1995-01-01
An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.
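The Bezier-Bernstein polynomial surface parameterization mentioned above rests on de Casteljau evaluation of the Bernstein basis; a minimal curve version (tensor-product surfaces apply the same recursion along each parameter direction) is:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve (Bernstein basis) at parameter t in [0, 1]
    via de Casteljau's algorithm: repeated linear interpolation of the
    control polygon until one point remains."""
    pts = [list(p) for p in control_points]
    while len(pts) > 1:
        pts = [[(1.0 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return pts[0]
```

Because the shape is controlled by a handful of control points, design variables in the optimization reduce to control-point coordinates, which is what makes the surface representation "general and flexible" with few parameters.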
Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)
Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.
2012-10-01
No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review of the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be $71 per megawatt-hour (MWh), versus $225/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.
Dong, Xuelin; Zhang, Changxing; Feng, Xue; Duan, Zhiyin
2016-06-10
The coherent gradient sensing (CGS) method, one kind of shear interferometry sensitive to surface slope, has been applied to full-field curvature measuring for decades. However, its accuracy, sensitivity, and resolution have not been studied clearly. In this paper, we analyze the accuracy, sensitivity, and resolution for the CGS method based on the derivation of its working principle. The results show that the sensitivity is related to the grating pitch and distance, and the accuracy and resolution are determined by the wavelength of the laser beam and the diameter of the reflected beam. The sensitivity is proportional to the ratio of grating distance to its pitch, while the accuracy will decline as this ratio increases. In addition, we demonstrate that using phase gratings as the shearing element can improve the interferogram and enhance accuracy, sensitivity, and resolution. The curvature of a spherical reflector is measured by CGS with Ronchi gratings and phase gratings under different experimental parameters to illustrate this analysis. All of the results are quite helpful for CGS applications. PMID:27409035
Design and analysis of a PZT-based micromachined acoustic sensor with increased sensitivity.
Wang, Zheyao; Wang, Chao; Liu, Litian
2005-10-01
The ever-growing applications of lead zirconate titanate (PZT) thin films to sensing devices have given birth to a variety of microsensors. This paper presents the design and theoretical analysis of a PZT-based micro acoustic sensor that uses interdigital electrodes (IDE) and in-plane polarization (IPP) instead of the commonly used parallel-plate electrodes (PPE) and through-thickness polarization (TTP). The sensitivity of IDE-based sensors is increased due to the small capacitance of the interdigital capacitor and the large and adjustable electrode spacing. In addition, the sensitivity takes advantage of a large piezoelectric coefficient d33 rather than d31, which is used in PPE-based sensors, resulting in a further improvement in the sensitivity. Laminated beam theory is used to analyze the laminated piezoelectric sensors, and the capacitance of the IDE is deduced by using conformal mapping and partial capacitance techniques. Analytical formulations for predicting the sensitivity of both PPE- and IDE-based microsensors are presented, and factors that influence sensitivity are discussed in detail. Results show that the IDE and IPP can improve the sensitivity significantly.
NASA Astrophysics Data System (ADS)
Horvath, K.; Ivančan-Picek, B.
2009-03-01
A 12-15 November 2004 cyclone on the lee side of the Atlas Mountains and the related occurrence of severe bora along the eastern Adriatic coast are numerically analyzed using the MM5 mesoscale model. Motivated by the fact that sub-synoptic scales are more sensitive to initialization errors and dominate forecast error growth, this study is designed in order to assess the sensitivity of the mesoscale forecast to the intensity of mesoscale potential vorticity (PV) anomalies. Five sensitivity simulations are performed after subtracting the selected anomalies from the initial conditions, allowing for the analysis of the cyclone intensity and track, and additionally, the associated severe bora in the Adriatic. The results of the ensemble show that the cyclone is highly sensitive to the exact details of the upper-level dynamic forcing. The spread of cyclone intensities is the greatest in the mature phase of the cyclone lifecycle, due to different cyclone advection speeds towards the Mediterranean. However, the diffluence of the cyclone tracks appears to be the greatest during the cyclone movement out of the Atlas lee, prior to the mature stage of cyclone development, most likely due to the predominant upper-level steering control and its influence on the thermal anomaly creation in the mountain lee. Furthermore, it is quantitatively shown that the southern Adriatic bora is more sensitive to cyclone presence in the Mediterranean than bora in the northern Adriatic, due to the unequal influence of the cyclone on the cross-mountain pressure gradient formation. The orographically induced pressure perturbation is strongly correlated with bora in the northern and to a lesser extent in the southern Adriatic, implying the existence of additional controlling mechanisms to bora in the southern part of the basin. In addition, it is shown that the bora intensity in the southern Adriatic is highly sensitive to the precise sub-synoptic pressure distribution in the cyclone itself, indicating a
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
Global sensitivity analysis of the Indian monsoon during the Pleistocene
NASA Astrophysics Data System (ADS)
Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.
2015-01-01
The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation
Global sensitivity analysis of Indian Monsoon during the Pleistocene
NASA Astrophysics Data System (ADS)
Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.
2014-04-01
The sensitivity of the Indian Monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) develop an experiment plan, designed to efficiently sample a 5-dimensional input space spanning Pleistocene astronomical configurations (3 parameters), CO2 concentration and a Northern Hemisphere glaciation index, (2) develop, calibrate and validate an emulator of HadCM3, in order to estimate the response of the Indian Monsoon over the full input space spanned by the experiment design, and (3) estimate and interpret sensitivity diagnostics, including sensitivity measures, in order to synthesize the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. Specifically, we focus on four variables: summer (JJAS) temperature and precipitation over North India, and JJAS sea-surface temperature and mixed-layer depth over the north-western side of the Indian ocean. It is shown that precession controls the response of four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on
Parallel-vector design sensitivity analysis in structural dynamics
NASA Technical Reports Server (NTRS)
Zhang, Y.; Nguyen, D. T.
1992-01-01
This paper presents a parallel-vector algorithm for sensitivity calculations in linear structural dynamics. The proposed alternative formulation works efficiently with the reduced system of dynamic equations, since it eliminates the need for expensive and complicated basis-vector derivatives, which are required in the conventional reduced system formulation. The relationship between the alternative formulation and the conventional reduced system formulation has been established, and it has been proven analytically that the two approaches are identical when all the mode shapes are included. This paper validates the proposed alternative algorithm through numerical experiments, where only a small number of mode shapes are used. In addition, a modified mode acceleration method is presented, so that not only the displacements but also the velocities and accelerations are shown to be improved.
GPU-based Integration with Application in Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar
2010-05-01
The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of Environmental modelling and Environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One important class of methods for sensitivity analysis is Monte Carlo based, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time-step, and the number of time-steps can be more than a million), which motivates its grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) Grid implementation of the DEM, (ii) Grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but for parallel implementations. Good scrambling is
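The Saltelli variance-based procedure mentioned above can be sketched in a few lines. The toy model and sample sizes below are illustrative stand-ins, not the DEM; first-order Sobol indices are estimated from two independent sample matrices A and B:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Toy stand-in for an expensive model output (illustrative only):
    # two additive effects plus one pairwise interaction.
    return x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 2]

k, n = 3, 50_000
A = rng.uniform(-1.0, 1.0, (n, k))   # two independent sample matrices
B = rng.uniform(-1.0, 1.0, (n, k))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = np.empty(k)
for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # A with column i taken from B
    # Sobol/Saltelli first-order estimator: S_i = E[yB * (y_ABi - yA)] / V(y)
    S[i] = np.mean(yB * (model(ABi) - yA)) / var_y

print(S)   # roughly [0.20, 0.79, 0.00]
```

The indices apportion output variance to each input; the shortfall of their sum below 1 reflects the x1-x3 interaction term, which is what total-effect indices would pick up.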
Sensitivity analysis of an urban stormwater microorganism model.
McCarthy, D T; Deletic, A; Mitchell, V G; Diaper, C
2010-01-01
This paper presents the sensitivity analysis of a newly developed model which predicts microorganism concentrations in urban stormwater (MOPUS--MicroOrganism Prediction in Urban Stormwater). The analysis used Escherichia coli data collected from four urban catchments in Melbourne, Australia. The MICA program (Model Independent Markov Chain Monte Carlo Analysis), used to conduct this analysis, applies a carefully constructed Markov Chain Monte Carlo procedure, based on the Metropolis-Hastings algorithm, to explore the model's posterior parameter distribution. It was determined that the majority of parameters in the MOPUS model were well defined, with the data from the MCMC procedure indicating that the parameters were largely independent. However, a sporadic correlation found between two parameters indicates that some improvements may be possible in the MOPUS model. This paper identifies the parameters which are the most important during model calibration; it was shown, for example, that parameters associated with the deposition of microorganisms in the catchment were more influential than those related to microorganism survival processes. These findings will help users calibrate the MOPUS model, and will help the model developer to improve the model, with efforts currently being made to reduce the number of model parameters, whilst also reducing the slight interaction identified.
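MICA itself is not reproduced here; as a hedged illustration of the Metropolis-Hastings machinery that underlies such an analysis, the sketch below samples the posterior of a single parameter from synthetic data. All names and numbers are invented, not taken from the MOPUS calibration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed" data (a stand-in for log E. coli concentrations).
true_mu = 3.0
data = rng.normal(true_mu, 1.0, 50)

def log_posterior(mu):
    # Flat prior on mu; Gaussian likelihood with known unit variance.
    return -0.5 * np.sum((data - mu) ** 2)

# Metropolis-Hastings with a symmetric Gaussian random-walk proposal.
n_steps, step = 20_000, 0.5
chain = np.empty(n_steps)
mu = 0.0
lp = log_posterior(mu)
for t in range(n_steps):
    prop = mu + step * rng.normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept with prob min(1, ratio)
        mu, lp = prop, lp_prop
    chain[t] = mu

posterior = chain[5_000:]                      # discard burn-in
print(posterior.mean(), posterior.std())
```

Parameter identifiability of the kind the paper reports would show up here as a narrow, unimodal posterior; correlated parameters would appear as structure in the joint chain of a multi-parameter version.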
Jalal, Hawre; Goldhaber-Fiebert, Jeremy D; Kuntz, Karen M
2015-07-01
Decision makers often desire both guidance on the most cost-effective interventions given current knowledge and also the value of collecting additional information to improve the decisions made (i.e., from value of information [VOI] analysis). Unfortunately, VOI analysis remains underused due to the conceptual, mathematical, and computational challenges of implementing Bayesian decision-theoretic approaches in models of sufficient complexity for real-world decision making. In this study, we propose a novel practical approach for conducting VOI analysis using a combination of probabilistic sensitivity analysis, linear regression metamodeling, and unit normal loss integral function--a parametric approach to VOI analysis. We adopt a linear approximation and leverage a fundamental assumption of VOI analysis, which requires that all sources of prior uncertainties be accurately specified. We provide examples of the approach and show that the assumptions we make do not induce substantial bias but greatly reduce the computational time needed to perform VOI analysis. Our approach avoids the need to analytically solve or approximate joint Bayesian updating, requires only one set of probabilistic sensitivity analysis simulations, and can be applied in models with correlated input parameters. PMID:25840900
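A minimal sketch of the parametric idea, with invented numbers: for two strategies whose incremental net benefit (INB) is approximately normal with mean m and standard deviation s, the per-decision EVPI computed nonparametrically from probabilistic sensitivity analysis samples can be checked against the unit normal loss integral formula EVPI = s·L(|m|/s), with L(z) = φ(z) - z(1 - Φ(z)).

```python
import numpy as np
from math import sqrt, pi, erf, exp

rng = np.random.default_rng(3)

def phi(z):   # standard normal pdf
    return exp(-0.5 * z * z) / sqrt(2.0 * pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# PSA samples of incremental net benefit of strategy B vs. A
# (synthetic numbers; a real decision model would supply these).
m, s = 500.0, 2000.0
inb = rng.normal(m, s, 100_000)

# Nonparametric EVPI: expected gain from always picking the true best arm.
evpi_mc = np.mean(np.maximum(inb, 0.0)) - max(np.mean(inb), 0.0)

# Parametric EVPI via the unit normal loss integral.
z = abs(m) / s
evpi_unli = s * (phi(z) - z * (1.0 - Phi(z)))

print(evpi_mc, evpi_unli)   # the two estimates nearly agree
```

The parametric route needs only the mean and standard deviation of the INB, which is why combining it with a linear regression metamodel of the PSA output makes VOI computation cheap.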
Jalal, Hawre; Goldhaber-Fiebert, Jeremy D.; Kuntz, Karen M.
2016-01-01
Decision makers often desire both guidance on the most cost-effective interventions given current knowledge and also the value of collecting additional information to improve the decisions made [i.e., from value of information (VOI) analysis]. Unfortunately, VOI analysis remains underutilized due to the conceptual, mathematical and computational challenges of implementing Bayesian decision theoretic approaches in models of sufficient complexity for real-world decision making. In this study, we propose a novel practical approach for conducting VOI analysis using a combination of probabilistic sensitivity analysis, linear regression metamodeling, and unit normal loss integral function – a parametric approach to VOI analysis. We adopt a linear approximation and leverage a fundamental assumption of VOI analysis which requires that all sources of prior uncertainties be accurately specified. We provide examples of the approach and show that the assumptions we make do not induce substantial bias but greatly reduce the computational time needed to perform VOI analysis. Our approach avoids the need to analytically solve or approximate joint Bayesian updating, requires only one set of probabilistic sensitivity analysis simulations, and can be applied in models with correlated input parameters. PMID:25840900
Dried blood spot analysis of creatinine with LC-MS/MS in addition to immunosuppressants analysis.
Koster, Remco A; Greijdanus, Ben; Alffenaar, Jan-Willem C; Touw, Daan J
2015-02-01
In order to monitor creatinine levels or to adjust the dosage of renally excreted or nephrotoxic drugs, the analysis of creatinine in dried blood spots (DBS) could be a useful addition to DBS analysis. We developed an LC-MS/MS method for the analysis of creatinine in the same DBS extract that was used for the analysis of tacrolimus, sirolimus, everolimus, and cyclosporine A in transplant patients with the use of Whatman FTA DMPK-C cards. The method was validated using three different strategies: a seven-point calibration curve using the intercept of the calibration to correct for the natural presence of creatinine in reference samples, a one-point calibration curve at an extremely high concentration in order to diminish the contribution of the natural presence of creatinine, and the use of creatinine-[(2)H3] with an eight-point calibration curve. The validated range for creatinine was 120 to 480 μmol/L (seven-point calibration curve), 116 to 7000 μmol/L (one-point calibration curve), and 1.00 to 400.0 μmol/L for creatinine-[(2)H3] (eight-point calibration curve). The precision and accuracy results for all three validations showed a maximum CV of 14.0% and a maximum bias of -5.9%. Creatinine in DBS was found stable at ambient temperature and 32 °C for 1 week and at -20 °C for 29 weeks. Good correlations were observed between patient DBS samples and routine enzymatic plasma analysis, showing that the DBS method can be used as an alternative to plasma creatinine measurement.
Global sensitivity analysis of the radiative transfer model
NASA Astrophysics Data System (ADS)
Neelam, Maheshwari; Mohanty, Binayak P.
2015-04-01
With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The Radiative Transfer Model (RTM) behaves more non-linearly in SMEX02 and linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water rich environment (due to higher observed SM) and the SMEX02 fields to be an energy rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first order effects as well as interactions between the parameters change with water and energy rich environments.
Sensitivity analysis of channel-bend hydraulics influenced by vegetation
NASA Astrophysics Data System (ADS)
Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.
2015-12-01
Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
Sensitivity analysis for computer model projections of hurricane losses.
Iman, Ronald L; Johnson, Mark E; Watson, Charles C
2005-10-01
Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from $2 billion to $3 billion in losses late on the 12th to a peak of $50 billion for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm struck the resort areas of Charlotte Harbor and moved across the densely populated central part of the state, with early poststorm estimates in the $28 to $31 billion range, and final estimates converging at $15 billion as the actual intensity at landfall became apparent. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has a great appreciation for the role of computer models in projecting losses from hurricanes. The FCHLPM contracts with a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a sophisticated computer model based on the Holland wind field. Sensitivity analyses presented in this article utilize standardized regression coefficients to quantify the contribution of the computer input variables to the magnitude of the wind speed.
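Standardized regression coefficients of the kind used in that analysis can be sketched as follows; the input names, coefficients, and data are invented for illustration, not drawn from the hurricane model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic inputs standing in for model variables (illustrative only),
# e.g. pressure deficit, radius of maximum winds, translation speed.
n = 5_000
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 0.5, n)

# Standardized regression coefficients (SRCs): regress the z-scored output
# on z-scored inputs; each SRC measures that input's contribution in
# standard-deviation units, making inputs with different scales comparable.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

print(src)   # input 0 dominates; squared SRCs sum to roughly R^2
```

For a nearly linear model, ranking inputs by |SRC| gives the same ordering as a variance-based sensitivity analysis at a fraction of the cost.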
NASA Astrophysics Data System (ADS)
Chou, C. S.; Tsai, P. J.; Wu, P.; Shu, G. G.; Huang, Y. H.; Chen, Y. S.
2014-04-01
This study investigates the relationship between the performance of a dye-sensitized solar cell (DSSC) sensitized by a natural sensitizer of Taiwan Roselle anthocyanin (TRA) and the fabrication process conditions of the DSSC. A set of systematic experiments has been carried out at various soaking temperatures, soaking periods, sensitizer concentrations, pH values, and additions of single-walled carbon nanotube (SWCNT). An absorption peak (520 nm) is found for TRA, and it is close to that of the N719 dye (518 nm). At a fixed concentration of TRA and a fixed soaking period, a lower pH of the extract or a lower soaking temperature is found favorable to the formation of pigment cations, which leads to an enhanced power conversion efficiency (η) of the DSSC. For instance, by applying 17.53 mg/100 ml TRA at 30 °C for 10 h, as the pH of the extract decreases to 2.00 from 2.33 (the original pH of TRA), the η of the DSSC with a TiO2+SWCNT electrode increases to 0.67% from 0.11% of a traditional DSSC with a TiO2 electrode. This performance improvement can be explained by the combined effect of the pH of the sensitizer and the addition of SWCNTs, a first investigation of a DSSC using this natural sensitizer with SWCNTs.
Sensitivity analysis for high accuracy proximity effect correction
NASA Astrophysics Data System (ADS)
Thrun, Xaver; Browning, Clyde; Choi, Kang-Hoon; Figueiro, Thiago; Hohle, Christoph; Saib, Mohamed; Schiavone, Patrick; Bartha, Johann W.
2015-10-01
A sensitivity analysis (SA) algorithm was developed and tested to comprehend the influences of different test pattern sets on the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice. It allows attributing the variance of the output of a model to the sum of the variances of each input of the model and their correlated factors. The objective of this development is increasing the accuracy of the resolved PSF model in the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of development. Fraunhofer IPMS-CNT exposed the prepared patterns and examined them to visualize the link between the sensitivities of the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, i.e., which PSF parameter is affected by the use of a certain pattern. Secondly, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which range of proximity effect correction is crucial for which portion of a real application pattern in electron beam exposure.
Neutron activation analysis; A sensitive test for trace elements
Hossain, T.Z. (Ward Lab.)
1992-01-01
This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10^12 neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10^-8) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.
Sensitivity analysis for causal inference using inverse probability weighting.
Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C
2011-09-01
Evaluation of impact of potential uncontrolled confounding is an important component for causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score that can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
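A minimal sketch of the IPW sensitivity idea on synthetic data: the specific perturbation below, a multiplicative error applied to the propensity odds, is an illustrative choice standing in for the paper's error parameterization, and all data-generating numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic observational data: one confounder x, binary treatment t,
# outcome y with a true average treatment effect of 2.0.
n = 20_000
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-x))          # true propensity score
t = rng.uniform(size=n) < p_true
y = 2.0 * t + x + rng.normal(0, 1, n)

def ipw_ate(y, t, p):
    """Inverse-probability-weighted estimate of the average treatment effect."""
    return np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))

# Sensitivity analysis: perturb the propensity score by a multiplicative
# error on the odds scale and watch how the effect estimate moves.
for gamma in (0.8, 1.0, 1.25):
    odds = gamma * p_true / (1.0 - p_true)
    p_perturbed = odds / (1.0 + odds)
    print(gamma, round(ipw_ate(y, t, p_perturbed), 2))
```

At gamma = 1 the weights are correct and the estimate recovers the true effect; sweeping gamma traces out how large an uncontrolled confounding error would have to be to overturn the conclusion.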
Apparatus and Method for Ultra-Sensitive trace Analysis
Lu, Zhengtian; Bailey, Kevin G.; Chen, Chun Yen; Li, Yimin; O'Connor, Thomas P.; Young, Linda
2000-01-03
An apparatus and method for conducting ultra-sensitive trace element and isotope analysis. The apparatus injects a sample through a fine nozzle to form an atomic beam. A DC discharge is used to elevate select atoms to a metastable energy level. These atoms are then acted on by a laser oriented orthogonally to the beam path to reduce the transverse velocity and to decrease the divergence angle of the beam. The beam then enters a Zeeman slower, where a counter-propagating laser beam acts to slow the atoms down. Selected atoms are then captured in a magneto-optical trap, where they undergo fluorescence. A portion of the scattered photons is imaged onto a photo-detector, and the results are analyzed to detect the presence of single atoms of the specific trace elements.
Displacement Monitoring and Sensitivity Analysis in the Observational Method
NASA Astrophysics Data System (ADS)
Górska, Karolina; Muszyński, Zbigniew; Rybak, Jarosław
2013-09-01
This work discusses the fundamentals of designing deep excavation support by means of the observational method. The effective tools for optimum design with the observational method are inclinometric and geodetic monitoring, which provide data for a systematically updated calibration of the numerical computational model. The analysis included methods for selecting data for the design (by choosing the basic random variables), as well as methods for on-going verification of the results of numerical calculations (e.g., FEM) by measuring the structure displacement using geodetic and inclinometric techniques. The presented example shows the sensitivity analysis of the calculation model for a cantilever wall in non-cohesive soil; that analysis makes it possible to select the data to be later subject to calibration. The paper presents the results of measurements of sheet pile wall displacement, carried out by means of the inclinometric method and, simultaneously, two geodetic methods, successively with the deepening of the excavation. This work also includes critical comments regarding the usefulness of the obtained data, as well as practical aspects of taking measurements in the conditions of on-going construction works.
Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.
Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier
2012-12-01
The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (Integrated Valuation of Ecosystem Services and Tradeoffs), a spatially explicit modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all InVEST modules and sub-modules, only the behaviour of the water-provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the 154 sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each sub-watershed under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and within the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
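A minimal sketch of the Morris One-At-a-Time screening named in this abstract, with a toy model in place of the InVEST water-provisioning module (the function, the parameter count and the sampling settings are illustrative):

```python
import random

random.seed(1)

def morris_mu_star(model, n_params, n_trajectories=50, delta=0.1):
    """Morris One-At-a-Time screening on the unit hypercube: returns the mean
    absolute elementary effect (mu*) for each input parameter."""
    mu_star = [0.0] * n_params
    for _ in range(n_trajectories):
        x = [random.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        base = model(x)
        for i in range(n_params):
            xp = list(x)
            xp[i] += delta  # perturb one factor at a time
            mu_star[i] += abs(model(xp) - base) / delta
    return [m / n_trajectories for m in mu_star]

# Toy stand-in model: output driven strongly by the second input, weakly by the
# first, and not at all by the third (mimicking the negligible-Z finding
# reported in the abstract).
toy = lambda x: 0.1 * x[0] + 5.0 * x[1] + 0.0 * x[2]
effects = morris_mu_star(toy, 3)  # inert factors screen out with mu* ~ 0
```

Running such a screening per sub-watershed and mapping the resulting mu* values is, in essence, the spatially distributed analysis the abstract describes.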
A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity
NASA Astrophysics Data System (ADS)
Tierney, G.; Posselt, D. J.; Booth, J. F.
2015-12-01
The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean-state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in the control parameter space is performed via an ensemble of WRF runs coupled with
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models and obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of up to 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology and location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
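The residual-resampling idea can be sketched as follows; the two-step model chain and the synthetic residual pools below are illustrative stand-ins, not the paper's fitted models:

```python
import random
import statistics

random.seed(2)

# Empirical residual pools for two chained models; in the paper these would be
# residuals of fitted irradiance and power models, here they are synthetic.
poa_residuals = [random.gauss(0.0, 5.0) for _ in range(500)]    # W/m^2
power_residuals = [random.gauss(0.0, 2.0) for _ in range(500)]  # W

def simulate_output(ghi=600.0, n_draws=2000):
    """Propagate model uncertainty by resampling each model's residuals
    through the chain: irradiance translation -> DC power."""
    outputs = []
    for _ in range(n_draws):
        poa = 1.1 * ghi + random.choice(poa_residuals)      # POA irradiance model
        power = 0.2 * poa + random.choice(power_residuals)  # DC power model
        outputs.append(power)
    return outputs

dist = simulate_output()
relative_uncertainty = statistics.stdev(dist) / statistics.mean(dist)
```

Comparing the spread contributed by each residual pool (e.g., by zeroing one pool at a time) reproduces the kind of sensitivity attribution the abstract reports for the POA and effective irradiance steps.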
Reduced order techniques for sensitivity analysis and design optimization of aerospace systems
NASA Astrophysics Data System (ADS)
Parrish, Jefferson Carter
This work proposes a new method for using reduced order models in lieu of high fidelity analysis during the sensitivity analysis step of gradient based design optimization. The method offers a reduction in the computational cost of finite difference based sensitivity analysis in that context. The method relies on interpolating reduced order models which are based on proper orthogonal decomposition. The interpolation process is performed using radial basis functions and Grassmann manifold projection. It does not require additional high fidelity analyses to interpolate a reduced order model for new points in the design space. The interpolated models are used specifically for points in the finite difference stencil during sensitivity analysis. The proposed method is applied to an airfoil shape optimization (ASO) problem and a transport wing optimization (TWO) problem. The errors associated with the reduced order models themselves as well as the gradients calculated from them are evaluated. The effects of the method on the overall optimization path, computation times, and function counts are also examined. The ASO results indicate that the proposed scheme is a viable method for reducing the computational cost of these optimizations. They also indicate that the adaptive step is an effective method of improving interpolated gradient accuracy. The TWO results indicate that the interpolation accuracy can have a strong impact on optimization search direction.
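The finite-difference sensitivity step that the proposed method accelerates can be sketched as follows; in the thesis, `f` at the stencil points would be an interpolated reduced-order model rather than the cheap analytic stand-in used here:

```python
def finite_difference_gradient(f, x, h=1e-6):
    """Central-difference gradient of f at x: one pair of model evaluations per
    design variable, which is the cost the reduced-order models replace."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

# Toy objective standing in for a high-fidelity aerodynamic analysis;
# its analytic gradient at (0, 0) is (-2, 12).
f = lambda x: (x[0] - 1.0) ** 2 + 3.0 * (x[1] + 2.0) ** 2
grad = finite_difference_gradient(f, [0.0, 0.0])
```

Each stencil evaluation of a high-fidelity solver is expensive, which is why substituting interpolated reduced-order models at exactly these points, as the thesis proposes, can cut the cost of gradient-based optimization.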
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
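A minimal Monte Carlo sketch of first-order Sobol' index estimation, the variance-based measure used in this study; plain pseudo-random sampling and a toy additive model replace the Sobol' sequences, the emulator and the 39 CICE parameters:

```python
import random
import statistics

random.seed(3)

def sobol_first_order(model, n_params, n=20000):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices: for each
    input i, correlate outputs of two samples that share only coordinate i."""
    a = [[random.random() for _ in range(n_params)] for _ in range(n)]
    b = [[random.random() for _ in range(n_params)] for _ in range(n)]
    ya = [model(x) for x in a]
    mean = statistics.mean(ya)
    var = statistics.variance(ya)
    indices = []
    for i in range(n_params):
        # Keep coordinate i from sample A, take all other coordinates from B.
        yi = [model([a[k][j] if j == i else b[k][j] for j in range(n_params)])
              for k in range(n)]
        cov = sum(ya[k] * yi[k] for k in range(n)) / n - mean * mean
        indices.append(cov / var)
    return indices

# Additive toy model: the second input carries ~94% of the output variance,
# the third carries none.
s = sobol_first_order(lambda x: x[0] + 4.0 * x[1] + 0.0 * x[2], 3)
```

Ranking parameters by these indices is the same operation the study uses to single out snow conductivity, grain size and melt-pond drainage as the most influential of the 39 parameters.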
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity than to effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g., sandy soil as compared to clayey soil, and "shallow" sources as compared to "deep" sources) are evaluated. Our results not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
Sensitivity analysis for Probabilistic Tsunami Hazard Assessment (PTHA)
NASA Astrophysics Data System (ADS)
Spada, M.; Basili, R.; Selva, J.; Lorito, S.; Sorensen, M. B.; Zonker, J.; Babeyko, A. Y.; Romano, F.; Piatanesi, A.; Tiberti, M.
2012-12-01
In modern societies, probabilistic hazard assessment of natural disasters is commonly used by decision makers for designing regulatory standards and, more generally, for prioritizing risk mitigation efforts. Systematic formalization of Probabilistic Tsunami Hazard Assessment (PTHA) has started only in recent years, mainly following the giant tsunami disaster of Sumatra in 2004. Typically, PTHA for earthquake sources exploits the long-standing practices developed in probabilistic seismic hazard assessment (PSHA), even though important differences are evident. In PTHA, for example, it is known that far-field sources are more important and that physical models for tsunami propagation are needed to capture the highly non-isotropic propagation of tsunami waves. However, considering the high impact that PTHA may have on societies, an important effort to quantify the effect of specific assumptions should be made. Indeed, specific standard hypotheses made in PSHA may prove inappropriate for PTHA, since tsunami waves are sensitive to different aspects of the sources (e.g. fault geometry, scaling laws, slip distribution) and propagate differently. In addition, the necessity of running an explicit calculation of wave propagation for every possible event (tsunami scenario) forces analysts to find strategies for diminishing the computational burden. In this work, we test the sensitivity of hazard results with respect to several assumptions that are peculiar to PTHA and others that are commonly accepted in PSHA. Our case study is located in the central Mediterranean Sea and considers the Western Hellenic Arc as the earthquake source, with Crete and Eastern Sicily as near-field and far-field target coasts, respectively. Our suite of sensitivity tests includes: a) comparison of random seismicity distribution within area sources as opposed to systematically distributed ruptures on fault sources; b) effects of statistical and physical parameters (a- and b-value, Mc, Mmax, scaling laws
Review of seismic probabilistic risk assessment and the use of sensitivity analysis
Shiu, K.K.; Reed, J.W.; McCann, M.W. Jr.
1985-01-01
This paper presents the results of sensitivity reviews performed to address a range of questions that arise in the context of seismic probabilistic risk assessment (PRA). A seismic PRA involves evaluation of the seismic hazard, component fragilities, and system responses. These are combined in an integrated analysis to obtain various risk measures, such as the frequency of plant damage states. Calculation of these measures depends on the combination of non-linear functions based on a number of parameters and assumptions used in the quantification process. Therefore, it is often difficult to examine seismic PRA results and derive useful insights from them if detailed sensitivity studies are absent. In a seismic PRA, sensitivity evaluations can be divided into three areas: hazard, fragility, and system modeling. As part of the review of a standard boiling water reactor seismic PRA, a reassessment of the plant damage state frequencies and a detailed sensitivity analysis were conducted. Seismic event trees and fault trees were developed to model the different system and plant accident sequences. Hazard curves representing various sites on the east coast were obtained; alternative structure and equipment fragility data were postulated. Various combinations of hazard and fragility data were analyzed. In addition, the system modeling was perturbed to examine the impact upon the final results. Order-of-magnitude variations were observed in the plant damage state frequency among the different cases. 6 refs., 2 figs., 3 tabs.
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
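For a single patch with colonization rate c and extinction rate e, the equilibrium occupancy of the two-state chain is c/(c+e), and its sensitivity to c can be checked numerically. This toy sketch illustrates only that basic idea, not the paper's general multistate machinery:

```python
def stationary_distribution(p, n_iter=5000):
    """Equilibrium distribution of a 2-state Markov chain by power iteration;
    p[i][j] is the transition probability from state i to state j."""
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        pi = [pi[0] * p[0][0] + pi[1] * p[1][0],
              pi[0] * p[0][1] + pi[1] * p[1][1]]
    return pi

def occupancy_sensitivity(c, e, h=1e-6):
    """Numerical sensitivity of equilibrium occupancy c/(c+e) to the
    colonization rate c; the analytic derivative is e/(c+e)**2."""
    occ = lambda cc: stationary_distribution([[1.0 - cc, cc], [e, 1.0 - e]])[1]
    return (occ(c + h) - occ(c - h)) / (2.0 * h)

sens = occupancy_sensitivity(c=0.2, e=0.1)  # analytic value: 0.1 / 0.3**2
```

The paper's analytical sensitivities generalize this derivative to multistate chains, derived state variables, and spatially or temporally varying transition probabilities.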
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
NASA Astrophysics Data System (ADS)
Talukder, Srijeeta; Sen, Shrabani; Chakraborti, Prantik; Metzler, Ralf; Banik, Suman K.; Chaudhury, Pinaki
2014-03-01
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence-dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen-bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbour pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that the nature of the stacking interaction has a deciding effect on the DNA breathing dynamics, not the number of times a particular stacking interaction appears in a sequence. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the case where the rate constants are measured using the conventional unbiased way of optimization.
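The degree-of-correlation screening described here can be sketched with a toy model standing in for the Poland-Scheraga breathing dynamics (the parameter count and coefficients are illustrative, not the paper's 14-parameter model):

```python
import random

random.seed(4)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def correlation_screening(model, n_params, n_samples=5000):
    """Rank inputs by |correlation| between sampled parameter values and the
    model output, as in the degree-of-correlation measure of the abstract."""
    samples = [[random.uniform(0.0, 1.0) for _ in range(n_params)]
               for _ in range(n_samples)]
    outputs = [model(x) for x in samples]
    return [abs(pearson([s[i] for s in samples], outputs))
            for i in range(n_params)]

# Toy "mean bubble size": dominated by parameter 0, mildly driven by
# parameter 1, independent of parameter 2.
toy = lambda x: 10.0 * x[0] + 2.0 * x[1] + random.gauss(0.0, 0.1)
ranking = correlation_screening(toy, 3)
```

The most strongly correlated parameters are the natural first targets for the stochastic optimization of rate constants that the abstract proposes to guide with this measure.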
NASA Astrophysics Data System (ADS)
Lee, Deok Yeon; Shin, Chan Yong; Yoon, Seog Joon; Lee, Haw Young; Lee, Wonjoo; Shrestha, Nabeen K.; Lee, Joong Kee; Han, Sung-Hwan
2014-02-01
In the present work, a composite powder of TiO2 nanoparticles and multi-walled carbon nanotubes is prepared hydrothermally. After doctor-blading a paste made from the composite powder, the resulting composite film is sensitized with Cu-based metal-organic frameworks using a layer-by-layer deposition technique, and the film is characterized using FE-SEM, EDX, XRD, UV/Visible spectrophotometry and photoluminescence spectroscopy. The influence of the carbon nanotubes on photovoltaic performance is studied by constructing a Grätzel cell with an electrolyte containing the I3-/I- redox couple. The results demonstrate that the introduction of carbon nanotubes accelerates the electron transfer and thereby enhances the photovoltaic performance of the cell, with a nearly 60% increase in power conversion efficiency.
Spatial risk assessment for critical network infrastructure using sensitivity analysis
NASA Astrophysics Data System (ADS)
Möderl, Michael; Rauch, Wolfgang
2011-12-01
The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate the performance decrease under the investigated threat scenarios. Thereby, parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data for the same threat scenario, derived from structured interviews and cluster analysis of past events. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is applicable likewise to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.
Global sensitivity analysis of analytical vibroacoustic transmission models
NASA Astrophysics Data System (ADS)
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2016-04-01
Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. The qualitative trends are found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio
2006-01-01
Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through an extensive literature review. The reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°C/75°C, -55°C/100°C, and -55°C/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in the JPL test assemblies.
Robust and sensitive video motion detection for sleep analysis.
Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard
2014-05-01
In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by factors of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with only a slight temporal misalignment of the starting time (<1 s) for one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.
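The Matthews correlation coefficient, sensitivity and specificity reported above are all derived from a binary confusion matrix; the counts in this sketch are illustrative, not taken from the paper's sequences:

```python
import math

def matthews_corrcoef(tp, fp, fn, tn):
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def sensitivity(tp, fn):
    """True positive rate: detected movement frames / all movement frames."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: correctly rejected still frames / all still frames."""
    return tn / (tn + fp)

# Hypothetical per-frame counts for a motion detector (not the paper's data):
mcc = matthews_corrcoef(tp=80, fp=10, fn=20, tn=890)  # ~0.83
```

Unlike raw accuracy, the MCC stays informative when classes are heavily imbalanced, which is the usual situation for per-frame movement detection in overnight recordings.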
Fault sensitivity and wear-out analysis of VLSI systems
NASA Astrophysics Data System (ADS)
Choi, Gwan Seung
1994-07-01
This thesis describes simulation approaches for conducting fault sensitivity and wear-out failure analysis of VLSI systems. A fault-injection approach to study transient impact in VLSI systems is developed. Through simulated fault injection at the device level and subsequent fault propagation at the gate, functional and software levels, it is possible to identify critical dependability bottlenecks. Techniques to speed up the fault simulation and to perform statistical analysis of fault impact are developed. A wear-out simulation environment is also developed to closely mimic dynamic sequences of wear-out events in a device through time, to localize the weak locations/aspects of the target chip, and to generate the time-to-failure (TTF) distribution of the VLSI chip as a whole. First, an accurate simulation of a target chip and its application code is performed to acquire trace data (real workload) on switch activity. Then, using this switch activity information, the wear-out of each component in the entire chip is simulated using Monte Carlo techniques.
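The wear-out scheme described above (per-component failure times driven by switch activity, aggregated with Monte Carlo into a chip-level TTF distribution) can be sketched as follows; the Weibull lifetime model, its parameters, and the activity values are assumptions chosen for illustration, not the thesis's actual models:

```python
import random

def chip_ttf_distribution(activity, n_runs=1000, shape=2.0, base_scale=1e6, seed=1):
    """Monte Carlo chip time-to-failure: each component wears out faster
    the more it switches; the chip fails at its first component failure."""
    rng = random.Random(seed)
    ttfs = []
    for _ in range(n_runs):
        # Weibull-distributed lifetime per component; the scale shrinks
        # as the component's switch activity grows.
        lifetimes = [rng.weibullvariate(base_scale / max(a, 1e-9), shape)
                     for a in activity]
        ttfs.append(min(lifetimes))  # weakest component determines chip TTF
    return ttfs

acts = [0.1, 0.5, 0.9, 0.3]          # hypothetical switch-activity rates
dist = chip_ttf_distribution(acts)
print(len(dist), min(dist) > 0)
```

Sorting the resulting `dist` yields an empirical TTF distribution, and the component that most often fails first localizes the chip's weak spot.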
Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis
NASA Astrophysics Data System (ADS)
Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen
2012-05-01
The correlation coefficients of the random variables of mechanical structures are generally chosen from experience, or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To address the selection of correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the reliability sensitivity results, and a criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of the correlation coefficient ρ is smaller in magnitude than 0.00001, the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρ_s (the coefficient to which the reliability is most sensitive) and ρ_R (the coefficient yielding the smallest reliability) is less than 0.001, ρ_s is suggested for modelling the dependency of the random variables; this ensures the robustness of the system without loss of the safety requirement. (3) When |E_abs| > 0.001 and |E_rel| > 0.001, ρ_R should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach provides a practical routine for mechanical design and manufacturing to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.
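The three numerical criteria reported in the abstract amount to a decision rule for choosing the correlation coefficient; a hedged sketch (thresholds as stated in the abstract; the function name and the final fallback branch are assumptions):

```python
def choose_correlation(sens_rho, rho_s, rho_R, e_abs, e_rel):
    """Decision rule sketched from the paper's three numerical criteria:
    sens_rho - reliability sensitivity of the correlation coefficient,
    rho_s    - coefficient to which reliability is most sensitive,
    rho_R    - coefficient yielding the smallest reliability,
    e_abs/e_rel - absolute/relative errors referenced in criterion (3)."""
    if abs(sens_rho) < 1e-5:
        return 0.0, "correlation negligible - ignore it"
    if abs(rho_s - rho_R) < 0.001:
        return rho_s, "use most-sensitive coefficient rho_s"
    if abs(e_abs) > 0.001 and abs(e_rel) > 0.001:
        return rho_R, "use smallest-reliability coefficient rho_R"
    return rho_R, "default to rho_R for accuracy"  # assumed fallback

print(choose_correlation(1e-7, 0.30, 0.31, 0.02, 0.05))
print(choose_correlation(0.01, 0.30, 0.3005, 0.02, 0.05))
```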
2013-01-01
Background Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. Results We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as “pathwise”. The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. Conclusions As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40 m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of the thesis is devoted to methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysics data that must be calibrated to units of meters or strain. The second part of the thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
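The contrast drawn above between simple local derivatives and one-factor-at-a-time (OAT) procedures can be illustrated on a toy response surface; the model and sample values below are invented for demonstration and are not from the presentation:

```python
def model(x1, x2):
    # Toy response surface with an interaction term.
    return x1**2 + 3*x2 + 0.5*x1*x2

def local_sensitivity(f, x, i, h=1e-6):
    """Local SA: central finite-difference derivative of f w.r.t. factor i."""
    lo, hi = list(x), list(x)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

def oat_range(f, x, i, span):
    """OAT: response range as factor i sweeps over span, others held fixed."""
    vals = []
    for v in span:
        pt = list(x)
        pt[i] = v
        vals.append(f(*pt))
    return max(vals) - min(vals)

x0 = (1.0, 2.0)
print(local_sensitivity(model, x0, 0))  # analytic value: 2*x1 + 0.5*x2 = 3.0
print(oat_range(model, x0, 1, [0, 1, 2, 3]))
```

Both measures depend on the base point `x0`, which is exactly the locality the abstract criticizes: neither captures the interaction term's effect away from `x0` the way a variance-based (Sobol-type) decomposition would.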
Kang, KyeongJin
2016-03-01
As a further elaboration of the recently devised Q10 scanning analysis ("Exceptionally high thermal sensitivity of rattlesnake TRPA1 correlates with peak current amplitude" [1]), the interval between current data points at two temperatures was shortened and the resulting parameters representing thermal sensitivities such as peak Q10s and temperature points of major thermosensitivity events are presented for two TRPA1 orthologues from rattlesnakes and boas. In addition, the slope factors from Boltzmann fitting and the change of molar heat capacity of temperature-evoked currents were evaluated and compared as alternative ways of thermal sensitivity appraisal of TRPA1 orthologues.
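The Q10 thermal-sensitivity parameter discussed above follows the standard definition Q10 = (I2/I1)^(10/(T2-T1)); a minimal sketch with illustrative values (not data from the study):

```python
def q10(i1, t1, i2, t2):
    """Temperature coefficient Q10 from currents i1, i2 measured at
    temperatures t1, t2 (degrees Celsius)."""
    return (i2 / i1) ** (10.0 / (t2 - t1))

# A current that doubles over a 10 degC step has Q10 = 2.
print(q10(1.0, 25.0, 2.0, 35.0))  # 2.0
```

Shortening the interval between the two temperature points, as done in the Q10 scanning analysis, turns this single ratio into a temperature-resolved Q10 profile whose peak characterizes each orthologue's thermal sensitivity.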
Analysis methods for the determination of anthropogenic additions of P to agricultural soils
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorus additions to, and their measurement in, soil are of concern on lands where biosolids have been applied. Colorimetric analysis for plant-available P may be inadequate for the accurate assessment of soil P. Phosphate additions in a regulatory environment need to be accurately assessed as the reported...
Mokhtari, Amirhossein; Frey, H Christopher
2005-12-01
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
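The ANOVA-based ranking described above (grouping samples by each input, then comparing between-group to within-group variance of the risk output via the F statistic) can be sketched as follows; the toy risk model, seed, and bin count are assumptions for illustration:

```python
import random

def anova_f(groups):
    """One-way ANOVA F statistic: between-group vs within-group variance."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def bin_groups(x, y, nbins=4):
    """Bin a continuous input x into quantile groups of the output y."""
    pairs = sorted(zip(x, y))
    size = len(pairs) // nbins
    return [[v for _, v in pairs[i * size:(i + 1) * size]] for i in range(nbins)]

rng = random.Random(0)
# Hypothetical risk model: input a drives the output, input b is pure noise.
a = [rng.uniform(0, 1) for _ in range(300)]
b = [rng.uniform(0, 1) for _ in range(300)]
risk = [2.0 * ai + 0.1 * rng.gauss(0, 1) for ai in a]

f_a = anova_f(bin_groups(a, risk))
f_b = anova_f(bin_groups(b, risk))
print(f_a > f_b)  # the driving input gets the much larger F value
```

Ranking inputs by F value in this way tolerates the nonlinearity and thresholds that defeat correlation coefficients, which is the advantage the article reports for ANOVA.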
Da Costa, Caitlyn; Reynolds, James C; Whitmarsh, Samuel; Lynch, Tom; Creaser, Colin S
2013-01-01
RATIONALE Chemical additives are incorporated into commercial lubricant oils to modify the physical and chemical properties of the lubricant. The quantitative analysis of additives in oil-based lubricants deposited on a surface without extraction of the sample from the surface presents a challenge. The potential of desorption electrospray ionization mass spectrometry (DESI-MS) for the quantitative surface analysis of an oil additive in a complex oil lubricant matrix without sample extraction has been evaluated. METHODS The quantitative surface analysis of the antioxidant additive octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix was carried out by DESI-MS in the presence of 2-(pentyloxy)ethyl 3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionate as an internal standard. A quadrupole/time-of-flight mass spectrometer fitted with an in-house modified ion source enabling non-proximal DESI-MS was used for the analyses. RESULTS An eight-point calibration curve ranging from 1 to 80 µg/spot of octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix and in the presence of the internal standard was used to determine the quantitative response of the DESI-MS method. The sensitivity and repeatability of the technique were assessed by conducting replicate analyses at each concentration. The limit of detection was determined to be 11 ng/mm2 additive on spot with relative standard deviations in the range 3–14%. CONCLUSIONS The application of DESI-MS to the direct, quantitative surface analysis of a commercial lubricant additive in a native oil lubricant matrix is demonstrated. © 2013 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons, Ltd. PMID:24097398
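The internal-standard quantification described above reduces to fitting a line of analyte/internal-standard intensity ratio against spotted amount, then inverting the fit for an unknown; the calibration data below are illustrative, not the published values:

```python
def fit_line(x, y):
    """Least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical calibration: analyte amount (ug/spot) vs analyte/IS ratio.
amounts = [1, 5, 10, 20, 40, 60, 80]
ratios = [0.05, 0.26, 0.49, 1.02, 1.98, 3.05, 3.96]  # illustrative data
slope, intercept = fit_line(amounts, ratios)

# Quantify an unknown spot from its measured analyte/IS intensity ratio.
unknown_ratio = 1.5
estimate = (unknown_ratio - intercept) / slope
print(estimate)  # estimated ug/spot
```

Ratioing to the internal standard cancels spot-to-spot variation in desorption and ionization efficiency, which is what makes DESI-MS quantitative without extracting the sample.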
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
Advances in Sensitivity Analysis Capabilities with SCALE 6.0 and 6.1
Rearden, Bradley T; Petrie Jr, Lester M; Williams, Mark L
2010-01-01
The sensitivity and uncertainty analysis sequences of SCALE compute the sensitivity of k_eff to each constituent multigroup cross section using perturbation theory based on forward and adjoint transport computations with several available codes. Versions 6.0 and 6.1 of SCALE, released in 2009 and 2010, respectively, include important additions to the TSUNAMI-3D sequence, which computes forward and adjoint solutions in multigroup with the KENO Monte Carlo codes. Previously, sensitivity calculations were performed with the simple and efficient geometry capabilities of KENO V.a, but now calculations can also be performed with the generalized geometry code KENO-VI. TSUNAMI-3D requires spatial refinement of the angular flux moment solutions for the forward and adjoint calculations. These refinements are most efficiently achieved with the use of a mesh accumulator. For SCALE 6.0, a more flexible mesh accumulator capability has been added to the KENO codes, enabling varying granularity of the spatial refinement to optimize the calculation for different regions of the system model. The new mesh capabilities allow the efficient calculation of larger models than were previously possible. Additional improvements in the TSUNAMI calculations were realized in the computation of implicit effects of resonance self-shielding on the final sensitivity coefficients. Multigroup resonance self-shielded cross sections are accurately computed with SCALE's robust deterministic continuous-energy treatment for the resolved and thermal energy range and with Bondarenko shielding factors elsewhere, including the unresolved resonance range. However, the sensitivities of the self-shielded cross sections to the parameters input to the calculation are quantified using only full-range Bondarenko factors.
NASA Astrophysics Data System (ADS)
Cvetković, Dragan; Marković, Dejan
2011-01-01
The aim of this work is to estimate the antioxidant activity of β-carotene in the presence of two different mixtures of phospholipids in hexane solution, under continuous UV-irradiation in three different ranges (UV-A, UV-B, and UV-C). β-Carotene is employed to control the lipid peroxidation process generated by UV-irradiation, in the presence and in the absence of a selected photosensitizer, benzophenone, by scavenging the free radicals created. The results show that β-carotene undergoes substantial, probably structure-dependent destruction (bleaching), highly dependent on the UV-photon energy input and more pronounced in the presence than in the absence of benzophenone. The additional bleaching is synchronized with a further increase in β-carotene antioxidant activity in the presence of benzophenone, implying the same cause: an increase in (phospholipid peroxidation) chain-breaking activity.
Plans for a sensitivity analysis of bridge-scour computations
Dunn, David D.; Smith, Peter N.
1993-01-01
Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.
Sensitivity analysis of near-infrared functional lymphatic imaging
NASA Astrophysics Data System (ADS)
Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon
2012-06-01
Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.
Sensitivity analysis and optimization of the nuclear fuel cycle
Passerini, S.; Kazimi, M. S.; Shwageraus, E.
2012-07-01
A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: capacity factors of the fuel cycle facilities, spent fuel cooling time, thermal reprocessing introduction date, and in-core and out-of-core TRU inventory requirements of the recycling technology. An optimization scheme for the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single- and multi-variable and single- and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)
Sensitivity analysis on an AC600 aluminum skin component
NASA Astrophysics Data System (ADS)
Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.
2016-08-01
New materials are being introduced into the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, the use of aluminum alloys for skin panels is increasing. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on high-strength steel component stamping and on developing new anisotropic models for aluminum cup drawing. However, the impact of correct modelling on the latest aluminum alloys for the manufacturing of skin panels has not yet been analyzed. In this work, the new AC600 aluminum alloy of JLR-Novelis is first characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U-channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.
Sensitivity analysis of surface runoff generation in urban flood forecasting.
Simões, N E; Leitão, J P; Maksimović, C; Sá Marques, A; Pina, R
2010-01-01
Reliable flood forecasting requires hydraulic models capable of estimating pluvial flooding fast enough to enable successful operational responses. Increased computational speed can be achieved by using a 1D/1D model, since 2D models are too computationally demanding. Further gains can be made by simplifying 1D network models, removing or modifying some secondary elements. The Urban Water Research Group (UWRG) of Imperial College London developed a tool that automatically analyses, quantifies and generates the 1D overland flow network. The overland flow network features (ponds and flow pathways) generated by this methodology depend on the number of sewer network manholes and sewer inlets, as some of the overland flow pathways start at manhole (or sewer inlet) locations. Thus, if a simplified version of the sewer network has fewer manholes (or sewer inlets) than the original one, the overland flow network will consequently differ. This paper compares overland flow networks generated with different levels of sewer network skeletonisation. A sensitivity analysis is carried out for one catchment area in Coimbra, Portugal, in order to evaluate the overland flow network characteristics. PMID:20453333
Quantitative Analysis of Polymer Additives with MALDI-TOF MS Using an Internal Standard Approach
NASA Astrophysics Data System (ADS)
Schwarzinger, Clemens; Gabriel, Stefan; Beißmann, Susanne; Buchberger, Wolfgang
2012-06-01
MALDI-TOF MS is used for the qualitative analysis of seven different polymer additives directly from the polymer without tedious sample pretreatment. Additionally, by using a solid sample preparation technique, which avoids the concentration gradient problems known to occur with dried droplets and by adding tetraphenylporphyrine as an internal standard to the matrix, it is possible to perform quantitative analysis of additives directly from the polymer sample. Calibration curves for Tinuvin 770, Tinuvin 622, Irganox 1024, Irganox 1010, Irgafos 168, and Chimassorb 944 are presented, showing coefficients of determination between 0.911 and 0.990.
ERIC Educational Resources Information Center
Akturk, Ahmet Oguz
2015-01-01
Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social supports levels, and analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…
NASA Astrophysics Data System (ADS)
Liu, Yi; Ren, Liliang; Hong, Yang; Zhu, Ye; Yang, Xiaoli; Yuan, Fei; Jiang, Shanhu
2016-07-01
Reasonable input data selection is of great significance for the accurate computation of drought indices. In this study, a comprehensive comparison is conducted of the sensitivity to datasets of two commonly used standardization procedures (SPs) in drought indices, namely the probability-distribution-based SP and the self-calibrating Palmer SP. The standardized Palmer drought index (SPDI) and the self-calibrating Palmer drought severity index (SC-PDSI) are selected as representatives of the two SPs, respectively. Using meteorological observations (1961-2012) in the Yellow River basin, 23 sub-datasets with a length of 30 years are first generated with the moving window method. The whole time series and the 23 sub-datasets are then used to compute the two indices separately, and their spatiotemporal differences, as well as their performance in capturing drought areas, are compared. Finally, a systematic investigation in terms of changing climatic conditions and the varied parameters in each SP is conducted. Results show that SPDI is less sensitive to data selection than SC-PDSI. SPDI series derived from different datasets are highly correlated and consistent in drought-area characterization. Sensitivity analysis shows that, among the three parameters of the generalized extreme value (GEV) distribution, SPDI is most sensitive to changes in the scale parameter, followed by the location and shape parameters. For SC-PDSI, its inconsistent behavior among different datasets is primarily induced by the self-calibrated duration factors (p and q). In addition, it is found that the introduction of the self-calibrating procedure for the duration factors further aggravates the dependence of the drought index on the input dataset compared with the original empirical algorithm that Palmer used, making SC-PDSI more sensitive to variations in the data sample. This study clearly demonstrates the impact of dataset selection on the sensitivity of drought index computation, which has significant implications for proper usage of drought
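The moving-window construction of the 23 thirty-year sub-datasets from the 1961-2012 record can be sketched as follows (only the window bookkeeping is shown, not the drought-index computation itself):

```python
def moving_windows(years, length=30):
    """All contiguous sub-periods of a given length from a (first, last) year range."""
    first, last = years
    return [(start, start + length - 1)
            for start in range(first, last - length + 2)]

# 1961-2012 with 30-year windows yields 23 sub-datasets, as in the study.
windows = moving_windows((1961, 2012), 30)
print(len(windows), windows[0], windows[-1])  # 23 (1961, 1990) (1983, 2012)
```

Computing SPDI and SC-PDSI once per window and comparing the 23 resulting series against the full-record series is what exposes each index's sensitivity to the calibration dataset.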
Economic impact analysis for global warming: Sensitivity analysis for cost and benefit estimates
Ierland, E.C. van; Derksen, L.
1994-12-31
Proper policies for the prevention or mitigation of the effects of global warming require profound analysis of the costs and benefits of alternative policy strategies. Given the uncertainty about the scientific aspects of the process of global warming, in this paper a sensitivity analysis for the impact of various estimates of costs and benefits of greenhouse gas reduction strategies is carried out to analyze the potential social and economic impacts of climate change.
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.
1995-01-01
Three recent developments in the sensitivity analysis for thermomechanical postbuckling response of composite panels are reviewed. The three developments are: (1) effective computational procedure for evaluating hierarchical sensitivity coefficients of the various response quantities with respect to the different laminate, layer, and micromechanical characteristics; (2) application of reduction methods to the sensitivity analysis of the postbuckling response; and (3) accurate evaluation of the sensitivity coefficients to transverse shear stresses. Sample numerical results are presented to demonstrate the effectiveness of the computational procedures presented. Some of the future directions for research on sensitivity analysis for the thermomechanical postbuckling response of composite and smart structures are outlined.
Robustness and period sensitivity analysis of minimal models for biochemical oscillators.
Caicedo-Casso, Angélica; Kang, Hye-Won; Lim, Sookkyung; Hong, Christian I
2015-01-01
Biological systems exhibit numerous oscillatory behaviors from calcium oscillations to circadian rhythms that recur daily. These autonomous oscillators contain complex feedbacks with nonlinear dynamics that enable spontaneous oscillations. The detailed nonlinear dynamics of such systems remains largely unknown. In this paper, we investigate robustness and dynamical differences of five minimal systems that may underlie fundamental molecular processes in biological oscillatory systems. Bifurcation analyses of these five models demonstrate an increase of oscillatory domains with a positive feedback mechanism that incorporates a reversible reaction, and dramatic changes in dynamics with small modifications in the wiring. Furthermore, our parameter sensitivity analysis and stochastic simulations reveal different rankings of hierarchy of period robustness that are determined by the number of sensitive parameters or network topology. In addition, systems with autocatalytic positive feedback loop are shown to be more robust than those with positive feedback via inhibitory degradation regardless of noise type. We demonstrate that robustness has to be comprehensively assessed with both parameter sensitivity analysis and stochastic simulations. PMID:26267886
Sensitivity and Robustness Analysis for Stochastic Model of Nanog Gene Regulatory Network
NASA Astrophysics Data System (ADS)
Wu, Qianqian; Jiang, Feng; Tian, Tianhai
2015-06-01
The advances of systems biology have raised a large number of mathematical models for exploring the dynamic property of biological systems. A challenging issue in mathematical modeling is how to study the influence of parameter variation on system property. Robustness and sensitivity are two major measurements to describe the dynamic property of a system against the variation of model parameters. For stochastic models of discrete chemical reaction systems, although these two properties have been studied separately, no work has been done so far to investigate these two properties together. In this work, we propose an integrated framework to study these two properties for a biological system simultaneously. We also consider a stochastic model with intrinsic noise for the Nanog gene network based on a published model that studies extrinsic noise only. For the stochastic model of Nanog gene network, we identify key coefficients that have more influence on the network dynamics than the others through sensitivity analysis. In addition, robustness analysis suggests that the model parameters can be classified into four types regarding the bistability property of Nanog expression levels. Numerical results suggest that the proposed framework is an efficient approach to study the sensitivity and robustness properties of biological network models.
A meta-analysis on pain sensitivity in self-injury.
Koenig, J; Thayer, J F; Kaess, M
2016-06-01
Individuals engaging in self-injurious behavior (SIB) frequently report absence of pain during acts of SIB. While altered pain sensitivity is discussed as a risk factor for the engagement in SIB, results have been mixed with considerable variance across reported effect sizes, in particular with respect to the effect of co-morbid psychopathology. The present meta-analysis aimed to summarize the current evidence on pain sensitivity in individuals engaging in SIB and to identify covariates of altered pain processing. Three databases were searched without restrictions. Additionally a hand search was performed and reference lists of included studies were checked for potential studies eligible for inclusion. Thirty-two studies were identified after screening 720 abstracts by two independent reviewers. Studies were included if they reported (i) an empirical investigation, in (ii) humans, including a sample of individuals engaging in (iii) SIB and a group of (iv) healthy controls, (v) receiving painful stimulation. Random-effects meta-analysis was performed on three pain-related outcomes (pain threshold, pain tolerance, pain intensity) and several population- and study-level covariates (i.e. age, sex, clinical etiology) were subjected to meta-regression. Meta-analysis revealed significant main effects associated with medium to large effect sizes for all included outcomes. Individuals engaging in SIB show greater pain threshold and tolerance and report less pain intensity compared to healthy controls. Clinical etiology and age are significant covariates of pain sensitivity in individuals engaging in SIB, such that pain threshold is further increased in borderline personality disorder compared to non-suicidal self-injury. Mechanisms underlying altered pain sensitivity are discussed.
Petzold, L.R.; Rosen, J.B.
1997-12-30
Differential-algebraic equations arise in a wide variety of engineering and scientific problems. Relatively little work has been done regarding sensitivity analysis and model reduction for this class of problems. Efficient methods for sensitivity analysis are required in model development and as an intermediate step in design optimization of engineering processes. Reduced order models are needed for modelling complex physical phenomena like turbulent reacting flows, where it is not feasible to use a fully-detailed model. The objective of this work has been to develop numerical methods and software for sensitivity analysis and model reduction of nonlinear differential-algebraic systems, including large-scale systems. In collaboration with Peter Brown and Alan Hindmarsh of LLNL, the authors developed an algorithm for finding consistent initial conditions for several widely occurring classes of differential-algebraic equations (DAEs). The new algorithm is much more robust than the previous algorithm. It is also very easy to use, having been designed to require almost no information about the differential equation, Jacobian matrix, etc. in addition to what is already needed to take the subsequent time steps. The new algorithm has been implemented in a version of the software for solution of large-scale DAEs, DASPK, which has been made available on the internet. The new methods and software have been used to solve a Tokamak edge plasma problem at LLNL which could not be solved with the previous methods and software because of difficulties in finding consistent initial conditions. The capability of finding consistent initial values is also needed for the sensitivity and optimization efforts described in this paper.
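The consistent-initialization task can be illustrated for a semi-explicit index-1 DAE, where the algebraic variables must satisfy the constraint at t = 0 before time stepping can begin. The sketch below solves the constraint by Newton iteration with a finite-difference Jacobian; it is a generic illustration of the problem DASPK addresses, not the authors' algorithm (which requires essentially no user-supplied Jacobian information and handles broader DAE classes).

```python
import numpy as np

def consistent_ic(g, y0, z_guess, tol=1e-12, max_iter=50):
    """Consistent initial conditions for a semi-explicit index-1 DAE
        y' = f(t, y, z),  0 = g(y, z):
    given the initial differential variables y0, solve g(y0, z) = 0 for
    the algebraic variables z by Newton iteration with a
    finite-difference Jacobian (a generic sketch, not DASPK itself)."""
    z = np.atleast_1d(np.asarray(z_guess, dtype=float))
    for _ in range(max_iter):
        r = np.atleast_1d(g(y0, z))
        if np.linalg.norm(r) < tol:
            break
        # finite-difference Jacobian dg/dz, one column per variable
        J = np.empty((len(r), len(z)))
        for j in range(len(z)):
            dz = z.copy()
            step = 1e-7 * max(1.0, abs(z[j]))
            dz[j] += step
            J[:, j] = (np.atleast_1d(g(y0, dz)) - r) / step
        z = z - np.linalg.solve(J, r)
    return z

# toy constraint: z must satisfy z**2 - y0 = 0 (positive branch)
g = lambda y, z: z ** 2 - y
z0 = consistent_ic(g, y0=4.0, z_guess=np.array([1.0]))
```

Once z0 satisfies the constraint to tolerance, the subsequent time steps (and any sensitivity or optimization calculation built on them) start from a state that actually lies on the constraint manifold.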
Sensitivity analysis of water quality for Delhi stretch of the River Yamuna, India.
Parmar, D L; Keshari, Ashok K
2012-03-01
Simulation models are used to aid decision makers about water pollution control and management in river systems. However, uncertainty in model parameters affects the model predictions and hence the pollution control decision. Therefore, it often is necessary to identify the model parameters that significantly affect the model output uncertainty prior to, or as a supplement to, model application to water pollution control and planning problems. In this study, sensitivity analysis, as a tool for uncertainty analysis, was carried out to assess the sensitivity of water quality to (a) model parameters and (b) pollution abatement measures such as wastewater treatment, waste discharge and flow augmentation from an upstream reservoir. In addition, a sensitivity analysis for the "best practical solution" was carried out to help decision makers choose an appropriate option. The Delhi stretch of the river Yamuna was considered as a case study, and the QUAL2E model was used for water quality simulation. The results obtained indicate that the parameters K(1) (deoxygenation constant) and K(3) (settling oxygen demand), which are the rate of biochemical decomposition of organic matter and the rate of BOD removal by settling, respectively, are the most sensitive parameters for the considered river stretch. Different combinations of variations in K(1) and K(2) also revealed similar results, giving a better understanding of the interdependence of K(1) and K(2). Also, among the pollution abatement methods, a change (perturbation) in the wastewater treatment level (primary, secondary, tertiary or advanced) has the greatest effect on the uncertainty of the simulated dissolved oxygen and biochemical oxygen demand concentrations. PMID:21544505
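The BOD-DO coupling that makes the deoxygenation and reaeration constants interdependent can be illustrated with the classical Streeter-Phelps deficit equation, which is the core of QUAL2E's oxygen balance. The sketch below perturbs one rate constant at a time, mirroring the study's perturbation approach; the parameter values are illustrative stand-ins, not those of the Yamuna stretch.

```python
import numpy as np

def do_deficit(t, L0=20.0, D0=2.0, K1=0.35, K2=0.6):
    """Streeter-Phelps dissolved-oxygen deficit downstream of a BOD load
    (L0: initial BOD, D0: initial deficit, K1: deoxygenation rate,
    K2: reaeration rate; all values here are illustrative)."""
    return (K1 * L0 / (K2 - K1)) * (np.exp(-K1 * t) - np.exp(-K2 * t)) \
        + D0 * np.exp(-K2 * t)

def oat_sensitivity(param, delta=0.1, t=2.0):
    """One-at-a-time sensitivity: relative change in the deficit per
    relative change in a single rate constant (central perturbation)."""
    base = dict(L0=20.0, D0=2.0, K1=0.35, K2=0.6)
    hi, lo = dict(base), dict(base)
    hi[param] *= 1 + delta
    lo[param] *= 1 - delta
    d = do_deficit(t, **base)
    return (do_deficit(t, **hi) - do_deficit(t, **lo)) / (2 * delta * d)
```

A positive index for K1 and a negative index for K2 reproduce the expected physics: faster deoxygenation deepens the oxygen sag, while faster reaeration relieves it.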
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
NASA Astrophysics Data System (ADS)
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
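The core DELSA idea, derivative-based indices evaluated at many points across the parameter space, can be sketched as follows. The two-parameter model and the uniform sampling are hypothetical stand-ins, and the published method additionally weights squared gradients by prior parameter variances; this is a simplified illustration, not the authors' code.

```python
import numpy as np

def model(p):
    # toy stand-in for a nonlinear reservoir model: storage * (1 - exp(-k))
    k, s = p
    return s * (1.0 - np.exp(-k))

def delsa_like_indices(model, samples, h=1e-6):
    """At each sampled parameter set, estimate first-order local
    sensitivity as each parameter's share of the linearized output
    variance (squared finite-difference gradient, normalized).
    Simplified sketch: DELSA proper also weights each squared gradient
    by a prior parameter variance."""
    out = []
    for p in samples:
        base = model(p)
        grad = np.empty(len(p))
        for j in range(len(p)):
            dp = np.array(p, dtype=float)
            dp[j] += h
            grad[j] = (model(dp) - base) / h
        contrib = grad ** 2
        out.append(contrib / contrib.sum())
    return np.array(out)

rng = np.random.default_rng(0)
samples = rng.uniform(0.1, 2.0, size=(100, 2))
S = delsa_like_indices(model, samples)   # one row of indices per sample
```

Because an index is produced at every sample point, one can see how parameter importance varies across the parameter space, which is exactly the insight the abstract contrasts with a single global Sobol' index.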
NASA Astrophysics Data System (ADS)
James, S. C.; Makino, H.
2004-12-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
James, Scott Carlton
2004-08-01
Given pre-existing Groundwater Modeling System (GMS) models of the Horonobe Underground Research Laboratory (URL) at both the regional and site scales, this work performs an example uncertainty analysis for performance assessment (PA) applications. After a general overview of uncertainty and sensitivity analysis techniques, the existing GMS site-scale model is converted to a PA model of the steady-state conditions expected after URL closure. This is done to examine the impact of uncertainty in site-specific data in conjunction with conceptual model uncertainty regarding the location of the Oomagari Fault. In addition, a quantitative analysis of the ratio of dispersive to advective forces, the F-ratio, is performed for stochastic realizations of each conceptual model. All analyses indicate that accurate characterization of the Oomagari Fault with respect to both location and hydraulic conductivity is critical to PA calculations. This work defines and outlines typical uncertainty and sensitivity analysis procedures and demonstrates them with example PA calculations relevant to the Horonobe URL.
Sorption of redox-sensitive elements: critical analysis
Strickert, R.G.
1980-12-01
The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.
Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W; Loizou, George D
2015-01-01
A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to evaluate quantitatively thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels were identified in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy that provides richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
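The Morris screening step mentioned above ranks parameters by elementary effects collected along randomized one-at-a-time trajectories. A generic sketch of the method follows, with a toy three-parameter function standing in for the thyroidal-system model; this illustrates the screening idea, not the authors' PBPK workflow.

```python
import numpy as np

def morris_elementary_effects(f, k, r=20, levels=4, seed=0):
    """Morris screening on the unit hypercube: for each of k parameters,
    collect elementary effects (EEs) along r randomized one-at-a-time
    trajectories and report mu* (mean |EE|) and sigma (std of EEs).
    Large mu* flags an influential parameter; large sigma flags
    nonlinearity or interaction."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))        # standard Morris step
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # random grid start in the lower half so x + delta stays in [0, 1]
        x = rng.integers(0, levels // 2, size=k) / (levels - 1)
        fx = f(x)
        for j in rng.permutation(k):
            x_new = x.copy()
            x_new[j] = x[j] + delta
            fx_new = f(x_new)
            effects[j].append((fx_new - fx) / delta)
            x, fx = x_new, fx_new
    mu_star = np.array([np.mean(np.abs(e)) for e in effects])
    sigma = np.array([np.std(e) for e in effects])
    return mu_star, sigma

# toy model: strong linear effect of x0, moderate nonlinear x1, weak x2
f = lambda x: 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]
mu_star, sigma = morris_elementary_effects(f, k=3)
```

Because each trajectory reuses the previous function value, screening k parameters costs only r*(k+1) model runs, which is why Morris is the usual first pass before an expensive emulator-based analysis.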
Shape sensitivity analysis of wing static aeroelastic characteristics
NASA Technical Reports Server (NTRS)
Barthelemy, Jean-Francois M.; Bergen, Fred D.
1988-01-01
A method is presented to calculate analytically the sensitivity derivatives of wing static aeroelastic characteristics with respect to wing shape parameters. The wing aerodynamic response under fixed total load is predicted with Weissinger's L-method; its structural response is obtained with Giles' equivalent plate method. The characteristics of interest include the spanwise distribution of lift, trim angle of attack, rolling and pitching moments, induced drag, as well as the divergence dynamic pressure. The shape parameters considered are the wing area, aspect ratio, taper ratio, sweep angle, and tip twist angle. Results of sensitivity studies indicate that: (1) approximations based on analytical sensitivity derivatives can be used over wide ranges of variations of the shape parameters considered, and (2) the analytical calculation of sensitivity derivatives is significantly less expensive than the conventional finite-difference alternative.
Computational aspects of sensitivity calculations in linear transient structural analysis
NASA Technical Reports Server (NTRS)
Greene, W. H.; Haftka, R. T.
1991-01-01
The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear transient structural response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
Decoupled direct method for sensitivity analysis in combustion kinetics
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1987-01-01
An efficient, decoupled direct method for calculating the first order sensitivity coefficients of homogeneous, batch combustion kinetic rate equations is presented. In this method the ordinary differential equations for the sensitivity coefficients are solved separately from, but sequentially with, those describing the combustion chemistry. The ordinary differential equations for the thermochemical variables are solved using an efficient, implicit method (LSODE) that automatically selects the steplength and order for each solution step. The solution procedure for the sensitivity coefficients maintains accuracy and stability by using exactly the same steplengths and numerical approximations. The method computes sensitivity coefficients with respect to any combination of the initial values of the thermochemical variables and the three rate constant parameters for the chemical reactions. The method is illustrated by application to several simple problems and, where possible, comparisons are made with exact solutions and those obtained by other techniques.
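The decoupling idea, advancing the state first and then the sensitivities with exactly the same step size, can be shown on a scalar toy problem dy/dt = -k*y, whose sensitivity s = dy/dk obeys ds/dt = -k*s - y. This sketch uses a fixed-step RK4 in place of LSODE's variable-step implicit machinery, so it illustrates only the sequencing, not the production solver.

```python
def decoupled_direct(k, y0, t_end, n_steps):
    """Decoupled direct method sketch for dy/dt = -k*y with sensitivity
    s = dy/dk obeying ds/dt = -k*s - y.  Each step first advances the
    state, then advances the sensitivity with the SAME step size,
    mirroring the decoupled-but-sequential structure described above."""
    h = t_end / n_steps
    y, s = y0, 0.0
    for _ in range(n_steps):
        # state step: RK4 on dy/dt = -k*y
        f = lambda yy: -k * yy
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y_new = y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        # sensitivity step with the same h, reusing the state trajectory
        g = lambda ss, yy: -k * ss - yy
        y_mid = 0.5 * (y + y_new)     # interpolated state at the half step
        l1 = g(s, y)
        l2 = g(s + 0.5 * h * l1, y_mid)
        l3 = g(s + 0.5 * h * l2, y_mid)
        l4 = g(s + h * l3, y_new)
        s += h / 6.0 * (l1 + 2 * l2 + 2 * l3 + l4)
        y = y_new
    return y, s

# exact solution: y = y0*exp(-k*t), s = dy/dk = -t*y0*exp(-k*t)
y, s = decoupled_direct(k=2.0, y0=1.0, t_end=1.0, n_steps=200)
```

The sensitivity equation is linear in s, which is what makes the decoupled sweep cheap: the state solve carries all the nonlinearity, and the sensitivity solve reuses its step sizes and stored values.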
Phase sensitive signal analysis for bi-tapered optical fibers
NASA Astrophysics Data System (ADS)
Ben Harush Negari, Amit; Jauregui, Daniel; Sierra Hernandez, Juan M.; Garcia Mina, Diego; King, Branden J.; Idehenre, Ighodalo; Powers, Peter E.; Hansen, Karolyn M.; Haus, Joseph W.
2016-03-01
Our study examines the transmission characteristics of bi-tapered optical fibers, i.e. fibers that have a tapered down and up span with a waist length separating them. The applications to aqueous and vapor phase biomolecular sensing demand high sensitivity. A bi-tapered optical fiber platform is suited for label-free biomolecular detection and can be optimized by modification of the length, diameter and surface properties of the tapered region. We have developed a phase sensitive method based on interference of two or more modes of the fiber and we demonstrate that our fiber sensitivity is of order 10^-4 refractive index units. Higher sensitivity can be achieved, as needed, by enhancing the fiber design characteristics.
NASA Astrophysics Data System (ADS)
Liu, I.-Ping; Chen, Liang-Yih; Lee, Yuh-Lang
2016-09-01
Sodium acetate (NaAc) is utilized as an additive in cationic precursors of the successive ionic layer adsorption and reaction (SILAR) process to fabricate CdS quantum-dot (QD)-sensitized photoelectrodes. The effects of the NaAc concentration on the deposition rate and distribution of QDs in mesoporous TiO2 films, as well as on the performance of CdS-sensitized solar cells are studied. The experimental results show that the presence of NaAc can significantly accelerate the deposition of CdS, improve the QD distribution across photoelectrodes, and thereby, increase the performance of solar cells. These results are mainly attributed to the pH-elevation effect of NaAc to the cationic precursors which increases the electrostatic interaction of the TiO2 film to cadmium ions. The light-to-energy conversion efficiency of the CdS-sensitized solar cell increases with increasing concentration of the NaAc and approaches a maximum value (3.11%) at 0.05 M NaAc. Additionally, an ionic exchange is carried out on the photoelectrode to transform the deposited CdS into CdS1-xSex ternary QDs. The light-absorption range of the photoelectrode is extended and an exceptional power conversion efficiency of 4.51% is achieved due to this treatment.
Sensitivity analysis of static resistance of slender beam under bending
NASA Astrophysics Data System (ADS)
Valeš, Jan
2016-06-01
The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was computed by the geometrically nonlinear finite element method in the program Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections were, together with the geometrical characteristics of the cross section and the material characteristics of steel, considered as random quantities. The Latin Hypercube Sampling method was applied to carry out the statistical and sensitivity analyses of the resistance.
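Latin Hypercube Sampling itself is compact to implement: each input's range is split into n equal-probability strata, one draw is taken per stratum, and the strata are shuffled independently per dimension. A generic sketch follows; the two bounds below (a stiffness and an imperfection amplitude) are hypothetical stand-ins for the beam's random inputs, not values from the paper.

```python
import numpy as np

def latin_hypercube(n, bounds, seed=0):
    """n-point Latin Hypercube sample: each dimension's range is cut
    into n equal-probability strata, one draw is taken per stratum,
    and the strata are shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    sample = np.empty((n, len(bounds)))
    for j, (lo, hi) in enumerate(bounds):
        strata = (rng.permutation(n) + rng.uniform(size=n)) / n
        sample[:, j] = lo + strata * (hi - lo)
    return sample

# hypothetical inputs: Young's modulus [Pa] and imperfection amplitude [m]
x = latin_hypercube(100, [(200e9, 220e9), (0.002, 0.004)])
```

The stratification guarantees that even a modest number of nonlinear FE runs covers each input's whole range, which is why LHS is the usual choice when every sample is an expensive Ansys solve.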
Sensitivity analysis of the age-structured malaria transmission model
NASA Astrophysics Data System (ADS)
Addawe, Joel M.; Lope, Jose Ernie C.
2012-09-01
We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two: preschool humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito. For the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportion of infectious pre-school humans and the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shortens the time in acquiring immunity can be successful in preventing the spread of malaria.
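Sensitivity indices of the reproductive number are conventionally the normalized forward indices (p/R0)*(dR0/dp). The sketch below estimates them by central differences on a toy Ross-Macdonald-style R0; the paper's age-structured model and baseline parameter values are not reproduced here, and all numbers are illustrative.

```python
import numpy as np

def sensitivity_index(R0, params, name, h=1e-6):
    """Normalized forward sensitivity index (p/R0) * dR0/dp for one
    parameter, via a central finite difference on a relative
    perturbation of size h."""
    base = R0(params)
    up, dn = dict(params), dict(params)
    up[name] *= 1 + h
    dn[name] *= 1 - h
    return (R0(up) - R0(dn)) / (2 * h * base)

# toy Ross-Macdonald-style R0; 'a' is the mosquito biting rate
R0 = lambda p: (p["m"] * p["a"] ** 2 * p["b"] * p["c"]
                * np.exp(-p["mu"] * p["n"])) / (p["mu"] * p["r"])
params = {"m": 10.0, "a": 0.3, "b": 0.5, "c": 0.5,
          "mu": 0.1, "n": 10.0, "r": 0.05}
```

For this form the index with respect to the biting rate a is exactly 2 (R0 scales as a squared), which is consistent with the abstract's finding that the biting rate dominates the reproductive number's sensitivity.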
Sensitivity analysis of structural parameters to measurement noninvariance: A Bayesian approach
NASA Astrophysics Data System (ADS)
Kang, Yoon Jeong
Most previous studies have argued that the validity of group comparisons of structural parameters is dependent on the extent to which measurement invariance is met. Although some researchers have supported the concept of partial invariance, there is still no clear-cut partial invariance level which is needed to make valid group comparisons. In addition, relatively little attention has been paid to the implications of failing measurement invariance (e.g., partial measurement invariance) on group comparison on the underlying latent constructs in the multiple-group confirmatory factor analysis (MGCFA) framework. Given this, the purpose of the current study was to examine the extent to which measurement noninvariance affects structural parameter comparisons across populations in the MGCFA framework. Particularly, this study takes a Bayesian approach to investigate the sensitivity of the posterior distribution of structural parameter difference to varying types and magnitudes of noninvariance across two populations. A Monte Carlo simulation was performed to empirically investigate the sensitivity of structural parameters to varying types and magnitudes of noninvariant measurement models across two populations from a Bayesian approach. In order to assess the sensitivity of noninvariance conditions, three outcome variables were evaluated: (1) accuracy of statistical conclusion on structural parameter difference, (2) precision of the estimated structural parameter difference, and (3) bias in the posterior mean of structural parameter difference. Inconsistent with findings of previous studies, the results of this study showed that the three outcome variables were not sensitive to varying types and magnitudes of noninvariance across all conditions. Instead, the three outcome variables were sensitive to sample size, factor loading size, and prior distribution. These results indicate that even under a large magnitude of measurement noninvariance, accurate conclusions and
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
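The variance-based approach described above can be illustrated with a minimal sketch. This is a toy, not the CICE emulator workflow: it uses plain pseudo-random sampling rather than Sobol' sequences, an invented two-parameter model, and a Saltelli-style estimator for first-order indices.

```python
import random

def sobol_first_order(model, n_params, n_samples=20000, seed=1):
    """Estimate first-order Sobol' indices for a model with independent
    Uniform(0,1) inputs, using the Saltelli-style pick-and-freeze estimator."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n_samples)]
    fA = [model(x) for x in A]
    fB = [model(x) for x in B]
    mean = sum(fA) / n_samples
    var = sum((y - mean) ** 2 for y in fA) / n_samples
    indices = []
    for i in range(n_params):
        # A_B^i: sample matrix A with column i replaced by column i of B
        fABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(fb * (fab - fa)
                  for fb, fab, fa in zip(fB, fABi, fA)) / n_samples / var
        indices.append(s_i)
    return indices

# Toy model: output dominated by the first input, so S[0] should be near 1.
S = sobol_first_order(lambda x: x[0] + 0.1 * x[1], 2)
```

For this toy model the analytic first-order indices are 1/1.01 ≈ 0.99 and ≈ 0.01, which the estimate recovers up to Monte Carlo noise.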
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
Sensitivity analysis of aeroelastic response of a wing using piecewise pressure representation
NASA Astrophysics Data System (ADS)
Eldred, Lloyd B.; Kapania, Rakesh K.; Barthelemy, Jean-Francois M.
1993-04-01
A sensitivity analysis scheme for the static aeroelastic response of a wing is developed by incorporating a piecewise panel-based pressure representation into an existing wing aeroelastic model to improve the model's fidelity; the scheme includes the sensitivity of the wing static aeroelastic response with respect to various shape parameters. The new formulation is quite general and accepts any aerodynamics and structural analysis capability. A program is developed which combines the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives.
An analytical approach to grid sensitivity analysis for NACA four-digit wing sections
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1992-01-01
Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
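The comparison between analytic and finite-difference grid sensitivities can be illustrated with a one-parameter stretching function. The exponential stretching form below is a common choice but an assumption here, not necessarily the paper's algebraic grid generator:

```python
import math

def grid_point(s, c):
    """One-parameter stretched grid: maps s in [0, 1] to x in [0, 1],
    with clustering controlled by the stretching coefficient c."""
    return (math.exp(c * s) - 1.0) / (math.exp(c) - 1.0)

def grid_sensitivity_analytic(s, c):
    """d(grid_point)/dc by direct differentiation (quotient rule)."""
    u = math.exp(c * s) - 1.0
    v = math.exp(c) - 1.0
    du = s * math.exp(c * s)
    dv = math.exp(c)
    return (du * v - u * dv) / (v * v)

def grid_sensitivity_fd(s, c, h=1e-6):
    """Same derivative approximated by a central finite difference."""
    return (grid_point(s, c + h) - grid_point(s, c - h)) / (2.0 * h)
```

The two derivatives agree to roughly the truncation error of the central difference, which is the kind of check the paper performs on its analytical grid-sensitivity procedure.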
Sensitivity analysis of conservation targets in systematic conservation planning.
Levin, Noam; Mazor, Tessa; Brokovich, Eran; Jablon, Pierre-Elie; Kark, Salit
2015-10-01
flexibility in a conservation network is adequate when ~10-20% of the study area is considered irreplaceable (selection frequency values over 90%). This approach offers a useful sensitivity analysis when applying target-based systematic conservation planning tools, ensuring that the resulting protected area conservation network offers more choices for managers and decision makers. PMID:26591464
Thermal-Hydrological Sensitivity Analysis of Underground Coal Gasification
Buscheck, T A; Hao, Y; Morris, J P; Burton, E A
2009-10-05
Specifically, we conducted a parameter sensitivity analysis of the influence of thermal and hydrological properties of the host coal, caprock, and bedrock on cavity temperature and steam production.
Han, Haifeng; Wang, Qing; Liu, Xi; Jiang, Shengxiang
2012-05-01
A new capillary electrophoretic method for the rapid and direct separation of seven organic acids in beverages was developed, with poly(1-vinyl-3-butylimidazolium bromide) as a reliable background electrolyte modifier that strongly reversed the anodic electroosmotic flow (EOF). Several factors that affected the separation efficiency were investigated in detail. The optimal running buffer consisted of 125 mmol/L sodium dihydrogen phosphate (pH 6.5) and 0.01 g/L poly(1-vinyl-3-butylimidazolium bromide). Highly efficient separation (105,000 to 636,000 plates/m) was achieved within 4 min, and standard deviations of the migration times (n=3) were lower than 0.0213 min under optimal conditions. The limits of detection (S/N = 3) ranged from 0.001 to 0.05 g/L. The method was applied to a beverage sample (Mirinda), in which sodium citrate, benzoic acid and sorbic acid were determined at concentrations of 2.64, 0.10 and 0.08 g/L, respectively. The recoveries of the three analytes in the sample were 100.3%, 100.7% and 131.7%, respectively. The method is simple, rapid, inexpensive, and can be applied to determine organic acids as additives in beverages.
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques require substantial computational effort to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
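The variogram concept at the heart of VARS can be illustrated with a minimal one-dimensional sketch. This is a toy under stated assumptions, not the VARS algorithm itself: the sampling scheme, the linear test function f(x) = 3x, and the sample count are placeholders.

```python
import random

def directional_variogram(f, h, n=5000, lo=0.0, hi=1.0, seed=0):
    """Estimate the variogram gamma(h) = 0.5 * E[(f(x + h) - f(x))^2]
    of a 1-D model response at scale h over the interval [lo, hi]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.uniform(lo, hi - h)        # keep x + h inside the domain
        total += (f(x + h) - f(x)) ** 2
    return 0.5 * total / n

# A linear response f(x) = 3x has gamma(h) = 4.5 * h^2 at every scale h,
# so sensitivity grows quadratically with the perturbation scale.
gammas = {h: directional_variogram(lambda x: 3.0 * x, h) for h in (0.05, 0.1, 0.2)}
```

Evaluating gamma across several scales h, as above, is the sense in which a variogram-based method characterizes sensitivity "across a range of scales" rather than at a single step size.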
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and/or sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) interpretation with CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
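The composite scaled sensitivity (CSS) statistic referred to above is conventionally defined (following Hill and Tiedeman, assumed here) as the root-mean-square of the weighted, parameter-scaled Jacobian column. A minimal sketch with a hypothetical 2-observation, 2-parameter Jacobian:

```python
import math

def composite_scaled_sensitivity(jacobian, params, weights):
    """CSS_j = sqrt( (1/n) * sum_i ( J_ij * b_j * sqrt(w_i) )^2 ),
    where J_ij = d(y_i)/d(b_j), b_j is the parameter value, and w_i
    is the observation weight. Larger CSS_j means observation set y
    carries more information about parameter b_j."""
    n = len(jacobian)
    css = []
    for j, b in enumerate(params):
        total = sum((row[j] * b * math.sqrt(w)) ** 2
                    for row, w in zip(jacobian, weights))
        css.append(math.sqrt(total / n))
    return css

# Hypothetical example: observation 2 is twice as sensitive to parameter 2
# as observation 1 is to parameter 1, so CSS ranks parameter 2 higher.
css = composite_scaled_sensitivity([[1.0, 0.0], [0.0, 2.0]], [1.0, 1.0], [1.0, 1.0])
```

CSS alone says nothing about parameter interdependence, which is why the abstract pairs it with parameter correlation coefficients (PCC) computed from the same Jacobian.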
Fractal Analysis of Stress Sensitivity of Permeability in Porous Media
NASA Astrophysics Data System (ADS)
Tan, Xiao-Hua; Li, Xiao-Ping; Liu, Jian-Yi; Zhang, Lie-Hui; Cai, Jianchao
2015-12-01
A permeability model for porous media considering the stress sensitivity is derived based on mechanics of materials and the fractal characteristics of solid cluster size distribution. The permeability of porous media considering the stress sensitivity is related to solid cluster fractal dimension, solid cluster fractal tortuosity dimension, solid cluster minimum diameter and solid cluster maximum diameter, Young's modulus, Poisson's ratio, as well as power index. Every parameter has clear physical meaning without the use of empirical constants. The model predictions of permeability show good agreement with those obtained by the available experimental expression. The proposed model may be conducive to a better understanding of the mechanism for flow in elastic porous media.
NASA Astrophysics Data System (ADS)
Wang, Xiao-Lin; Wu, Meng; Ding, Jie; Li, Ze-Sheng; Sun, Ke-Ning
2014-01-01
Five typical additives, N-butylbenzimidazole (NBB), N-methylbenzimidazole (NMBI), 3-methoxypropionitrile (MPN), 4-tert-butylpyridine (TBP) and guanidinium thiocyanate (GNCS), are selected to investigate their diverse interactions with TiO2 anatase (101), (100) and (001) surfaces in vacuum and acetonitrile conditions, respectively, by means of analyses of adsorption mode and electronic structure based on a periodic density functional theory method. The five additives adsorb more strongly in the order (101) < (100) < (001). The defects that appear in the uppermost TiO2 (001) surface, induced by additive adsorption, affect bonding greatly. GNCS possesses the maximum adsorption energy due to its special multidentate and dissociative adsorption modes, while MPN has the minimum adsorption energy, no matter which surface is used. The positive Fermi energy shift (i.e. negative potential shift) is in the order (100) < (001) < (101) for every additive adsorption. A larger shift results in a higher open-circuit photovoltage of dye-sensitized solar cells. Acetonitrile addition reduces the adsorption energy but improves the shift trend of the Fermi energy, except for the TBP-TiO2 (100) and (001) systems. There should be a critical point of adsorption density for MPN and TBP adsorption on the TiO2 (100) and (001) surfaces, changing the Fermi energy shift from negative to positive values.
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and if so cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M
2006-09-01
Protection zones of shallow unconfined aquifers in The Netherlands were calculated that allow protection against virus contamination to the level that the infection risk of 10(-4) per person per year is not exceeded with a 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years travel time (206-418 m) are needed (6 to 12 times the currently applied travel time of 60 days). This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 x 10(8) m3 y(-1) of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, like more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are grain size of the sand, abstraction rate of groundwater, virus concentrations in raw sewage and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.
Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS
NASA Technical Reports Server (NTRS)
McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2015-01-01
The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.
Large-scale transient sensitivity analysis of a radiation damaged bipolar junction transistor.
Hoekstra, Robert John; Gay, David M.; Bartlett, Roscoe Ainsworth; Phipps, Eric Todd
2007-11-01
Automatic differentiation (AD) is useful in transient sensitivity analysis of a computational simulation of a bipolar junction transistor subject to radiation damage. We used forward-mode AD, implemented in a new Trilinos package called Sacado, to compute analytic derivatives for implicit time integration and forward sensitivity analysis. Sacado addresses element-based simulation codes written in C++ and works well with forward sensitivity analysis as implemented in the Trilinos time-integration package Rythmos. The forward sensitivity calculation is significantly more efficient and robust than finite differencing.
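Forward-mode AD of the kind Sacado implements propagates derivative values alongside function values via operator overloading. A minimal Python sketch of the idea, using a two-operation dual-number class rather than Sacado's C++ expression-template machinery:

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x      # f'(x) = 2x + 3

x = Dual(2.0, 1.0)            # seed the derivative: dx/dx = 1
y = f(x)                      # y.val = f(2) = 10.0, y.der = f'(2) = 7.0
```

The derivative is exact to machine precision, which is why the abstract reports forward sensitivities computed this way as more robust than finite differencing.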
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy that combines an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze surface-enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction preprocessing with principal component analysis (PCA) and DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information in differences of relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
A Sensitivity Analysis of a Thin Film Conductivity Estimation Method
McMasters, Robert L; Dinwiddie, Ralph Barton
2010-01-01
An analysis method was developed for determining the thermal conductivity of a thin film on a substrate of known thermal properties using the flash diffusivity method. In order to determine the thermal conductivity of the film using this method, the volumetric heat capacity of the film must be known, as determined in a separate experiment. Additionally, the thermal properties of the substrate must be known, including conductivity and volumetric heat capacity. The ideal conditions for the experiment are a low conductivity film adhered to a higher conductivity substrate. As the film becomes thinner with respect to the substrate, or as the conductivity of the film approaches that of the substrate, the estimation of thermal conductivity of the film becomes more difficult. The present research examines the effect of inaccuracies in the known parameters on the estimation of the parameter of interest, the thermal conductivity of the film. As such, perturbations are introduced into the other parameters in the experiment, which are assumed to be known, to find the effect on the estimated thermal conductivity of the film. A baseline case is established with the following parameters: substrate thermal conductivity 1.0 W/m-K; substrate volumetric heat capacity 10^6 J/m^3-K; substrate thickness 0.8 mm; film thickness 0.2 mm; film volumetric heat capacity 10^6 J/m^3-K; film thermal conductivity 0.01 W/m-K; convection coefficient 20 W/m^2-K; magnitude of heat absorbed during the flash 1000 J/m^2. Each of these parameters, with the exception of film thermal conductivity, the parameter of interest, is varied from its baseline value, in succession, and placed into a synthetic experimental data file. Each of these data files is individually analyzed by the program to determine the effect on the estimated film conductivity, thus quantifying the vulnerability of the method to measurement errors.
Stimulation of terrestrial ecosystem carbon storage by nitrogen addition: a meta-analysis
NASA Astrophysics Data System (ADS)
Yue, Kai; Peng, Yan; Peng, Changhui; Yang, Wanqin; Peng, Xin; Wu, Fuzhong
2016-01-01
Elevated nitrogen (N) deposition alters the terrestrial carbon (C) cycle, which is likely to feed back to further climate change. However, how the overall terrestrial ecosystem C pools and fluxes respond to N addition remains unclear. By synthesizing data from multiple terrestrial ecosystems, we quantified the response of C pools and fluxes to experimental N addition using a comprehensive meta-analysis method. Our results showed that N addition significantly stimulated soil total C storage by 5.82% ([2.47%, 9.27%], 95% CI, the same below) and increased the C contents of the above- and below-ground parts of plants by 25.65% [11.07%, 42.12%] and 15.93% [6.80%, 25.85%], respectively. Furthermore, N addition significantly increased aboveground net primary production by 52.38% [40.58%, 65.19%] and litterfall by 14.67% [9.24%, 20.38%] at a global scale. However, the C influx from the plant litter to the soil through litter decomposition and the efflux from the soil due to microbial respiration and soil respiration showed insignificant responses to N addition. Overall, our meta-analysis suggested that N addition will increase soil C storage and plant C in both above- and below-ground parts, indicating that terrestrial ecosystems might strengthen as a C sink under increasing N deposition.
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
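For intuition, the MPP and the associated safety measures have a closed form when the limit state is linear in standard normal space, the textbook special case where FORM is exact. A sketch (the limit-state coefficients below are arbitrary illustrative values):

```python
import math

def form_linear(a, b):
    """First-order reliability for a linear limit state g(u) = a.u + b in
    standard normal space: the MPP is the point on g = 0 closest to the
    origin, beta = |b| / ||a|| is that minimum distance, and the failure
    probability is Pf = Phi(-beta)."""
    norm_a = math.sqrt(sum(ai * ai for ai in a))
    beta = abs(b) / norm_a
    mpp = [-b * ai / (norm_a ** 2) for ai in a]   # closest point on g(u) = 0
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # standard normal CDF at -beta
    return beta, mpp, pf

# Example limit state g(u1, u2) = u1 + u2 + 3: beta = 3/sqrt(2), Pf ~ 1.7%.
beta, mpp, pf = form_linear([1.0, 1.0], 3.0)
```

For nonlinear limit states the MPP must be found by the optimization search the abstract describes, and the reliability sensitivity equations follow from differentiating that optimal solution with respect to design variables.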
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
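Two of the listed procedures, correlation analysis and rank transformation, can be sketched directly. The input/output sample below is invented; it shows how rank-transforming a monotone but nonlinear input-output relation raises the sample correlation to 1:

```python
def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def ranks(xs):
    """Replace each value by its rank (1-based; assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank + 1)
    return r

def spearman(xs, ys):
    # Rank transformation turns any monotone relation into a linear one
    # before correlating, which is its appeal in sampling-based SA.
    return pearson(ranks(xs), ranks(ys))

xs = [0.1, 0.4, 0.9, 1.6, 2.5, 3.6]   # invented sampled input
ys = [x ** 3 for x in xs]             # monotone but strongly nonlinear output
```

Here the raw Pearson coefficient understates the (perfect, monotone) dependence, while the rank correlation is exactly 1, the kind of contrast the survey's rank-transformation discussion is about.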
Aerodynamic design optimization with sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
An investigation was conducted from October 1, 1990 to May 31, 1994 on the development of methodologies to improve the designs (more specifically, the shape) of aerodynamic surfaces by coupling optimization algorithms (OA) with Computational Fluid Dynamics (CFD) algorithms via sensitivity analyses (SA). The study produced several promising methodologies and their proof-of-concept cases, which have been reported in the open literature.
Sensitivity Analysis for Hierarchical Models Employing "t" Level-1 Assumptions.
ERIC Educational Resources Information Center
Seltzer, Michael; Novak, John; Choi, Kilchan; Lim, Nelson
2002-01-01
Examines the ways in which level-1 outliers can impact the estimation of fixed effects and random effects in hierarchical models (HMs). Also outlines and illustrates the use of Markov Chain Monte Carlo algorithms for conducting sensitivity analyses under "t" level-1 assumptions, including algorithms for settings in which the degrees of freedom at…
Intelligence and Interpersonal Sensitivity: A Meta-Analysis
ERIC Educational Resources Information Center
Murphy, Nora A.; Hall, Judith A.
2011-01-01
A meta-analytic review investigated the association between general intelligence and interpersonal sensitivity. The review involved 38 independent samples with 2988 total participants. There was a highly significant small-to-medium effect for intelligence measures to be correlated with decoding accuracy (r = 0.19, p < 0.001). Significant…
Diagnosis of Middle Atmosphere Climate Sensitivity by the Climate Feedback Response Analysis Method
NASA Technical Reports Server (NTRS)
Zhu, Xun; Yee, Jeng-Hwa; Cai, Ming; Swartz, William H.; Coy, Lawrence; Aquila, Valentina; Talaat, Elsayed R.
2014-01-01
We present a new method to diagnose the middle atmosphere climate sensitivity by extending the Climate Feedback-Response Analysis Method (CFRAM) for the coupled atmosphere-surface system to the middle atmosphere. The Middle atmosphere CFRAM (MCFRAM) is built on the atmospheric energy equation per unit mass with radiative heating and cooling rates as its major thermal energy sources. MCFRAM preserves the unique CFRAM feature of an additive property, whereby the sum of all partial temperature changes due to variations in external forcing and feedback processes equals the observed temperature change. In addition, MCFRAM establishes a physical relationship of radiative damping between the energy perturbations associated with various feedback processes and the temperature perturbations associated with thermal responses. MCFRAM is applied to both measurements and model output fields to diagnose the middle atmosphere climate sensitivity. It is found that the largest component of the middle atmosphere temperature response to the 11-year solar cycle (solar maximum vs. solar minimum) comes directly from the partial temperature change due to the variation of the input solar flux. Increasing CO2 always cools the middle atmosphere with time, whereas the partial temperature change due to O3 variation can be either positive or negative. The partial temperature changes due to different feedbacks show distinctly different spatial patterns. The thermally driven, globally averaged partial temperature change due to all radiative processes is approximately equal to the observed temperature change of about 0.5 K near 70 km from near solar maximum to solar minimum.
On an Additive Semigraphoid Model for Statistical Networks With Application to Pathway Analysis
Li, Bing; Chun, Hyonho; Zhao, Hongyu
2014-01-01
We introduce a nonparametric method for estimating non-Gaussian graphical models based on a new statistical relation called additive conditional independence, which is a three-way relation among random vectors that resembles the logical structure of conditional independence. Additive conditional independence allows us to use one-dimensional kernels regardless of the dimension of the graph, which not only avoids the curse of dimensionality but also simplifies computation. It also gives rise to a parallel structure to the Gaussian graphical model that replaces the precision matrix by an additive precision operator. The estimators derived from additive conditional independence cover the recently introduced nonparanormal graphical model as a special case, but outperform it when the Gaussian copula assumption is violated. We compare the new method with existing ones by simulations and in genetic pathway analysis. PMID:26401064
Dźwiarek, Marek; Latała, Agata
2016-01-01
This article presents an analysis of results of 1035 serious and 341 minor accidents recorded by Poland's National Labour Inspectorate (PIP) in 2005–2011, in view of their prevention by means of additional safety measures applied by machinery users. Since the analysis aimed at formulating principles for the application of technical safety measures, the analysed accidents should bear additional attributes: the type of machine operation, technical safety measures and the type of events causing injuries. The analysis proved that the executed tasks and injury-causing events were closely connected and there was a relation between casualty events and technical safety measures. In the case of tasks consisting of manual feeding and collecting materials, the injuries usually occur because of the rotating motion of tools or crushing due to a closing motion. Numerous accidents also happened in the course of supporting actions, like removing pollutants, correcting material position, cleaning, etc. PMID:26652689
van Eijk, Ronald; van Puijenbroek, Marjo; Chhatta, Amiet R; Gupta, Nisha; Vossen, Rolf H A M; Lips, Esther H; Cleton-Jansen, Anne-Marie; Morreau, Hans; van Wezel, Tom
2010-01-01
Kirsten RAS (KRAS) is a small GTPase that plays a key role in Ras/mitogen-activated protein kinase signaling; somatic mutations in KRAS are frequently found in many cancers. The most common KRAS mutations result in a constitutively active protein. Accurate detection of KRAS mutations is pivotal to the molecular diagnosis of cancer and may guide proper treatment selection. Here, we describe a two-step KRAS mutation screening protocol that combines whole-genome amplification (WGA), high-resolution melting analysis (HRM) as a prescreen method for mutation carrying samples, and direct Sanger sequencing of DNA from formalin-fixed, paraffin-embedded (FFPE) tissue, from which limited amounts of DNA are available. We developed target-specific primers, thereby avoiding amplification of homologous KRAS sequences. The addition of herring sperm DNA facilitated WGA in DNA samples isolated from as few as 100 cells. KRAS mutation screening using high-resolution melting analysis on wgaDNA from formalin-fixed, paraffin-embedded tissue is highly sensitive and specific; additionally, this method is feasible for screening of clinical specimens, as illustrated by our analysis of pancreatic cancers. Furthermore, PCR on wgaDNA does not introduce genotypic changes, as opposed to unamplified genomic DNA. This method can, after validation, be applied to virtually any potentially mutated region in the genome.
Keissar, K; Maestri, R; Pinna, G D; La Rovere, M T; Gilad, O
2010-07-01
A novel approach for the estimation of baroreflex sensitivity (BRS) is introduced based on time-frequency analysis of the transfer function (TF). The TF method (TF-BRS) is a well-established non-invasive technique which assumes stationarity. This condition is difficult to meet, especially in cardiac patients. In this study, the classical TF was replaced with a wavelet transfer function (WTF) and the classical coherence was replaced with wavelet transform coherence (WTC), adding the time domain as an additional degree of freedom with dynamic error estimation. Error analysis and comparison between WTF-BRS and TF-BRS were performed using simulated signals with known transfer function and added noise. Similar comparisons were performed for ECG and blood pressure signals, in the supine position, of 19 normal subjects, 44 patients with a history of previous myocardial infarction (MI) and 45 patients with chronic heart failure. This yielded an excellent linear association (R > 0.94, p < 0.001) for time-averaged WTF-BRS, validating the new method as consistent with a known method. The additional advantage of dynamic analysis of coherence and TF estimates was illustrated in two physiological examples of supine rest and change of posture showing the evolution of BRS synchronized with its error estimations and sympathovagal balance. PMID:20585147
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Rea, Jennifer C; Freistadt, Benny S; McDonald, Daniel; Farnan, Dell; Wang, Yajun Jennifer
2015-12-11
Ion-exchange chromatography (IEC) is widely used for profiling the charge heterogeneity of proteins, including monoclonal antibodies (mAbs). Despite good resolving power and robustness, ionic strength-based ion-exchange separations are generally product specific and can be time consuming to develop. In addition, conventional analytical scale ion-exchange separations require tens of micrograms of mAbs for each injection, amounts that are often unavailable in sample-limited applications. We report the development of a capillary IEC (c-IEC) methodology for the analysis of nanogram amounts of mAb charge variants. Several key modifications were made to a commercially available liquid chromatography system to perform c-IEC for charge variant analysis of mAbs with nanogram sensitivity. We demonstrate the method for multiple monoclonal antibodies, including antibody fragments, on different columns from different manufacturers. Relative standard deviations of <10% were achieved for relative peak areas of main peak, acidic and basic regions, which are common regions of interest for quantifying monoclonal antibody charge variants using IEC. The results herein demonstrate the excellent sensitivity of this c-IEC characterization method, which can be used for analyzing charge variants in sample-limited applications, such as early-stage candidate screening and in vivo studies.
Sensitivity analysis of left ventricle with dilated cardiomyopathy in fluid structure simulation.
Chan, Bee Ting; Abu Osman, Noor Azuan; Lim, Einly; Chee, Kok Han; Abdul Aziz, Yang Faridah; Abed, Amr Al; Lovell, Nigel H; Dokos, Socrates
2013-01-01
Dilated cardiomyopathy (DCM) is the most common myocardial disease. It leads not only to systolic dysfunction but also to diastolic deficiency. We sought to investigate the effect of idiopathic and ischemic DCM on intraventricular fluid dynamics and myocardial wall mechanics using a 2D axisymmetric fluid-structure interaction model. In addition, we studied the individual effects of parameters related to DCM, i.e. peak E-wave velocity, end systolic volume, wall compliance and sphericity index, on several important fluid dynamics and myocardial wall mechanics variables during ventricular filling. Intraventricular fluid dynamics and myocardial wall deformation are significantly impaired under DCM conditions, as demonstrated by low vortex intensity, low flow propagation velocity, low intraventricular pressure difference (IVPD) and strain rates, and high end-diastolic pressure and wall stress. Our sensitivity analysis results showed that flow propagation velocity substantially decreases with an increase in wall stiffness, and is relatively independent of preload at low peak E-wave velocity. Early IVPD is mainly affected by the rate of change of the early filling velocity and by end systolic volume, which changes the ventriculo:annular ratio. Regional strain rate, on the other hand, is significantly correlated with regional stiffness, and therefore forms a useful indicator of myocardial regional ischemia. The sensitivity analysis results enhance our understanding of the mechanisms leading to clinically observable changes in patients with DCM. PMID:23825628
Long vs. short-term energy storage: sensitivity analysis.
Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)
2007-07-01
This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology--asymmetric lead-carbon capacitors--has also been added. Energy storage technologies are examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. Sensitivity analyses include cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is hours of storage available for discharge.
Sensitive glow discharge ion source for aerosol and gas analysis
Reilly, Peter T. A.
2007-08-14
A high sensitivity glow discharge ion source system for analyzing particles includes an aerodynamic lens having a plurality of constrictions for receiving an aerosol including at least one analyte particle in a carrier gas and focusing the analyte particles into a collimated particle beam. A separator separates the carrier gas from the analyte particle beam, wherein the analyte particle beam or vapors derived from the analyte particle beam are selectively transmitted out of the separator. A glow discharge ionization source includes a discharge chamber having an entrance orifice for receiving the analyte particle beam or analyte vapors, and a target electrode and discharge electrode therein. An electric field applied between the target electrode and discharge electrode generates an analyte ion stream from the analyte vapors, which is directed out of the discharge chamber through an exit orifice, such as to a mass spectrometer. High analyte sensitivity is obtained by pumping the discharge chamber exclusively through the exit orifice and the entrance orifice.
Probability density adjoint for sensitivity analysis of the Mean of Chaos
Blonigan, Patrick J.; Wang, Qiqi
2014-08-01
Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
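The failure mode the abstract describes can be seen in a few lines: perturbed chaotic trajectories diverge exponentially, yet the long-time average is insensitive to the perturbation, so differentiating trajectories does not give the sensitivity of the average. The logistic-map demo below is an assumed illustration of this breakdown, not the paper's density-adjoint method.

```python
# Two logistic-map trajectories in the chaotic regime, separated by 1e-9.
r = 3.9                       # chaotic parameter value
x, y = 0.3, 0.3 + 1e-9        # tiny initial perturbation
n = 100_000
sum_x = sum_y = 0.0
max_sep = 0.0
for _ in range(n):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    sum_x += x
    sum_y += y
    max_sep = max(max_sep, abs(x - y))
avg_x, avg_y = sum_x / n, sum_y / n
# The trajectories decorrelate completely (O(1) separation), yet the
# ergodic time averages agree closely -- the quantity adjoint methods
# should, but conventionally cannot, differentiate.
print(max_sep, abs(avg_x - avg_y))
```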
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
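The automatic-differentiation idea referenced above (tools like ADIFOR source-transform Fortran codes) can be illustrated with a minimal forward-mode sketch using dual numbers; the polynomial "response" is a hypothetical stand-in for an aerodynamic output, not one of the paper's codes.

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dp together,
    the arithmetic that source-transformation AD tools automate."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagates the derivative exactly (no truncation error).
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

# Hypothetical response R(p) = 3*p^2 + 2*p; exact dR/dp = 6*p + 2.
p = Dual(2.0, 1.0)        # seed the design parameter with derivative 1
R = 3 * p * p + 2 * p
print(R.val, R.dot)       # prints: 16.0 14.0
```

Unlike finite differences, the derivative comes out exact to machine precision, which is why AD is attractive for sensitivity derivatives of complex codes.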
Thermal analysis of microlens formation on a sensitized gelatin layer
Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko; Panic, Bratimir; Jelenkovic, Branislav
2009-07-01
We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.
Sensitivity analysis of heat flow through irradiated fur of calves
Gebremedhin, K.G.; Porter, W.P.
1983-01-01
Fractional factorial designs are used in conjunction with a fur heat transfer model to screen variables when only a subset of the variables is expected to have an important effect on heat transfer through irradiated fur, but that subset is unknown. Nine of the eleven variables tested have statistically significant effects on heat transfer through irradiated fur. The sensitivity of the variables is illustrated. 15 references, 4 figures, 3 tables.
Sensitivity analysis of eigenvalues for an electro-hydraulic servomechanism
NASA Astrophysics Data System (ADS)
Stoia-Djeska, M.; Safta, C. A.; Halanay, A.; Petrescu, C.
2012-11-01
Electro-hydraulic servomechanisms (EHSM) are important components of flight control systems; their role is to control the movement of the flying control surfaces in response to the movement of the cockpit controls. As flight-control components, EHSMs have a fast dynamic response, a high power-to-inertia ratio and high control accuracy. The paper is devoted to the study of the sensitivity of an electro-hydraulic servomechanism used to actuate an aircraft aileron. The mathematical model of the EHSM used in this paper includes a large number of parameters whose actual values may vary within some ranges of uncertainty. It consists of a nonlinear ordinary differential equation system composed of the mass and energy conservation equations, the actuator movement equations and the controller equation. In this work the focus is on the sensitivities of the eigenvalues of the linearized homogeneous system, which are the partial derivatives of the eigenvalues of the state-space system with respect to the parameters. These are obtained using a modal approach based on the eigenvectors of the direct and adjoint state-space systems. To calculate the eigenvalues and their sensitivities, the system's Jacobian and its partial derivatives with respect to the parameters are determined. The calculation of the derivative of the Jacobian matrix with respect to the parameters is not a simple task, and in many situations it must be done numerically. The system stability is studied in relation to three parameters: m, the equivalent inertial load of the primary control surface reduced to the actuator rod; B, the bulk modulus of the oil; and p, a pressure-supply proportionality coefficient. All the sensitivities calculated in this work are in good agreement with those obtained through recalculation.
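For a diagonalizable state matrix A(p), the modal approach described above reduces to the classical formula dλi/dp = yiᴴ (∂A/∂p) xi / (yiᴴ xi), with right eigenvectors xi and (adjoint) left eigenvectors yi. The 2×2 matrix below is a hypothetical example, not the EHSM Jacobian; its eigenvalue derivatives have a closed form for cross-checking.

```python
import numpy as np

def eig_sensitivities(A, dA_dp):
    """Eigenvalue derivatives d(lambda_i)/dp via left/right eigenvectors:
    dlam_i = y_i^H (dA/dp) x_i / (y_i^H x_i). Assumes A is diagonalizable."""
    lam, X = np.linalg.eig(A)
    Yh = np.linalg.inv(X)   # rows of inv(X) are the left eigenvectors y_i^H
    dlam = np.array([Yh[i] @ dA_dp @ X[:, i] / (Yh[i] @ X[:, i])
                     for i in range(len(lam))])
    return lam, dlam

# Hypothetical symmetric system matrix depending on a parameter p.
p = 1.0
A = np.array([[2.0, p], [p, 1.0]])
dA_dp = np.array([[0.0, 1.0], [1.0, 0.0]])   # elementwise dA/dp
lam, dlam = eig_sensitivities(A, dA_dp)
# Analytic eigenvalues are (3 +/- sqrt(1+4p^2))/2, so the exact
# derivatives are +/- 2p/sqrt(1+4p^2); the modal formula reproduces them.
print(lam, dlam)
```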
Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening
Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.
2014-12-01
The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV 5 mA light ions superconducting RF linac. Phase-I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5 - 4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor to reach higher ion energy and beam power is related to the HWR sensitivity to the liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.
Computational aspects of sensitivity calculations in transient structural analysis
NASA Technical Reports Server (NTRS)
Greene, William H.; Haftka, Raphael T.
1989-01-01
A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
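The first two techniques compared in the paper are plain forward and central differencing of the response with respect to a design parameter. Their differing truncation errors, O(h) versus O(h²), are easy to see on a smooth toy response (a stand-in for the transient structural response, not the paper's beam problem):

```python
import math

def forward_diff(f, x, h):
    # One-sided difference: O(h) truncation error.
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # Symmetric difference: O(h^2) truncation error.
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Toy "response quantity": f(x) = sin(x), whose exact sensitivity is cos(x).
x, h = 1.0, 1e-3
exact = math.cos(x)
err_fwd = abs(forward_diff(math.sin, x, h) - exact)
err_cen = abs(central_diff(math.sin, x, h) - exact)
print(err_fwd, err_cen)   # central is markedly more accurate at the same h
```

Shrinking h too far trades truncation error for the condition (round-off) error the paper also analyzes, which is why step-size choice matters for both operators.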
Pediatric bed fall computer simulation model: parametric sensitivity analysis.
Thompson, Angela; Bertocci, Gina
2014-01-01
Falls from beds and other household furniture are common scenarios that may result in injury and may also be falsely reported to conceal child abuse. Knowledge of the biomechanics associated with short-distance falls may aid clinicians in distinguishing between abusive and accidental injuries. In this study, a validated bed fall computer simulation model of an anthropomorphic test device representing a 12-month-old child was used to investigate the effect of altering fall environment parameters (fall height, impact surface stiffness, initial force used to initiate the fall) and child surrogate parameters (overall mass, head stiffness, neck stiffness, stiffness for other body segments) on fall dynamics and outcomes related to injury potential. The sensitivity of head and neck injury outcome measures to model parameters was determined. Parameters associated with the greatest sensitivity values (fall height, initiating force, and surrogate mass) altered fall dynamics and impact orientation. This suggests that fall dynamics and impact orientation play a key role in head and neck injury potential. With the exception of surrogate mass, injury outcome measures tended to be more sensitive to changes in environmental parameters (bed height, impact surface stiffness, initiating force) than surrogate parameters (head stiffness, neck stiffness, body segment stiffness).
Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1997-01-01
A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed that dynamically changes the surface of the airplane configuration as the input design variables change. The software is made user friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using ADIFOR, an automatic-differentiation precompiler software tool. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.
Molecular-beacon-based array for sensitive DNA analysis.
Yao, Gang; Tan, Weihong
2004-08-15
Molecular beacon (MB) DNA probes provide a new way for sensitive label-free DNA/protein detection in homogeneous solution and biosensor development. However, a relatively low fluorescence enhancement after the hybridization of the surface-immobilized MB hinders its effective biotechnological applications. We have designed new molecular beacon probes to enable a larger separation between the surface and the surface-bound MBs. Using these MB probes, we have developed a DNA array on avidin-coated cover slips and have improved analytical sensitivity. A home-built wide-field optical setup was used for imaging the array. Our results show that linker length, pH, and ionic strength have obvious effects on the performance of the surface-bound MBs. The fluorescence enhancement of the new MBs after hybridization has been increased from 2 to 5.5. The MB-based DNA array could be used for DNA detection with high sensitivity, enabling simultaneous multiple-target bioanalysis in a variety of biotechnological applications.
NASA Astrophysics Data System (ADS)
Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael
2016-05-01
Increasing pressures on water quality due to intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers initiated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance-based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16 year study period. As observed time series of groundwater contributions to streamflow are not available at catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performances. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivities to the internal pathways in the models are not reflected in the performance criteria results. This demonstrates that simulated groundwater contribution should be constrained by independent data to ensure results
Dutra, Rosilene L; Cantos, Geny A; Carasek, Eduardo
2006-01-01
The quantification of target analytes in complex matrices requires special calibration approaches to compensate for additional capacity or activity in the matrix samples. Standard addition is one of the most important calibration procedures for quantification of analytes in such matrices. However, this technique requires a large amount of reagents and material, and it consumes considerable time throughout the analysis. In this work, a new calibration procedure to analyze biological samples is proposed. The proposed calibration, called the addition calibration technique, was used for the determination of zinc (Zn) in blood serum and erythrocyte samples. The results obtained were compared with those obtained using conventional calibration techniques (standard addition and standard calibration). The proposed addition calibration was validated by recovery tests using blood samples spiked with Zn. The recovery ranges for blood serum and erythrocyte samples were 90-132% and 76-112%, respectively. Statistical studies comparing results obtained by the addition technique and conventional techniques, using a paired two-tailed Student's t-test and linear regression, demonstrated good agreement. PMID:16943611
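The standard-addition procedure the paper compares against works by spiking the sample with known analyte amounts, fitting signal versus added concentration, and extrapolating to zero signal; the unknown concentration is the magnitude of the x-intercept. The numbers below are invented for illustration, not from the paper's Zn data.

```python
# Spiked Zn concentrations (arbitrary units) and the instrument response
# for each spiked aliquot of the same sample (hypothetical values).
added = [0.0, 1.0, 2.0, 3.0]
signal = [0.50, 0.75, 1.00, 1.25]

# Ordinary least-squares fit of signal vs added concentration.
n = len(added)
mx = sum(added) / n
my = sum(signal) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(added, signal))
         / sum((x - mx) ** 2 for x in added))
intercept = my - slope * mx

# Extrapolate to signal = 0: unknown concentration = |x-intercept|.
c_unknown = intercept / slope
print(c_unknown)   # prints 2.0
```

Because the calibration line is built in the sample's own matrix, matrix effects scale the slope and the intercept together and cancel in the ratio.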
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
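The Sobol' method adapted above for forcing errors estimates first-order indices by "pick-freeze" sampling: correlate model outputs from two sample matrices that share only the column of interest. The sketch below applies the estimator to a toy two-"forcing" linear model with known analytic indices (0.8 and 0.2), not to the Utah Energy Balance model.

```python
import random

random.seed(42)

def sobol_first_order(model, n_inputs, n_samples=20000):
    """First-order Sobol' indices via the pick-freeze (Saltelli-style) estimator
    for independent Uniform(0,1) inputs."""
    A = [[random.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    B = [[random.random() for _ in range(n_inputs)] for _ in range(n_samples)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n_samples
    var = sum((y - mean) ** 2 for y in yA) / n_samples
    S = []
    for i in range(n_inputs):
        # Evaluate on B with column i "frozen" from A: the shared column
        # isolates the variance contributed by input i alone.
        yABi = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yab for ya, yab in zip(yA, yABi)) / n_samples - mean ** 2
        S.append(cov / var)
    return S

# Toy "forcings": input 0 (think precipitation bias) dominates input 1.
# For Y = X0 + 0.5*X1 the exact indices are 1/1.25 = 0.8 and 0.25/1.25 = 0.2.
S = sobol_first_order(lambda x: x[0] + 0.5 * x[1], 2)
print(S)
```

With coexisting errors in all forcings, the same machinery ranks which error source drives the output variance, which is the role it plays in the study.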
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of a US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed. PMID:16519411
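The screening idea can be illustrated on a generic monotone test function (not the SHEDS testbed): Spearman rank correlations cheaply flag the inert input, which the more expensive variance-based methods could then skip:

```python
import numpy as np

rng = np.random.default_rng(1)

def spearman(x, y):
    # Spearman rank correlation: Pearson correlation of the ranks.
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

# Toy exposure model: linear in x1, nonlinear-but-monotone in x2, inert x3.
n = 5000
x1, x2, x3 = rng.uniform(0, 1, (3, n))
y = 2.0 * x1 + np.exp(x2) + 0.0 * x3 + rng.normal(0, 0.1, n)

rhos = [spearman(x, y) for x in (x1, x2, x3)]
```

The inert input's rank correlation is statistically indistinguishable from zero, so it can be dropped before a refined (Sobol/FAST) analysis.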
Llorent-Martínez, E J; Ortega-Barrales, P; Molina-Díaz, A; Ruiz-Medina, A
2008-12-01
Orbifloxacin (ORBI) is a third-generation fluoroquinolone developed exclusively for use in veterinary medicine, mainly in companion animals. This antimicrobial agent has bactericidal activity against numerous gram-negative and gram-positive bacteria. A few chromatographic methods for its analysis have been described in the scientific literature. Here, coupling of sequential-injection analysis and solid-phase spectroscopy is described in order to develop, for the first time, a terbium-sensitized luminescent optosensor for analysis of ORBI. The cationic resin Sephadex-CM C-25 was used as solid support and measurements were made at 275/545 nm. The system had a linear dynamic range of 10-150 ng mL(-1), with a detection limit of 3.3 ng mL(-1) and an R.S.D. below 3% (n = 10). The analyte was satisfactorily determined in veterinary drugs and dog and horse urine.
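Detection limits like the one reported follow the usual 3-sigma convention, LOD = 3*s_blank/slope. A sketch with synthetic calibration data over the reported 10-150 ng/mL range; the sensitivity and blank noise below are assumed, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic luminescence calibration over the 10-150 ng/mL linear range
# (illustrative numbers, not the published data).
conc = np.linspace(10, 150, 8)               # ng/mL
signal = 4.0 * conc + rng.normal(0, 5, 8)    # assumed sensitivity: 4 units per ng/mL

slope, intercept = np.polyfit(conc, signal, 1)  # least-squares line
s_blank = 5.0                                # assumed std. dev. of blank measurements
lod = 3 * s_blank / slope                    # 3-sigma detection limit, ng/mL
```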
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
PARCEQ2D heat transfer grid sensitivity analysis
Saladino, A.J.; Praharaj, S.C.; Collins, F.G. (Tennessee Univ., Tullahoma)
1991-01-01
The material presented in this paper is an extension of two-dimensional Aeroassist Flight Experiment (AFE) results shown previously. This study has focused on the heating rate calculations to the AFE obtained from an equilibrium real gas code, with attention placed on the sensitivity of grid dependence and wall temperature. Heat transfer results calculated by the PARCEQ2D code compare well with those computed by other researchers. Temperature convergence in the case of kinetic transport has been accomplished by increasing the wall temperature gradually from 300 K to the wall temperature of 1700 K.
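The gradual wall-temperature ramp described above is a continuation strategy: each solve is warm-started from the previous converged field. A minimal sketch on a deliberately simple nonlinear 1-D conduction problem (illustrative material law, not PARCEQ2D physics):

```python
import numpy as np

def solve_conduction(Tw_hot, T_init, n=51, iters=50):
    # Picard iteration for d/dx( k(T) dT/dx ) = 0 with a temperature-
    # dependent conductivity k(T) = (T/300)^0.8 (illustrative law),
    # T(0) = 300 K fixed, T(1) = Tw_hot.
    T = T_init.copy()
    for _ in range(iters):
        k = (T / 300.0) ** 0.8
        kf = 0.5 * (k[:-1] + k[1:])          # face conductivities
        A = np.zeros((n, n)); b = np.zeros(n)
        A[0, 0] = 1.0; b[0] = 300.0
        A[-1, -1] = 1.0; b[-1] = Tw_hot
        for i in range(1, n - 1):
            A[i, i - 1] = kf[i - 1]
            A[i, i] = -(kf[i - 1] + kf[i])
            A[i, i + 1] = kf[i]
        T = np.linalg.solve(A, b)
    return T

# Ramp the hot-wall temperature from 300 K to 1700 K in steps,
# warm-starting each solve from the previous converged field.
T = np.full(51, 300.0)
for Tw in np.linspace(300.0, 1700.0, 8):
    T = solve_conduction(Tw, T)
```

The final field honors both boundary values and is monotone, as the constant-flux balance requires.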
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
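ADIFOR generates derivative code for Fortran sources; the same forward-mode idea can be sketched in a few lines with dual numbers. The bar-displacement function below is an illustrative stand-in for a finite element response, not the paper's structural model:

```python
import math

class Dual:
    # Minimal forward-mode AD value: v + d*eps, carrying one derivative.
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v * o.v, self.v * o.d + self.d * o.v)
    __rmul__ = __mul__
    def __truediv__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.v / o.v, (self.d * o.v - self.v * o.d) / (o.v * o.v))
    def __rtruediv__(self, o):
        return Dual(o) / self

def tip_displacement(d):
    # Axial bar under end load: u = P*L/(E*A), with area A = pi*d^2/4.
    # d is the design variable (diameter); P, L, E are illustrative constants.
    P, L, E = 1.0e4, 2.0, 200.0e9
    A = math.pi / 4 * d * d
    return P * L / (E * A)

d = Dual(0.05, 1.0)        # seed derivative 1 on the design variable
u = tip_displacement(d)    # u.v is the displacement, u.d is du/dd
```

Since u ~ 1/d^2, the exact shape sensitivity is du/dd = -2u/d, which the dual-number sweep reproduces to machine precision.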
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives be calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
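As a minimal stand-in for the forward-over-reverse procedure, the sketch below differentiates an exact gradient (playing the role of the reverse sweep) by central differences (playing the role of the outer forward sweep) to assemble a full Hessian. The objective function is illustrative, not an aerodynamic coefficient:

```python
import numpy as np

def grad_f(x):
    # Exact gradient of f(x) = x0^2*x1 + sin(x1)
    # (stand-in for one reverse-mode sweep of a flow code).
    return np.array([2 * x[0] * x[1], x[0] ** 2 + np.cos(x[1])])

def hessian(grad, x, h=1e-6):
    # Outer differentiation of the gradient: one extra gradient pair per
    # design variable yields a full Hessian column.
    n = len(x)
    H = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        H[:, j] = (grad(x + e) - grad(x - e)) / (2 * h)
    return 0.5 * (H + H.T)   # symmetrize

x = np.array([1.2, 0.7])
H = hessian(grad_f, x)
```

The analytic Hessian is [[2*x1, 2*x0], [2*x0, -sin(x1)]], which the sketch matches to the finite-difference truncation error.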
Highly sensitive Raman system for dissolved gas analysis in water.
Yang, Dewang; Guo, Jinjia; Liu, Qingsheng; Luo, Zhao; Yan, Jingwen; Zheng, Ronger
2016-09-20
The detection of dissolved gases in seawater plays an important role in ocean observation and exploration. As a potential technique for oceanic applications, Raman spectroscopy has already proved its advantages in the simultaneous detection of multiple species during previous deep-sea explorations. Because of the low sensitivity of conventional Raman measurements, there have been many reports of Raman applications on direct seawater detection in high-concentration areas, but few on undersea dissolved gas detection. In this work, we present a highly sensitive Raman spectroscopy (HSRS) system with a specially designed gas chamber for extracting small amounts of gas under water. Systematic experiments were carried out for system evaluation, and the results show that the Raman signals obtained with the near-concentric cavity were about 21 times stronger than those of conventional side-scattering Raman measurements. Based on this system, we achieved limits of detection of 2.32 and 0.44 μmol/L for CO_{2} and CH_{4}, respectively, in the lab. A trial experiment was also carried out with a gas-liquid separator coupled to the Raman system, and signals of O_{2} and CO_{2} were detected after 1 h of degasification. This system shows potential for gas detection in water, and further work will address in situ detection.
Developing optical traps for ultra-sensitive analysis
Zhao, X.; Vieira, D.J.; Guckert, R. |; Crane, S.
1998-09-01
The authors describe the coupling of a magneto-optical trap to a mass separator for the ultra-sensitive detection of selected radioactive species. As a proof-of-principle test, they have demonstrated the trapping of ~6 million 82Rb (t1/2 = 75 s) atoms using an ion implantation and heated-foil release method for introducing the sample into a trapping cell with minimal gas loading. Gamma-ray counting techniques were used to determine the efficiencies of each step in the process. By far the weakest step in the process is the efficiency of the optical trap itself (0.3%). Further improvements in the quality of the nonstick dryfilm coating on the inside of the trapping cell and the possible use of larger diameter laser beams are indicated. In the presence of a large background of scattered light, this initial work achieved a detection sensitivity of ~4,000 trapped atoms. Improved detection schemes using a pulsed trap and gated photon detection method are outlined. Application of this technology to the areas of environmental monitoring and nuclear proliferation is foreseen.
Sensitivity analysis of vegetation-induced flow steering in channels
NASA Astrophysics Data System (ADS)
Bywater-Reyes, S.; Wilcox, A. C.; Lightbody, A.; Stella, J. C.
2014-12-01
Morphodynamic feedbacks result in alternating bars within channels, and the resulting convective accelerations dictate the cross-stream force balance of channels and in turn influence morphology. Pioneer woody riparian trees recruit on river bars and may steer flow and alter this force balance. This study uses two-dimensional hydraulic modeling to test the sensitivity of the flow field to riparian vegetation at the reach scale. We use two test systems with different width-to-depth ratios, substrate sizes, and vegetation structure: the gravel-bed Bitterroot River, MT and the sand-bed Santa Maria River, AZ. We model vegetation explicitly as a drag force by spatially specifying vegetation density, height, and drag coefficient, across varying hydraulic (e.g., discharge, eddy viscosity) conditions and compare velocity vectors between runs. We test variations in vegetation configurations, including the present-day configuration of vegetation in our field systems (extracted from LiDAR), removal of vegetation (e.g., from floods or management actions), and expansion of vegetation. Preliminary model runs suggest that the sensitivity of convective accelerations to vegetation reflects a balance between the extent and density of vegetation inundated and other sources of channel roughness. This research quantifies how vegetation alters hydraulics at the reach scale, a fundamental step to understanding vegetation-morphodynamic interactions.
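The drag-force treatment of vegetation can be illustrated with a reach-averaged momentum balance, in which vegetation adds a 0.5*Cd*a*U^2 sink alongside bed friction. The coefficients below are assumed for illustration, not calibrated to the Bitterroot or Santa Maria reaches:

```python
import math

def reach_velocity(S, h, cf=0.005, Cd=1.0, a=0.0):
    # Steady uniform balance: g*S = (cf/h)*U^2 + 0.5*Cd*a*U^2,
    # where a is vegetation frontal area per unit volume [1/m];
    # a = 0 recovers the bare channel. All coefficients illustrative.
    g = 9.81
    return math.sqrt(g * S / (cf / h + 0.5 * Cd * a))

u_bare = reach_velocity(S=0.001, h=1.0)          # unvegetated bar
u_veg = reach_velocity(S=0.001, h=1.0, a=0.05)   # vegetated bar
```

Even a modest frontal-area density roughly halves the reach-averaged velocity in this sketch, consistent with vegetation's ability to steer flow off the bar.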
Design sensitivity analysis for nonlinear magnetostatic problems by continuum approach
NASA Astrophysics Data System (ADS)
Park, Il-Han; Coulomb, J. L.; Hahn, Song-Yop
1992-11-01
Using the material-derivative concept of continuum mechanics and an adjoint variable method, the sensitivity formula for a two-dimensional nonlinear magnetostatic system is derived in a line-integral form along the shape-modification interface. The sensitivity coefficients are numerically evaluated from the solutions of the state and adjoint variables calculated by an existing standard finite element code. To verify the method, the pole-shape design problem of a quadrupole is presented.
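The adjoint-variable trick that makes such sensitivity formulas cheap can be shown on a generic discretized system: one extra (transposed) solve yields dJ/dp, verified against finite differences. The matrices below are random stand-ins for a finite element stiffness, not a magnetostatic discretization:

```python
import numpy as np

rng = np.random.default_rng(3)

# State problem K(p) u = f with K = K0 + p*K1; objective J = c^T u.
n = 6
M = rng.normal(0, 0.1, (n, n))
K0 = 4 * np.eye(n) + 0.5 * (M + M.T)   # symmetric, well-conditioned
K1 = np.eye(n)
f = rng.normal(0, 1, n)
c = rng.normal(0, 1, n)
p = 0.3

def J(p):
    return c @ np.linalg.solve(K0 + p * K1, f)

# Adjoint method: one extra linear solve gives dJ/dp,
# however many design parameters there are.
u = np.linalg.solve(K0 + p * K1, f)
lam = np.linalg.solve((K0 + p * K1).T, c)   # adjoint state
dJdp_adj = -lam @ (K1 @ u)                  # dJ/dp = -lambda^T (dK/dp) u

# Finite-difference check.
h = 1e-6
dJdp_fd = (J(p + h) - J(p - h)) / (2 * h)
```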
KODELI, IVAN-ALEXANDER
2008-05-22
latest versions available from NEA-DB). o The memory and data management was updated as well as the language level (code was rewritten from Fortran-77 to Fortran-95). SUSD3D is coupled to several discrete‑ordinates codes via binary interface files. SUSD3D can use the flux moment files produced by discrete ordinates codes: ANISN, DORT, TORT, ONEDANT, TWODANT, and THREEDANT. In some of these codes minor modifications are required. Variable dimensions used in the TORT‑DORT system are supported. In 3D analysis the geometry and material composition is taken directly from the TORT produced VARSCL binary file, reducing in this way the user's input to SUSD3D. Multigroup cross‑section sets are read in the GENDF format of the NJOY/GROUPR code system, and the covariance data are expected in the COVFIL format of NJOY/ERRORR or the COVERX format of PUFF‑2. The ZZ‑VITAMIN‑J/COVA cross section covariance matrix library can be used as an alternative to the NJOY code system. The package includes the ANGELO code to produce the covariance data in the required energy structure in the COVFIL format. The following cross section processing modules to be added to the NJOY‑94 code system are included in the package: o ERR34: an extension of the ERRORR module of the NJOY code system for the File‑34 processing. It is used to prepare multigroup SAD cross sections covariance matrices. o GROUPSR: An additional code module for the preparation of partial cross sections for SAD sensitivity analysis. Updated version of the same code from SUSD, extended to the ENDF‑6 format. o SEADR: An additional code module to prepare group covariance matrices for SAD/SED uncertainty analysis.
Quantitative and sensitive analysis of CN molecules using laser induced low pressure He plasma
NASA Astrophysics Data System (ADS)
Pardede, Marincan; Hedwig, Rinda; Abdulmadjid, Syahrun Nur; Lahna, Kurnia; Idris, Nasrullah; Jobiliong, Eric; Suyanto, Hery; Marpaung, Alion Mangasi; Suliyanti, Maria Margaretha; Ramli, Muliadi; Tjia, May On; Lie, Tjung Jie; Lie, Zener Sukra; Kurniawan, Davy Putra; Kurniawan, Koo Hendrik; Kagawa, Kiichiro
2015-03-01
We report the results of an experimental study on CN 388.3 nm and C I 247.8 nm emission characteristics using 40 mJ laser irradiation with He and N2 ambient gases. The results obtained with N2 ambient gas show an undesirable interference effect between the native CN emission and the emission of CN molecules arising from the recombination of native C ablated from the sample with the N dissociated from the ambient gas. This problem is overcome by the use of He ambient gas at a low pressure of 2 kPa, which also offers the additional advantages of cleaner and stronger emission lines. Applying this favorable experimental condition to emission spectrochemical measurement of milk samples having various protein concentrations is shown to yield a close-to-linear calibration curve with a near-zero extrapolated intercept. Additionally, a low detection limit of 5 μg/g is found in this experiment, making it potentially applicable for quantitative and sensitive CN analysis. The viability of laser-induced breakdown spectroscopy with low-pressure He gas is also demonstrated by applying it to spectrochemical analysis of fossil samples. Furthermore, with the use of CO2 ambient gas at 600 Pa mimicking the Mars atmosphere, the technique also shows promise for exploration on Mars.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational-methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solutions of the state and costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency (computer time and memory) compared with finite-difference sensitivity analysis.
Le Bahers, Tangui; Labat, Frédéric; Pauporté, Thierry; Ciofini, Ilaria
2010-11-28
We have investigated the role of electrolyte composition, in terms of solvent and additive, on the open-circuit voltage (V(oc)) of ZnO-based dye-sensitized solar cells (DSSCs) using a combined experimental and theoretical approach. Calculations based on density functional theory (DFT) have been performed in order to describe the geometries and adsorption energies of various adsorbed solvents (nitromethane, acetonitrile and dimethylformamide) and p-tert-butylpyridine (TBP) (modeled by methylpyridine) on the ZnO (100) surface using a periodic approach. The densities of states (DOS) have been calculated and the energy position of the conduction band edge (CBE) has been evaluated for the different molecules adsorbed. The effect of the electrolyte composition on the standard redox potential of the iodide/triiodide redox couple has been experimentally determined. These two data values (CBE and standard redox potential) allowed us to determine the dependence of V(oc) on the electrolyte composition. The variations determined using this method were in good agreement with the measured V(oc) for cells made of electrodeposited ZnO films sensitized using D149 (indoline) dye. As in the case of TiO(2)-based cells, a correlation of V(oc) with the donor number of the adsorbed species was found. The present study clearly points out that both the CBE energy and the redox potential variation are important for explaining the experimentally observed changes in the V(oc) of DSSCs. PMID:20949189
Yan, Zhiyong; Wang, Zonghua; Miao, Zhuang; Liu, Yang
2016-01-01
A novel visible-light photoelectrochemical (PEC) biosensor based on localized surface plasmon resonance (LSPR) enhancement and dye sensitization was fabricated for highly sensitive analysis of protein kinase activity with ultralow background. In this strategy, DNA-conjugated gold nanoparticles (DNA@AuNPs) were assembled on the phosphorylated kemptide-modified TiO2/ITO electrode through the chelation between Zr(4+) ions and phosphate groups, followed by the intercalation of [Ru(bpy)3](2+) into DNA grooves. The adsorbed [Ru(bpy)3](2+) can harvest visible light to produce excited electrons that inject into the TiO2 conduction band to form photocurrent under visible light irradiation. In addition, the photocurrent efficiency was further improved by the LSPR of AuNPs under the irradiation of visible light. Moreover, because of the excellent conductivity and large surface area of AuNPs, which facilitate electron transfer and accommodate a large number of [Ru(bpy)3](2+), the photocurrent was significantly amplified, affording an extremely sensitive PEC analysis of kinase activity with ultralow background signals. The detection limit of the proposed PEC biosensor was 0.005 U mL(-1) (S/N = 3). The biosensor also showed excellent performance for quantitative kinase inhibitor screening and for detection of PKA activities in MCF-7 cell lysates under forskolin and ellagic acid stimulation. The developed dye-sensitization and LSPR-enhancement visible-light PEC biosensor shows great potential in protein kinase-related clinical diagnosis and drug discovery. PMID:26648204
Inference of Climate Sensitivity from Analysis of Earth's Energy Budget
NASA Astrophysics Data System (ADS)
Forster, Piers M.
2016-06-01
Recent attempts to diagnose equilibrium climate sensitivity (ECS) from changes in Earth's energy budget point toward values at the low end of the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5)'s likely range (1.5–4.5 K). These studies employ observations but still require an element of modeling to infer ECS. Their diagnosed effective ECS over the historical period of around 2 K holds up to scrutiny, but there is tentative evidence that this underestimates the true ECS from a doubling of carbon dioxide. Different choices of energy imbalance data explain most of the difference between published best estimates, and effective radiative forcing dominates the overall uncertainty. For decadal analyses the largest source of uncertainty comes from a poor understanding of the relationship between ECS and decadal feedback. Considerable progress could be made by diagnosing effective radiative forcing in models.
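The energy-budget inference reduces to ECS = F_2x * dT / (dF - dN). A Monte Carlo sketch with assumed input distributions (illustrative values, not the AR5 or the paper's numbers) shows how forcing uncertainty dominates the spread:

```python
import numpy as np

rng = np.random.default_rng(4)

# Energy-budget estimator: ECS = F_2x * dT / (dF - dN).
# All input distributions below are assumed for illustration.
n = 200_000
F2x = rng.normal(3.7, 0.3, n)    # W/m^2, forcing from CO2 doubling
dT = rng.normal(0.85, 0.1, n)    # K, historical surface warming
dF = rng.normal(2.3, 0.5, n)     # W/m^2, historical effective forcing
dN = rng.normal(0.65, 0.2, n)    # W/m^2, top-of-atmosphere imbalance

ecs = F2x * dT / (dF - dN)
ecs = ecs[(ecs > 0) & (ecs < 20)]   # drop unphysical draws where dF ~ dN
best = np.median(ecs)
lo, hi = np.percentile(ecs, [5, 95])
```

With these inputs the median effective ECS lands near 2 K, at the low end of the assessed likely range, while the interval is strongly right-skewed because the denominator can approach zero.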
A sensitivity analysis on component reliability from fatigue life computations
NASA Astrophysics Data System (ADS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.; Rudalevige, Trevor
1992-02-01
Some uncertainties in determining high component reliability at a specified lifetime from a case study involving the fatigue life of a helicopter component are identified. Reliabilities are computed from results of a simulation process involving an assumed variability (standard deviation) of the load and strength in determining fatigue life. The uncertainties in the high reliability computation are then examined by introducing small changes in the variability for the given load and strength values in the study. Results showed that for a given component lifetime, a small increase in variability of load or strength produced large differences in the component reliability estimates. Among the factors involved in computing fatigue lifetimes, the component reliability estimates were found to be most sensitive to variability in loading. Component fatigue life probability density functions were obtained from the simulation process for various levels of variability. The range of life estimates were very large for relatively small variability in load and strength.
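The core finding, that high-reliability estimates are acutely sensitive to small changes in load variability, can be reproduced with a simple stress-strength Monte Carlo; all distributions and numbers below are illustrative, not the helicopter-component data:

```python
import numpy as np

rng = np.random.default_rng(5)

def reliability(load_cv, n=2_000_000):
    # P(strength > load) with normal load (mean 100, given coefficient of
    # variation) and normal strength (mean 160, CV fixed at 0.08).
    load = rng.normal(100.0, 100.0 * load_cv, n)
    strength = rng.normal(160.0, 160.0 * 0.08, n)
    return np.mean(strength > load)

r_low = reliability(0.08)    # baseline load variability
r_high = reliability(0.12)   # modestly increased load variability
```

A half-again increase in load CV multiplies the failure probability several-fold, illustrating why lifetime reliability estimates were most sensitive to loading variability.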
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Dryer, F.L.; Yetter, R.A.
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), nondispersive infrared (NDIR), and FTIR methods, which are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and for studying the effects of diffusive transport coupling on reaction behavior in flames. Modeling employs well-defined and validated mechanisms for the CO/H2/oxidant systems.
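Local kinetic sensitivity analysis of such mechanisms computes normalized coefficients (k_i/c_j)*dc_j/dk_i. A sketch for the textbook sequential mechanism A -> B -> C, whose intermediate concentration has a closed form (illustrative rate constants, not a combustion mechanism):

```python
import numpy as np

def B(t, k1, k2, A0=1.0):
    # Intermediate concentration in A -> B -> C (first-order steps).
    return A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

def norm_sens(t, k1, k2, which, h=1e-6):
    # Normalized local sensitivity S = (k/B) * dB/dk via central differences
    # on a relative perturbation of the chosen rate constant.
    if which == 1:
        d = (B(t, k1 * (1 + h), k2) - B(t, k1 * (1 - h), k2)) / (2 * h * k1)
        return k1 * d / B(t, k1, k2)
    d = (B(t, k1, k2 * (1 + h)) - B(t, k1, k2 * (1 - h))) / (2 * h * k2)
    return k2 * d / B(t, k1, k2)

s1 = norm_sens(0.5, 2.0, 1.0, which=1)   # formation step: positive sensitivity
s2 = norm_sens(0.5, 2.0, 1.0, which=2)   # consumption step: negative sensitivity
```

At early times the intermediate is formation-controlled, so s1 > 0 > s2, the kind of ranking that path and gradient sensitivity analyses extract from full mechanisms.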
Models for patients' recruitment in clinical trials and sensitivity analysis.
Mijoule, Guillaume; Savy, Stéphanie; Savy, Nicolas
2012-07-20
Taking a decision on the feasibility and estimating the duration of patients' recruitment in a clinical trial are very important but very hard questions to answer, mainly because of the huge variability of the system. The most elaborate works on this topic are those of Anisimov and co-authors, who model the enrolment period using Gamma-Poisson processes. This allows the development of statistical tools that can help the manager of the clinical trial answer these questions and thus plan the trial. The main idea is to consider an ongoing study at an intermediate time, denoted t(1). Data collected on [0,t(1)] are used to calibrate the parameters of the model, which are then used to make predictions on what will happen after t(1). This method allows us to estimate the probability of ending the trial on time and suggests possible corrective actions to the trial manager, especially regarding how many centres have to be opened to finish on time. In this paper, we investigate a Pareto-Poisson model, which we compare with the Gamma-Poisson one. We discuss the accuracy of the estimation of the parameters and compare the models on a set of real case data. We make the comparison on various criteria: the expected recruitment duration, the quality of the fit to the data, and its sensitivity to parameter errors. We discuss the influence of the centres' opening dates on the estimation of the duration. This is a very important question in the setting of our data set because these dates are not known; for this discussion, we consider a uniformly distributed approach. Finally, we study the sensitivity of the expected duration of the trial with respect to the parameters of the model: we calculate to what extent an error in the estimation of the parameters generates an error in the prediction of the duration.
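The interim-prediction idea described above can be sketched in a few lines. This is an illustrative simplification (not the authors' code): under the Gamma-Poisson model each centre i recruits with its own rate lambda_i; here we use only the pooled point estimate of the rate, which is the expected-value skeleton of the method.

```python
# Interim prediction of recruitment duration under a simple pooled
# Poisson model; the Gamma mixing of per-centre rates is omitted for
# brevity, so this is the expected-value sketch only.

def expected_remaining_duration(recruited, target, n_centres, t1):
    """Expected additional time to reach `target` patients, given that
    `recruited` patients were enrolled by `n_centres` centres in [0, t1]."""
    rate_per_centre = recruited / (n_centres * t1)  # pooled rate estimate
    total_rate = n_centres * rate_per_centre        # patients per unit time
    return (target - recruited) / total_rate

# Example: 120 patients recruited by 20 centres in 6 months,
# with an overall target of 400 patients.
t_rem = expected_remaining_duration(120, 400, 20, 6.0)
print(t_rem)  # 14.0 (months)
```

In the full Gamma-Poisson treatment the remaining duration has a distribution rather than a point value, which is what allows the probability of finishing on time to be computed.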
Zhang, Hong-Xuan; Goutsias, John
2011-03-21
Sensitivity analysis is a valuable task for assessing the effects of biological variability on cellular behavior. Available techniques require knowledge of nominal parameter values, which cannot be determined accurately due to experimental uncertainty typical to problems of systems biology. As a consequence, the practical use of existing sensitivity analysis techniques may be seriously hampered by the effects of unpredictable experimental variability. To address this problem, we propose here a probabilistic approach to sensitivity analysis of biochemical reaction systems that explicitly models experimental variability and effectively reduces the impact of this type of uncertainty on the results. The proposed approach employs a recently introduced variance-based method for sensitivity analysis of biochemical reaction systems [Zhang et al., J. Chem. Phys. 134, 094101 (2009)] and leads to a technique that can be effectively used to accommodate appreciable levels of experimental variability. We discuss three numerical techniques for evaluating the sensitivity indices associated with the new method, which include Monte Carlo estimation, derivative approximation, and dimensionality reduction based on orthonormal Hermite approximation. By employing a computational model of the epidermal growth factor receptor signaling pathway, we demonstrate that the proposed technique can greatly reduce the effect of experimental variability on variance-based sensitivity analysis results. We expect that, in cases of appreciable experimental variability, the new method can lead to substantial improvements over existing sensitivity analysis techniques.
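The Monte Carlo estimation route mentioned above is the most common way to compute variance-based sensitivity indices. The sketch below uses a generic "pick-and-freeze" Sobol-type estimator on a stand-in two-parameter model; it is not the EGFR pathway model or the authors' specific estimator.

```python
import random
import statistics

def model(x1, x2):
    return 2.0 * x1 + 0.2 * x2          # x1 should dominate the variance

def first_order_index(which, n=20000, rng=random.Random(0)):
    """Monte Carlo pick-and-freeze estimate of the first-order index S_which."""
    a = [(rng.random(), rng.random()) for _ in range(n)]
    b = [(rng.random(), rng.random()) for _ in range(n)]
    ya = [model(*p) for p in a]
    # freeze the coordinate of interest from sample A, resample the rest from B
    yab = [model(pa[0], pb[1]) if which == 0 else model(pb[0], pa[1])
           for pa, pb in zip(a, b)]
    var_y = statistics.pvariance(ya)
    mean_y = statistics.fmean(ya)
    cov = statistics.fmean([y1 * y2 for y1, y2 in zip(ya, yab)]) \
        - mean_y * statistics.fmean(yab)
    return cov / var_y                  # S_i = Cov(Y_A, Y_AB) / Var(Y)

s1, s2 = first_order_index(0), first_order_index(1)
print(s1, s2)   # analytically S1 = 0.990, S2 = 0.010 for this linear model
```

The paper's contribution sits on top of such estimators: it averages them over a distribution of nominal parameter values instead of trusting a single uncertain point estimate.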
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
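The analytic-versus-finite-difference comparison at the heart of this abstract can be illustrated on a scalar LQR problem (a toy stand-in, not the aeroservoelastic model): for the plant dx/dt = a*x + b*u with cost integral of q*x^2 + r*u^2, the optimal gain K = b*P/r follows from the scalar Riccati equation 2*a*P - (b*P)^2/r + q = 0, and dK/dq has a closed form that a finite difference can be checked against.

```python
import math

def lqr_gain(a, b, q, r):
    # positive root of the scalar continuous-time Riccati equation
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

def dK_dq_analytic(a, b, q, r):
    # differentiate K(q) in closed form
    return b / (2.0 * r * math.sqrt(a * a + b * b * q / r))

def dK_dq_fd(a, b, q, r, h=1e-6):
    # central finite difference: two extra "control law designs" per parameter
    return (lqr_gain(a, b, q + h, r) - lqr_gain(a, b, q - h, r)) / (2 * h)

a, b, q, r = 1.0, 2.0, 3.0, 0.5
print(dK_dq_analytic(a, b, q, r), dK_dq_fd(a, b, q, r))  # both 0.4
```

This also shows why the analytical route scales better: the finite-difference estimate re-solves the design problem for every perturbed parameter, while the analytic sensitivity reuses the nominal solution.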
Gu, Binghe; Meldrum, Brian; McCabe, Terry; Phillips, Scott
2012-01-01
A theoretical treatment was developed and validated that relates analyte concentration and mass sensitivities to injection volume, retention factor, particle diameter, column length, column inner diameter and detection wavelength in liquid chromatography, and sample volume and extracted volume in solid-phase extraction (SPE). The principles were applied to improve sensitivity for trace analysis of clopyralid in drinking water. It was demonstrated that a concentration limit of detection of 0.02 ppb (μg/L) for clopyralid could be achieved with the use of simple UV detection and 100 mL of a spiked drinking water sample. This enabled reliable quantitation of clopyralid at the targeted 0.1 ppb level. Using a buffered solution as the elution solvent (potassium acetate buffer, pH 4.5, containing 10% of methanol) in the SPE procedures was found superior to using 100% methanol, as it provided better extraction recovery (70-90%) and precision (5% for a concentration at 0.1 ppb level). In addition, the eluted sample was in a weaker solvent than the mobile phase, permitting the direct injection of the extracted sample, which enabled a faster cycle time of the overall analysis. Excluding the preparation of calibration standards, the analysis of a single sample, including acidification, extraction, elution and LC run, could be completed in 1 h. The method was used successfully for the determination of clopyralid in over 200 clopyralid monoethanolamine-fortified drinking water samples, which were treated with various water treatment resins.
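The arithmetic behind the 0.02 ppb detection limit is a straightforward enrichment calculation. The sketch below uses illustrative numbers for the instrument detection limit and elution volume (the abstract does not state them); only the 100 mL sample and 70-90% recovery come from the text.

```python
# Back-of-the-envelope SPE enrichment calculation; instrument LOD and
# elution volume are assumed values, not taken from the paper.

def spe_enrichment(v_sample_mL, v_elute_mL, recovery):
    """Concentration enrichment factor achieved by the extraction step."""
    return v_sample_mL * recovery / v_elute_mL

def method_lod(instrument_lod_ppb, enrichment):
    """Detection limit of the overall method after preconcentration."""
    return instrument_lod_ppb / enrichment

e = spe_enrichment(100.0, 2.0, 0.80)   # 100 mL sample -> 2 mL eluate, 80% recovery
print(e, method_lod(0.8, e))           # 40-fold enrichment, 0.02 ppb method LOD
```

With a 40-fold enrichment, a modest UV-detection limit of 0.8 ppb in the injected solution translates to the reported 0.02 ppb in the original water sample.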
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, fewer studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants.
Winter, C.L. . E-mail: lwinter@ucar.edu; Guadagnini, A.; Nychka, D.; Tartakovsky, D.M.
2006-09-01
A multivariate Analysis of Variance (ANOVA) is used to measure the relative sensitivity of groundwater flow to two factors that indicate different dimensions of aquifer heterogeneity. An aquifer is modeled as the union of disjoint volumes, or blocks, composed of different materials with different hydraulic conductivities. The factors are correlation between the hydraulic conductivities of the different materials and the contrast between mean conductivities in the different materials. The precise values of aquifer properties are usually uncertain because they are only sparsely sampled, yet are highly heterogeneous. Hence, the spatial distribution of blocks and the distribution of materials in blocks are uncertain and are modeled as stochastic processes. The ANOVA is performed on a large sample of Monte Carlo simulations of a simple model flow system composed of two materials distributed within three disjoint blocks. Our key finding is that simulated flow is much more sensitive to the contrast between mean conductivities of the blocks than it is to the intensity of correlation, although both factors are statistically significant. The methodology of the experiment - ANOVA performed on Monte Carlo simulations of a multi-material flow system - constitutes the basis of additional studies of more complicated interactions between factors that define flow and transport in aquifers with uncertain properties.
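The ANOVA-on-Monte-Carlo design can be shown with a minimal two-factor decomposition. The data below are synthetic stand-ins for simulated flow outputs, constructed so that factor A (conductivity contrast) dominates factor B (correlation), mirroring the paper's key finding.

```python
import statistics

data = {                       # (A level, B level) -> simulated responses
    ("low",  "weak"):   [1.0, 1.2, 0.9],
    ("low",  "strong"): [1.1, 1.3, 1.0],
    ("high", "weak"):   [3.0, 3.2, 2.9],
    ("high", "strong"): [3.1, 3.3, 3.0],
}

all_y = [y for ys in data.values() for y in ys]
grand = statistics.fmean(all_y)

def main_effect_ss(factor_index):
    """Between-levels sum of squares for one factor of the two-way layout."""
    levels = {}
    for key, ys in data.items():
        levels.setdefault(key[factor_index], []).extend(ys)
    return sum(len(ys) * (statistics.fmean(ys) - grand) ** 2
               for ys in levels.values())

ss_a, ss_b = main_effect_ss(0), main_effect_ss(1)
print(ss_a, ss_b)   # the contrast factor dominates: ss_a = 12.0, ss_b = 0.03
```

Dividing each sum of squares by its degrees of freedom and by the residual mean square yields the F statistics used to judge whether both factors are significant, as the paper reports.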
Review of seismic probabilistic risk assessment and the use of sensitivity analysis
Shiu, K.K.; Reed, J.W.; McCann, M.W. Jr.
1985-01-01
This paper presents results of sensitivity reviews performed to address a range of questions which arise in the context of seismic probabilistic risk assessment (PRA). A seismic PRA involves evaluation of seismic hazard, component fragilities, and system responses. They are combined in an integrated analysis to obtain various risk measures, such as the frequency of plant damage states. Calculation of these measures depends on the combination of non-linear functions based on a number of parameters and assumptions used in the quantification process. Therefore it is often difficult to examine seismic PRA results and derive useful insights from them if detailed sensitivity studies are absent. This has been exemplified in the process of trying to understand the role of low acceleration earthquakes in overall seismic risk. It is useful to understand, within a probabilistic framework, what uncertainties in the physical properties of the plant can be tolerated if the risk from a safe shutdown earthquake is to be considered negligible. Seismic event trees and fault trees were developed to model the different system and plant accident sequences. Hazard curves which represent various sites on the east coast were obtained; alternate structure and equipment fragility data were postulated. Various combinations of hazard and fragility data were analyzed. In addition, the system modeling was perturbed to examine the impact upon the final results. Orders-of-magnitude variations were observed in the plant damage state frequency among the different cases. 7 refs.
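The core quantification step being varied in these sensitivity reviews is the convolution of a hazard curve with a fragility curve. A generic numerical sketch (the hazard slope, median capacity, and lognormal dispersion below are invented for illustration, not the paper's data):

```python
import math

def hazard_exceedance(a):
    """Annual frequency of exceeding peak ground acceleration `a` (in g)."""
    return 1e-4 * (a / 0.1) ** -2.0        # assumed power-law hazard curve

def fragility(a, a_med=0.6, beta=0.4):
    """Lognormal conditional failure probability given acceleration `a`."""
    return 0.5 * (1.0 + math.erf(math.log(a / a_med) / (beta * math.sqrt(2.0))))

def damage_frequency(a_lo=0.05, a_hi=3.0, n=2000):
    """Midpoint-rule integration of P_fail(a) * |dH/da| over acceleration."""
    da = (a_hi - a_lo) / n
    total = 0.0
    for i in range(n):
        a = a_lo + (i + 0.5) * da
        dHda = (hazard_exceedance(a + 0.5 * da)
                - hazard_exceedance(a - 0.5 * da)) / da
        total += fragility(a) * (-dHda) * da
    return total

freq = damage_frequency()
print(freq)   # annual damage frequency, necessarily below H(a_lo)
```

Swapping in alternate hazard curves or fragility parameters, as the review did, amounts to re-running this integral; the strong nonlinearity of both curves is why the resulting damage frequencies can vary by orders of magnitude.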
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
1991-03-12
Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).
ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS
Tuffner, Francis K.; Singh, Ruchi
2011-08-09
Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is intended primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating, Ventilation, and Air Conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).
Analysis of redox additive-based overcharge protection for rechargeable lithium batteries
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.
1991-01-01
The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection, has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to numerical representation of the potential transients, and estimate of the influence of diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1-prime-dimethyl ferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
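The key design quantity in the finite linear diffusion model is the diffusion-limited shuttle current across the inter-electrode gap, i_lim = n*F*D*C/d: the maximum overcharge current density the redox additive can absorb at steady state. The parameter values below are plausible assumptions for illustration, not the paper's measured numbers for dimethyl ferrocene.

```python
# Diffusion-limited overcharge current for a redox shuttle additive.
# All inputs are illustrative assumptions.

F = 96485.0   # C/mol, Faraday constant

def limiting_current_density(n, D_cm2_s, C_mol_cm3, d_cm):
    """Maximum sustainable overcharge current density (A/cm^2),
    i_lim = n * F * D * C / d for steady-state linear diffusion."""
    return n * F * D_cm2_s * C_mol_cm3 / d_cm

# 1-electron shuttle, D = 1e-5 cm^2/s, 0.1 M (= 1e-4 mol/cm^3), 25 um gap
i = limiting_current_density(1, 1e-5, 1e-4, 25e-4)
print(i * 1000, "mA/cm^2")   # about 38.6 mA/cm^2
```

Charging above this current density would exhaust the shuttle at the electrode surface and let the cell voltage run away, which is exactly the "maximum permissible overcharge rate" the model is used to estimate.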
Sobuś, Jan; Kubicki, Jacek; Burdziński, Gotard; Ziółek, Marcin
2015-09-21
Comprehensive studies of all charge-separation processes in efficient carbazole dye-sensitized solar cells are correlated with their photovoltaic parameters. An important role of partial, fast electron recombination from the semiconductor nanoparticles to the oxidized dye is revealed; this takes place on the picosecond and sub-nanosecond timescales. The charge-transfer dynamics in cobalt tris(bipyridyl) based electrolytes and iodide-based electrolyte is observed to depend on potential-determining additives in a similar way. Upon addition of 0.5 M 4-tert-butylpyridine to both types of electrolytes, the stability of the cells is greatly improved; the cell photovoltage increases by 150-200 mV, the electron injection rate decreases about five times (from 5 to 1 ps(-1) ), and fast recombination slows down about two to three times. Dye regeneration proceeds at a rate of about 1 μs(-1) in all electrolytes. Electron recombination from titania to cobalt electrolytes is much faster than that to iodide ones.
Towards a controlled sensitivity analysis of model development decisions
NASA Astrophysics Data System (ADS)
Clark, Martyn; Nijssen, Bart
2016-04-01
The current generation of hydrologic models have followed a myriad of different development paths, making it difficult for the community to test underlying hypotheses and identify a clear path to model improvement. Model comparison studies have been undertaken to explore model differences, but these studies have not been able to meaningfully attribute inter-model differences in predictive ability to individual model components because there are often too many structural and implementation differences among the models considered. As a consequence, model comparison studies to date have provided limited insight into the causes of differences in model behavior, and model development has often relied on the inspiration and experience of individual modelers rather than a systematic analysis of model shortcomings. This presentation will discuss a unified approach to process-based hydrologic modeling to enable controlled and systematic analysis of multiple model representations (hypotheses) of hydrologic processes and scaling behavior. Our approach, which we term the Structure for Unifying Multiple Modeling Alternatives (SUMMA), formulates a general set of conservation equations, providing the flexibility to experiment with different spatial representations, different flux parameterizations, different model parameter values, and different time stepping schemes. We will discuss the use of SUMMA to systematically analyze different model development decisions, focusing on both analysis of simulations for intensively instrumented research watersheds as well as simulations across a global dataset of FLUXNET sites. The intent of the presentation is to demonstrate how the systematic analysis of model shortcomings can help identify model weaknesses and inform future model development priorities.
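The central idea of SUMMA (one set of conservation equations, interchangeable process parameterizations) can be caricatured in a few lines. This is our toy illustration of the design pattern, not SUMMA source code:

```python
# One conservation equation, pluggable flux parameterizations.

def flux_linear(storage, k=0.1):
    return k * storage                 # linear-reservoir outflow hypothesis

def flux_power(storage, k=0.05, n=1.5):
    return k * storage ** n            # nonlinear alternative hypothesis

def run_bucket(flux_fn, storage=5.0, precip=1.0, dt=1.0, steps=5):
    """Explicit-Euler water balance dS/dt = P - Q(S) with a pluggable Q."""
    for _ in range(steps):
        storage += dt * (precip - flux_fn(storage))
    return storage

results = {name: run_bucket(fn)
           for name, fn in {"linear": flux_linear, "power": flux_power}.items()}
print(results)
```

Because only the flux function changes between runs, any difference in simulated storage is attributable to that single modeling decision, which is precisely the controlled attribution that ad hoc inter-model comparisons cannot provide.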
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Analysis of the benefits of carbon credits to hydrogen addition to midsize gas turbine feedstocks.
Miller, J.; Towns, B.; Keller, Jay O.; Schefer, Robert W.; Skolnik, Edward G.
2006-02-01
The addition of hydrogen to the natural gas feedstocks of midsize (30-150 MW) gas turbines was analyzed as a method of reducing nitrogen oxides (NO{sub x}) and CO{sub 2} emissions. In particular, the costs of hydrogen addition were evaluated against the combined costs for other current NO{sub x} and CO{sub 2} emissions control technologies for both existing and new systems to determine its benefits and market feasibility. Markets for NO{sub x} emissions credits currently exist in California and the Northeast States and are expected to grow. Although regulations are not currently in place in the United States, several other countries have implemented carbon tax and carbon credit programs. The analysis thus assumes that the United States adopts future legislation similar to these programs. Therefore, potential sale of emissions credits for volunteer retrofits was also included in the study. It was found that hydrogen addition is a competitive alternative to traditional emissions abatement techniques under certain conditions. The existence of carbon credits shifts the system economics in favor of hydrogen addition.
Watanabe, Yoshiyuki; Kim, Hyun Soo; Castoro, Ryan J.; Chung, Woonbok; Estecio, Marcos R. H.; Kondo, Kimie; Guo, Yi; Ahmed, Saira S.; Toyota, Minoru; Itoh, Fumio; Suk, Ki Tae; Cho, Mee-Yon; Shen, Lanlan; Jelinek, Jaroslav; Issa, Jean-Pierre J.
2009-01-01
Background & Aims: Aberrant DNA methylation is an early and frequent process in gastric carcinogenesis and could be useful for detection of gastric neoplasia. We hypothesized that methylation analysis of DNA recovered from gastric washes could be used to detect gastric cancer. Methods: We studied 51 candidate genes in 7 gastric cancer cell lines and 24 samples (training set) and identified 6 for further studies. We examined the methylation status of these genes in a test set consisting of 131 gastric neoplasias at various stages. Finally, we validated the 6 candidate genes in a different population of 40 primary gastric cancer samples and 113 non-neoplastic gastric mucosa samples. Results: 6 genes (MINT25, RORA, GDNF, ADAM23, PRDM5, MLF1) showed frequent differential methylation between gastric cancer and normal mucosa in the training, test and validation sets. GDNF and MINT25 were the most sensitive molecular markers of early stage gastric cancer, while PRDM5 and MLF1 were markers of a field defect. There was a close correlation (r=0.5 to 0.9, p=0.03 to 0.001) between methylation levels in tumor biopsy and gastric washes. MINT25 methylation had the best sensitivity (90%), specificity (96%), and area under the ROC curve (0.961) in terms of tumor detection in gastric washes. Conclusions: These findings suggest MINT25 is a sensitive and specific marker for screening in gastric cancer. Additionally we have developed a new methodology for gastric cancer detection by DNA methylation in gastric washes. PMID:19375421
Sensitivity Analysis and Insights into Hydrological Processes and Uncertainty at Different Scales
NASA Astrophysics Data System (ADS)
Haghnegahdar, A.; Razavi, S.; Wheater, H. S.; Gupta, H. V.
2015-12-01
Sensitivity analysis (SA) is an essential tool for providing insight into model behavior, and conducting model calibration and uncertainty assessment. Numerous techniques have been used in environmental modelling studies for sensitivity analysis. However, it is often overlooked that the scale of a modelling study and the metric choice can significantly change the assessment of model sensitivity and uncertainty. In order to identify important hydrological processes across various scales, we conducted a multi-criteria sensitivity analysis using a novel and efficient technique, Variogram Analysis of Response Surfaces (VARS). The analysis was conducted using three different hydrological models, HydroGeoSphere (HGS), Soil and Water Assessment Tool (SWAT), and Modélisation Environnementale-Surface et Hydrologie (MESH). Models were applied at various scales ranging from small (hillslope) to large (watershed) scales. In each case, the sensitivity of simulated streamflow to model processes (represented through parameters) was measured using different metrics selected based on various hydrograph characteristics such as high flows, low flows, and volume. We demonstrate how the scale of the case study and the choice of sensitivity metric(s) can change our assessment of sensitivity and uncertainty. We present some guidelines to better align the metric choice with the objective and scale of a modelling study.
Design tradeoff studies and sensitivity analysis, appendix B
NASA Technical Reports Server (NTRS)
1979-01-01
Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. The hybrid is much less sensitive than a conventional vehicle, in terms of the reduction in total fuel consumption and resultant decreases in operating expense, to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road load conditions. It is concluded that modifications to package the propulsion system and battery pack can be easily accommodated within the confines of a modified carryover body such as the Ford LTD.
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
NASA Technical Reports Server (NTRS)
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields-of-view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15 micrometers region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior estimate component and developed a channel ranking system to optimize the channels and number of channels used. The channel ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method, using the new channel ranking system, on CO2 retrieval.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
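The property the paper exploits, that automatic differentiation yields derivatives exact to machine precision while finite differences carry truncation error, is easy to demonstrate with dual numbers. This is a minimal forward-mode AD sketch of ours; the actual work applied AD tools to Fortran flow solvers.

```python
import math

class Dual:
    """Forward-mode AD value: carries f(x) in `val` and f'(x) in `dot`."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)   # product rule
    __rmul__ = __mul__
    def sin(self):
        return Dual(math.sin(self.val), math.cos(self.val) * self.dot)

def f(x):   # f(x) = x^2 * sin(x)
    return (x * x * x.sin()) if isinstance(x, Dual) else x * x * math.sin(x)

x0 = 1.3
ad = f(Dual(x0, 1.0)).dot                        # derivative via AD
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6        # central finite difference
print(ad, fd)   # AD matches 2*x*sin(x) + x^2*cos(x) exactly; FD only approximately
```

The hybrid scheme in the paper goes further: it inserts such AD-generated derivative code into the flow solver's own incremental iterative loop, keeping the original solver's efficiency.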
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
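A parameter-sensitivity experiment of the kind surveyed here can be sketched on a logistic population model (our illustrative example, not the survey's specific model): perturb the growth rate and measure the normalized change in the final population.

```python
def simulate(r, K=100.0, x0=5.0, dt=0.01, t_end=5.0):
    """Euler integration of the logistic model dx/dt = r x (1 - x/K)."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * r * x * (1.0 - x / K)
    return x

def relative_sensitivity(r, dr=1e-4):
    """S = (% change in final population) / (% change in growth rate)."""
    base = simulate(r)
    return ((simulate(r + dr) - base) / base) / (dr / r)

s = relative_sensitivity(1.0)
print(s)   # positive: a faster growth rate raises the final population
```

Normalizing both the output change and the parameter change makes sensitivities comparable across parameters with different units, which is what makes such indices useful for allocating data-collection effort.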
The Effect of Data Scaling on Dual Prices and Sensitivity Analysis in Linear Programs
ERIC Educational Resources Information Center
Adlakha, V. G.; Vemuganti, R. R.
2007-01-01
In many practical situations scaling the data is necessary to solve linear programs. This note explores the relationships in translating the sensitivity analysis between the original and the scaled problems.
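The note's central relationship can be seen on the smallest possible LP. For max c*x subject to a*x <= b, x >= 0 (with c, a, b > 0), the optimum is x* = b/a and the dual price is y* = c/a; scaling the constraint row by k leaves the primal solution unchanged but divides the dual price by k. The code is a worked illustration, not the note's notation.

```python
def solve_1var_lp(c, a, b):
    """Optimum of max c*x s.t. a*x <= b, x >= 0 (c, a, b > 0)."""
    x = b / a      # the constraint binds at the optimum
    y = c / a      # dual price: objective gain per unit increase in b
    return x, y

c, a, b = 3.0, 2.0, 10.0
x1, y1 = solve_1var_lp(c, a, b)
k = 100.0                               # rescale the constraint row by k
x2, y2 = solve_1var_lp(c, k * a, k * b)
print(x1, x2)   # primal solution unchanged: 5.0 and 5.0
print(y1, y2)   # dual price divided by k: 1.5 vs 0.015
```

This is why shadow prices read off a scaled model (e.g., a constraint expressed in tonnes instead of kilograms) must be rescaled before being interpreted in the original units.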
Sensitivity Analysis of Parameters Affecting Protection of Water Resources at Hanford WA
DAVIS, J.D.
2002-02-08
The scope of this analysis was to assess the sensitivity of contaminant fluxes from the vadose zone to the water table with respect to several parameters, some of which can be controlled by operational considerations.
On 3-D modeling and automatic regridding in shape design sensitivity analysis
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Yao, Tse-Min
1987-01-01
The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian river watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the model's simulation results to the parameters. The results showed that, among the 11 terrain parameters, LS was sensitive to all the model outputs, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was most sensitive to runoff and sediment and relatively sensitive for the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all the outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's application in China.
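The perturbation method referred to here is commonly implemented as a one-at-a-time relative sensitivity index. The sketch below is generic (the stand-in runoff model and its parameter values are ours, not AnnAGNPS):

```python
def sensitivity_index(model, params, name, delta=0.10):
    """S = (% change in output) / (% change in parameter `name`)."""
    base = model(params)
    bumped = dict(params, **{name: params[name] * (1.0 + delta)})
    return ((model(bumped) - base) / base) / delta

# Stand-in model: output responds strongly to CN, weakly to RMN.
def runoff(p):
    return p["CN"] ** 2 * (1.0 + 0.01 * p["RMN"])

p = {"CN": 75.0, "RMN": 0.05}
s_cn = sensitivity_index(runoff, p, "CN")
s_rmn = sensitivity_index(runoff, p, "RMN")
print(s_cn, s_rmn)   # CN index near 2.1 (sensitive); RMN near 0.0005 (insensitive)
```

Ranking parameters by |S| is exactly how a study like this decides which of the 31 parameters deserve careful calibration and which can be fixed at default values.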
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters has a great significance for integrated model's construction and application. Based on AnnAGNPS model's mechanism, terrain, hydrology and meteorology, field management, soil and other four major categories of 31 parameters were selected for the sensitivity analysis in Zhongtian river watershed which is a typical small watershed of hilly region in the Taihu Lake, and then used the perturbation method to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: in the 11 terrain parameters, LS was sensitive to all the model results, RMN, RS and RVC were generally sensitive and less sensitive to the output of sediment but insensitive to the remaining results. For hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the rest results. In field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. For soil parameters, K is quite sensitive to all the results except the runoff, the four parameters of the soil's nitrogen and phosphorus ratio (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results of runoff in Zhongtian watershed show a good accuracy with the deviation less than 10% during 2005- 2010. Research results have a direct reference value on AnnAGNPS model's parameter selection and calibration adjustment. The runoff simulation results of the study area also proved that the sensitivity analysis was practicable to the parameter's adjustment and showed the adaptability to the hydrology simulation in the Taihu Lake basin's hilly region and provide reference for the model's promotion in China. PMID:25055665
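The perturbation method described above can be sketched in a few lines: each parameter is nudged by a fixed fraction and the relative change in model output is recorded. The toy model and parameter names below are illustrative stand-ins, not the AnnAGNPS implementation.

```python
# Hedged sketch of perturbation-based parameter sensitivity: perturb one
# parameter by a fixed fraction and score the relative output change.

def sensitivity_index(model, params, name, delta=0.1):
    """Relative output change per relative parameter change."""
    base = model(params)
    bumped = dict(params)
    bumped[name] = params[name] * (1.0 + delta)
    return ((model(bumped) - base) / base) / delta

# Toy runoff-like model (hypothetical): output tied strongly to 'cn',
# only weakly to 'k'.
def toy_model(p):
    return p["cn"] ** 2 + 0.01 * p["k"]

params = {"cn": 70.0, "k": 0.3}
s_cn = sensitivity_index(toy_model, params, "cn")
s_k = sensitivity_index(toy_model, params, "k")
```

Ranking the resulting indices reproduces the kind of ordering reported in the abstract: here `cn` dominates `k` by several orders of magnitude.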
[A new highly sensitive method of analysis of thrombocyte aggregation].
Gabbasov, Z A; Popov, E G; Gavrilov, I Iu; Pozin, E Ia; Markosian, R A
1989-01-01
The new method for studies of the platelet aggregation kinetics is based on the creation of a platelet stream through the optic canal and on an analysis of fluctuations in the intensity of the luminous flux that has passed through the sample. The relative dispersion of fluctuations in the intensity of the light passed through the sample is suggested as a parameter for the estimation of the platelet aggregation and for analysis of their aggregation kinetics. The method permits recording the platelet aggregation in citrate plasma, enriched for platelets, after exposure to the inductor in very low concentrations (0.05-0.15 microM ADP). Scanning electron microscopy has shown that aggregates are indeed formed in such cases, and they can be recorded from the increase of the relative dispersion in the fluctuations in the passed light intensity but cannot be recorded by Born's optic method.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. A review of the salient points of these techniques is given and illustrated by examples from aircraft design, a process that combines the best of human intellect and computer power to manipulate data.
Addition of three-dimensional isoparametric elements to NASA structural analysis program (NASTRAN)
NASA Technical Reports Server (NTRS)
Field, E. I.; Johnson, S. E.
1973-01-01
The three-dimensional family of linear, quadratic and cubic isoparametric solid elements is implemented in the NASA Structural Analysis program, NASTRAN. This work included program development, installation, testing, and documentation. The addition of these elements to NASTRAN provides a significant increase in modeling capability, particularly for structures requiring specification of temperatures, material properties, displacements, and stresses which vary throughout each individual element. Complete program documentation is presented in the form of new sections and updates for direct insertion into the three NASTRAN manuals. The results of demonstration test problems are summarized. Excellent results are obtained with the isoparametric elements for static, normal mode, and buckling analyses.
Physical interpretation and sensitivity analysis of the TOPMODEL
NASA Astrophysics Data System (ADS)
Franchini, Marco; Wendling, Jacques; Obled, Charles; Todini, Ezio
1996-02-01
The TOPMODEL is a variable contributing area conceptual model in which the predominant factors determining the formation of runoff are represented by the topography of the basin and a negative exponential law linking the transmissivity of the soil with the distance to the saturated zone below the ground level. Although conceptual, this model is frequently described as a 'physically based model' in the sense that its parameters can be measured directly in situ. In line with the analysis of various conceptual rainfall-runoff models conducted by Franchini and Pacciani (J. Hydrol., 122: 161-219, 1991), a detailed analysis of the TOPMODEL is performed to arrive at a closer understanding of the correspondence of the assumptions underpinning the model with the physical reality and, in particular, the role that topographic information (expressed by the topographic index curve) and the nature of the soil (expressed by saturated hydraulic conductivity and its decay with soil depth) have within the model itself. Also investigated is the extent to which the model parameters actually reflect the physical properties to which they refer and how far their values offset the inevitable schematisation of the model. The various applications to real situations include the Sieve basin (a tributary of the river Arno), which was used for the comparison of conceptual rainfall-runoff models described in the above-mentioned study by Franchini and Pacciani. This allows that analysis to be extended to the TOPMODEL.
Sensitivity and uncertainty analysis applied to the JHR reactivity prediction
Leray, O.; Vaglio-Gaudard, C.; Hudelot, J. P.; Santamarina, A.; Noguere, G.; Di-Salvo, J.
2012-07-01
The on-going AMMON program in the EOLE reactor at CEA Cadarache (France) provides experimental results to qualify the HORUS-3D/N neutronics calculation scheme used for the design and safety studies of the new Jules Horowitz material testing reactor (JHR). This paper presents the determination of technological and nuclear data uncertainties on the core reactivity and the propagation of the latter from the AMMON experiment to the JHR. The technological uncertainty propagation was performed with a direct perturbation methodology using the 3D French stochastic code TRIPOLI4 and a statistical methodology using the 2D French deterministic code APOLLO2-MOC, which leads to a value of 289 pcm (1σ). The nuclear data uncertainty propagation relies on a sensitivity study of the main isotopes and the use of a retroactive marginalization method applied to the JEFF 3.1.1 27Al evaluation in order to obtain a realistic multi-group covariance matrix associated with the considered evaluation. This nuclear data uncertainty propagation leads to a keff uncertainty of 624 pcm for the JHR core and 684 pcm for the AMMON reference configuration core. Finally, transposition and reduction of the prior uncertainty were made using the representativity method, which demonstrates the similarity of the AMMON experiment with the JHR (the representativity factor is 0.95). The final impact of JEFF 3.1.1 nuclear data on the Beginning Of Life (BOL) JHR reactivity calculated by HORUS-3D/N V4.0 is a bias of +216 pcm with an associated posterior uncertainty of 304 pcm (1σ). (authors)
Analysis of publically available skin sensitization data from REACH registrations 2008-2014.
Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P; Rovida, Costanza; Zhu, Hao; Hartung, Thomas
2016-01-01
The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro and 139 with only in vitro data). 21% were classified as sensitizers. The extracted skin sensitization data was analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem's 2D Tanimoto similarity metric and Gephi force layout visualization. Nine clusters of chemicals were identified by Blondel's module recognition algorithm, revealing wide module-dependent variation. Approximately 31% of mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at a 75% similarity threshold (82% balanced accuracy at a 0.95 threshold). At higher thresholds, the balanced accuracy increased. Lower similarity thresholds decrease sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
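The k-nearest-neighbor approach reported above rests on the 2D Tanimoto metric: the similarity of two binary fingerprints is |A∩B|/|A∪B|. The sketch below uses toy bit sets and a hypothetical abstaining 1-NN rule to illustrate the similarity-threshold idea; it is not the authors' pipeline, and the fingerprints are not real PubChem fingerprints.

```python
# Tanimoto similarity of two fingerprints represented as sets of "on" bits.
def tanimoto(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical 1-NN rule: return the label of the most similar training
# fingerprint, abstaining (None) below a similarity threshold, as in the
# thresholded k-NN variants discussed in the abstract.
def knn_predict(query, train, threshold=0.75):
    sim, label = max((tanimoto(query, fp), y) for fp, y in train)
    return label if sim >= threshold else None

train = [({1, 2, 3, 4}, "sensitizer"), ({7, 8, 9}, "non-sensitizer")]
pred = knn_predict({1, 2, 3, 4, 5}, train)   # similarity 4/5 = 0.8
```

Raising the threshold makes the classifier abstain more often, which is the specificity/sensitivity trade-off the abstract describes.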
Re-analysis of survival data of cancer patients utilizing additive homeopathy.
Gleiss, Andreas; Frass, Michael; Gaertner, Katharina
2016-08-01
In this short communication we present a re-analysis of homeopathic patient data in comparison to control patient data from the same Outpatient's Unit "Homeopathy in malignant diseases" of the Medical University of Vienna. In this analysis we took account of a probable immortal time bias. For patients suffering from advanced stages of cancer and surviving the first 6 or 12 months after diagnosis, respectively, the results show that utilizing homeopathy gives a statistically significant (p<0.001) advantage over control patients regarding survival time. In conclusion, bearing in mind all limitations, the results of this retrospective study suggest that patients with advanced stages of cancer might benefit from additional homeopathic treatment until a survival time of up to 12 months after diagnosis. PMID:27515878
A multiple additive regression tree analysis of three exposure measures during Hurricane Katrina.
Curtis, Andrew; Li, Bin; Marx, Brian D; Mills, Jacqueline W; Pine, John
2011-01-01
This paper analyses structural and personal exposure to Hurricane Katrina. Structural exposure is measured by flood height and building damage; personal exposure is measured by the locations of 911 calls made during the response. Using these variables, this paper characterises the geography of exposure and also demonstrates the utility of a robust analytical approach in understanding health-related challenges to disadvantaged populations during recovery. Analysis is conducted using a contemporary statistical approach, a multiple additive regression tree (MART), which displays considerable improvement over traditional regression analysis. By using MART, the percentage of improvement in R-squares over standard multiple linear regression ranges from about 62 to more than 100 per cent. The most revealing finding is the modelled verification that African Americans experienced disproportionate exposure in both structural and personal contexts. Given the impact of exposure to health outcomes, this finding has implications for understanding the long-term health challenges facing this population.
Huggett, A; Petersen, B J; Walker, R; Fisher, C E; Notermans, S H; Rombouts, F M; Abbott, P; Debackere, M; Hathaway, S C; Hecker, E F; Knaap, A G; Kuznesof, P M; Meyland, I; Moy, G; Narbonne, J F; Paakkanen, J; Smith, M R; Tennant, D; Wagstaffe, P; Wargo, J; Würtzen, G
1998-06-01
Internationally acceptable norms need to incorporate sound science and consistent risk management principles in an open and transparent manner, as set out in the Agreement on the Application of Sanitary and Phytosanitary Measures (the SPS Agreement). The process of risk analysis provides a procedure to reach these goals. The interaction between risk assessors and risk managers is considered vital to this procedure. This paper reports the outcome of a meeting of risk assessors and risk managers on specific aspects of risk analysis and its application to international standard setting for food additives and contaminants. Case studies on aflatoxins and aspartame were used to identify the key steps of the interaction process which ensure scientific justification for risk management decisions. A series of recommendations were proposed in order to enhance the scientific transparency in these critical phases of the standard setting procedure.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based in partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
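The locally well-defined notion of sensitivity the authors start from, a partial derivative of the response at one point in factor space, can be estimated by central differences. The response surface below is a made-up example with an interaction term, chosen to show that the local value depends on where it is evaluated, which is exactly why no unique "global" definition exists.

```python
# Central-difference estimate of the local sensitivity (partial
# derivative) of a response f with respect to factor i at point x.
def local_sensitivity(f, x, i, h=1e-6):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    return (f(xp) - f(xm)) / (2.0 * h)

# Illustrative response surface with an interaction term x0*x1.
def response(x):
    return x[0] ** 2 + 3.0 * x[1] + x[0] * x[1]

# At (1, 2): df/dx0 = 2*x0 + x1 = 4, df/dx1 = 3 + x0 = 4.
s0 = local_sensitivity(response, [1.0, 2.0], 0)
s1 = local_sensitivity(response, [1.0, 2.0], 1)
```

Because of the interaction term, `s0` evaluated at a different point (say (0, 0)) would give 0, not 4: the local definition is point-dependent.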
NASA Astrophysics Data System (ADS)
Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.
2016-09-01
In this work, a simplified electrochemical and thermal model that can predict both physicochemical and aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to find out their influence on the model output based on simulations under various conditions. The results gave hints on whether a parameter needs particular attention when measured or identified and on the conditions (e.g. temperature, discharge rate) under which it is the most sensitive. A specific simulation profile is designed for parameters involved in aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fitting of the simulated cell voltage with experimental data.
Sensitivity analysis applied to the construction of radial basis function networks.
Shi, D; Yeung, D S; Gao, J
2005-09-01
Conventionally, a radial basis function (RBF) network is constructed by obtaining the cluster centers of the basis functions through maximum likelihood learning. This paper proposes a novel learning algorithm for the construction of radial basis function networks using sensitivity analysis. In training, the number of hidden neurons and the centers of their radial basis functions are determined by maximization of the output's sensitivity to the training data. In classification, the minimal number of such hidden neurons with the maximal sensitivity will be the most generalizable to unknown data. Our experimental results show that our proposed sensitivity-based RBF classifier outperforms conventional RBFs and is as accurate as a support vector machine (SVM). Hence, sensitivity analysis is expected to be a new alternative way to construct RBF networks.
Electric power exchanges with sensitivity matrices: an experimental analysis
Drozdal, Martin
2001-01-01
We describe a fast and incremental method for power flow computation: fast in the sense that it can be used for real-time power flow computation, and incremental in the sense that it computes any additional increase or decrease in line congestion caused by a particular contract. This is, to the best of our knowledge, the only suitable method for real-time power flow computation that at the same time offers a powerful way of dealing with congestion contingency. Many methods for this purpose have been designed, or thought of, but they either lack speed or incrementality, or have never been coded and tested. The author is in the process of obtaining a patent on the methods, algorithms, and procedures described in this paper.
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper focuses on the proposition of a portfolio selection problem considering an investor's subjectivity and the sensitivity analysis for the change of subjectivity. Since this proposed problem is formulated as a random fuzzy programming problem due to both randomness and subjectivity presented by fuzzy numbers, it is not well-defined. Therefore, introducing Sharpe ratio which is one of important performance measures of portfolio models, the main problem is transformed into the standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
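The Sharpe ratio used above to make the random fuzzy problem well-defined is simply excess expected return divided by the standard deviation of return. A minimal sketch on made-up return data (the use of the population standard deviation here is an assumption, not taken from the paper):

```python
import statistics

# Sharpe ratio: (mean return - risk-free rate) / std. dev. of returns.
def sharpe_ratio(returns, risk_free=0.0):
    mu = statistics.mean(returns)
    sigma = statistics.pstdev(returns)  # population std. dev. (assumption)
    return (mu - risk_free) / sigma

# Illustrative portfolio return samples.
returns = [0.04, 0.02, 0.06, 0.00, 0.03]
ratio = sharpe_ratio(returns)
```

In the paper's setting the returns carry both randomness and fuzziness; maximizing this ratio is what reduces the ill-defined problem to a standard fuzzy program.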
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
Analysis of rate-sensitive skin in gas wells
Meehan, D.N.; Schell, E.J.
1983-01-01
This study documents the analysis of rate-dependent skin in a gas well. Three build-up tests and an isochronal test are analyzed in some detail. The results indicate the rate-dependent skin is due to non-Darcy flow near the wellbore. Evidence is presented that suggests the non-Darcy flow results from calcium carbonate scale partially plugging the perforations. Also included is the summary of a pressure build-up study on the wells recently drilled in Champlin's Stratton-Agua Dulce field.
An analysis of rate-sensitive skin in gas wells
Meehan, D.N.; Schell, E.J.
1983-10-01
This paper documents the analysis of rate-dependent skin in a gas well. Three build-up tests and an isochronal test are analyzed in some detail. The results indicate the rate-dependent skin is due to non-Darcy flow near the wellbore. Evidence is presented that suggests the non-Darcy flow results from calcium carbonate scale partially plugging the perforations. Also included is the summary of a pressure build-up study on the wells recently drilled in Champlin's Stratton-Agua Dulce Field.
NASA Astrophysics Data System (ADS)
Peeters, L. J. M.; Podger, G. M.; Smith, T.; Pickett, T.; Bark, R. H.; Cuddy, S. M.
2014-09-01
The simulation of routing and distribution of water through a regulated river system with a river management model will quickly result in complex and nonlinear model behaviour. A robust sensitivity analysis increases the transparency of the model and provides both the modeller and the system manager with a better understanding and insight on how the model simulates reality and management operations. In this study, a robust, density-based sensitivity analysis, developed by Plischke et al. (2013), is applied to an eWater Source river management model. This sensitivity analysis methodology is extended to not only account for main effects but also for interaction effects. The combination of sensitivity indices and scatter plots enables the identification of major linear effects as well as subtle minor and nonlinear effects. The case study is an idealized river management model representing typical conditions of the southern Murray-Darling Basin in Australia for which the sensitivity of a variety of model outcomes to variations in the driving forces, inflow to the system, rainfall and potential evapotranspiration, is examined. The model outcomes are most sensitive to the inflow to the system, but the sensitivity analysis identified minor effects of potential evapotranspiration and nonlinear interaction effects between inflow and potential evapotranspiration.
Jia, Wei; Ling, Yun; Lin, Yuanhui; Chang, James; Chu, Xiaogang
2014-04-01
A new method combining QuEChERS with ultrahigh-performance liquid chromatography and electrospray ionization quadrupole Orbitrap high-resolution mass spectrometry (UHPLC/ESI Q-Orbitrap) was developed for the highly accurate and sensitive screening of 43 antioxidants, preservatives and synthetic sweeteners in dairy products. Response surface methodology was employed to optimize a quick, easy, cheap, effective, rugged, and safe (QuEChERS) sample preparation method for the determination of 42 different analytes in dairy products for the first time. After optimization, the maximum predicted recovery was 99.33% for aspartame under the optimized conditions of 10 mL acetonitrile, 1.52 g sodium acetate, 410 mg PSA and 404 mg C18. For the matrices studied, the recovery rates of the other 42 compounds ranged from 89.4% to 108.2%, with coefficients of variation <6.4%. The UHPLC/ESI Q-Orbitrap full scan mode acquired full MS data used to identify and quantify additives, and the data-dependent scan mode obtained fragment ion spectra for confirmation. The mass accuracy typically obtained is routinely better than 1.5 ppm, and calibration is needed only once a week. The 43 compounds showed linear dynamic ranges of 0.001-1000 μg kg(-1), with correlation coefficients >0.999. The limits of detection for the analytes are in the range 0.0001-3.6 μg kg(-1). This method has been successfully applied to the screening of antioxidants, preservatives and synthetic sweeteners in commercial dairy product samples, and it is very useful for fast screening of different food additives.
Kinetic modeling and sensitivity analysis of plasma-assisted combustion
NASA Astrophysics Data System (ADS)
Togai, Kuninori
Plasma-assisted combustion (PAC) is a promising combustion enhancement technique that shows great potential for applications in a number of practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for the conditions corresponding to the experiments and the results are compared against each other. Important reaction paths were identified through path flux and sensitivity analyses. The reaction systems studied in this work are the oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel oxidation studies, reaction schemes that control the fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in plasma play an important role in initiating the reaction sequence. At low temperatures where the system exhibits a chain-terminating nature, reactions of HO2 were found to play important roles in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the
Zhang, S.; Toll, J.; Cothern, K.
1995-12-31
The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the number of parameters to be modeled probabilistically to be reduced from 16 to 5. This reduced the computational complexity of the Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
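Of the techniques listed, the rank correlation step is the easiest to sketch: Monte Carlo samples of each parameter are rank-correlated with the model output, and parameters with high |ρ| are flagged as dominant. Below is a minimal Spearman coefficient without tie handling, applied to made-up data; it is not the authors' PCHEPM tooling.

```python
# Rank positions (1-based) of each element; no tie handling, for
# illustration only.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

# Spearman rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)).
def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Made-up samples: output is monotone in the parameter, so rho = 1.
param = [0.1, 0.4, 0.2, 0.9, 0.6]
output = [1.0, 3.9, 2.1, 9.5, 6.2]
rho = spearman(param, output)
```

Because it works on ranks, this measure captures monotone but nonlinear influence, which is why it complements the local (derivative-based) analysis.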
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
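The first-order eigenvalue sensitivity relation underlying analyses like this one can be checked numerically: for a symmetric problem K v = λ v with a normalized eigenvector v, the eigenvalue change under a perturbation dK is approximately vᵀ dK v. The minimal sketch below covers only the distinct-eigenvalue case; the repeated-eigenvalue case is precisely what the paper's reparameterization addresses and is not reproduced here.

```python
import math

# Smallest eigenpair of a symmetric 2x2 matrix, solved in closed form.
def eig_sym2(K):
    a, b, d = K[0][0], K[0][1], K[1][1]
    tr, det = a + d, a * d - b * b
    lam = (tr - math.sqrt(tr * tr - 4.0 * det)) / 2.0
    v = (1.0, (lam - a) / b) if b else (1.0, 0.0)
    n = math.hypot(*v)
    return lam, (v[0] / n, v[1] / n)

# Quadratic form v^T M v.
def quad(v, M):
    return sum(v[i] * M[i][j] * v[j] for i in (0, 1) for j in (0, 1))

K = [[2.0, 1.0], [1.0, 3.0]]
dK = [[0.1, 0.0], [0.0, 0.0]]          # small stiffness perturbation
lam, v = eig_sym2(K)
dlam = quad(v, dK)                      # first-order predicted change

K2 = [[K[i][j] + dK[i][j] for j in (0, 1)] for i in (0, 1)]
lam2, _ = eig_sym2(K2)                  # exact perturbed eigenvalue
```

The first-order prediction `dlam` agrees with the exact change `lam2 - lam` up to a second-order remainder in ‖dK‖.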
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
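The 'delta' or correction form can be illustrated on a small linear system: rather than solving A x = b directly, one repeatedly solves M Δx = b - A x with a cheap approximation M of A and updates x. The sketch below uses M = diag(A) (a Jacobi-style choice) purely for illustration; it is not the approximate-factorization operator used in the paper.

```python
# Correction-form iteration: M dx = r where r = b - A x is the residual,
# then x <- x + dx. Here M = diag(A), so dx[i] = r[i] / A[i][i].
def correction_form_solve(A, b, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        # Residual computed with the full operator A (the "delta" form
        # drives the residual, not the raw equations).
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            x[i] += r[i] / A[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]   # diagonally dominant, so this converges
b = [1.0, 2.0]
x = correction_form_solve(A, b)
```

The key property the abstract exploits is that only the residual uses the exact (possibly ill-conditioned) operator; the solve step uses the well-conditioned approximation M.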
Sensitivity analysis of permeability parameters for flows on Barcelona networks
NASA Astrophysics Data System (ADS)
Rarità, Luigi; D'Apice, Ciro; Piccoli, Benedetto; Helbing, Dirk
We consider the problem of optimizing vehicular traffic flows on an urban network of Barcelona type, i.e. a square network with streets of unequal length. In particular, we describe the effects of variations of the permeability parameters, which indicate the amount of flow allowed to enter a junction from incoming roads. On each road, a model suggested by Helbing et al. (2007) [11] is considered: free and congested regimes are distinguished, characterized by an arrival flow and a departure flow, the latter depending on a permeability parameter. Moreover, we provide a rigorous derivation of the model from fluid dynamic models, using recent results of Bretti et al. (2006) [3]. For solving the dynamics at nodes of the network, a Riemann solver maximizing the through flux is used, see Coclite et al. (2005) [4] and Helbing et al. (2007) [11]. The network dynamics gives rise to complicated equations, where the evolution of fluxes at a single node may involve time-delayed terms from all other nodes. Thus we propose an alternative hybrid approach, introducing additional logic variables. Finally, we compute the effects of variations of the permeability parameters on the hybrid dynamics and test the obtained results via simulations.
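The role of a permeability parameter at a junction can be sketched with a demand/supply (arrival/departure flow) rule for one incoming and one outgoing road. The specific functional form and names below are illustrative assumptions, not the exact model of Helbing et al. (2007) or the flux-maximizing Riemann solver itself.

```python
# Toy junction rule: the through flux is limited either by the upstream
# demand or by the downstream supply scaled by the permeability parameter.
def junction_flux(demand_in, supply_out, permeability):
    """Through flux = min(upstream demand, permeability * downstream supply)."""
    return min(demand_in, permeability * supply_out)

if __name__ == "__main__":
    # Lowering the permeability throttles the junction once it binds.
    print(junction_flux(0.8, 1.0, 1.0))   # demand-limited
    print(junction_flux(0.8, 1.0, 0.5))   # permeability-limited
```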
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
NASA Technical Reports Server (NTRS)
Grady, Joseph E.; Haller, William J.; Poinsatte, Philip E.; Halbig, Michael C.; Schnulo, Sydney L.; Singh, Mrityunjay; Weir, Don; Wali, Natalie; Vinup, Michael; Jones, Michael G.; Patterson, Clark; Santelle, Tom; Mehl, Jeremy
2015-01-01
The research and development activities reported in this publication were carried out under the NASA Aeronautics Research Institute (NARI) funded project entitled "A Fully Nonmetallic Gas Turbine Engine Enabled by Additive Manufacturing." The objective of the project was to evaluate emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. The results of the activities are described in a three-part report. The first part contains the data and analysis of engine system trade studies, which were carried out to estimate the reduction in engine emissions and fuel burn enabled by advanced materials and manufacturing processes. A number of key engine components were identified in which advanced materials and additive manufacturing processes would provide the most significant benefits to engine operation. The technical scope of activities included an assessment of the feasibility of using additive manufacturing technologies to fabricate gas turbine engine components from polymer and ceramic matrix composites, which was accomplished by fabricating prototype engine components and testing them in simulated engine operating conditions. The manufacturing process parameters were developed and optimized for polymer and ceramic composites (described in detail in the second and third parts of the report). A number of prototype components (inlet guide vanes (IGVs), acoustic liners, an engine access door) were additively manufactured using high-temperature polymer materials. Ceramic matrix composite components included turbine nozzle components. In addition, the IGVs and acoustic liners were tested in simulated engine conditions in test rigs. The test results are reported and discussed in detail.
EH AND S ANALYSIS OF DYE-SENSITIZED PHOTOVOLTAIC SOLAR CELL PRODUCTION.
BOWERMAN,B.; FTHENAKIS,V.
2001-10-01
Photovoltaic solar cells based on a dye-sensitized nanocrystalline titanium dioxide photoelectrode have been researched and reported since the early 1990s. Commercial production of dye-sensitized photovoltaic solar cells has recently been reported in Australia. In this report, current manufacturing methods are described, and estimates are made of annual chemical use and emissions during production. Environmental, health and safety considerations for handling these materials are discussed. This preliminary EH and S evaluation of dye-sensitized titanium dioxide solar cells indicates that some precautions will be necessary to mitigate hazards that could result in worker exposure. Additional information required for a more complete assessment is identified.
The sensitivity analysis of the economic and economic statistical designs of the synthetic X̄ chart
NASA Astrophysics Data System (ADS)
Yeong, Wai Chung; Khoo, Michael Boon Chong; Chong, Jia Kit; Lim, Shun Jinn; Teoh, Wei Lin
2014-12-01
The economic and economic statistical designs allow the practitioner to implement the control chart in an economically optimal manner. For the economic design, the optimal chart parameters are obtained to minimize the cost, while for the economic statistical design, additional constraints in terms of the average run lengths are imposed. However, these designs involve the estimation of quite a number of input parameters, some of which are difficult to estimate accurately. Thus, a sensitivity analysis is required in order to identify which parameters need to be estimated accurately and which require only a rough estimate. This study focuses on the significance of 11 input parameters toward the optimal cost and average run lengths of the synthetic X̄ chart. The significant input parameters are identified through a two-level fractional factorial design, which allows interaction effects to be identified. An analysis of variance is performed to obtain the P-values by using the Minitab software. The significant input parameters and interactions on the optimal cost and average run lengths are identified based on a 5% significance level. The results of this study show that the input parameters which are significant for the economic design may not be significant for the economic statistical design, and vice versa. This study also shows that there are quite a number of significant interaction effects, which may mask the significance of the main effects.
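The screening logic of a two-level design can be sketched as follows. This toy uses a full 2^3 design and a made-up cost response, whereas the study uses a fractional factorial over 11 parameters with ANOVA in Minitab; the response model and effect sizes here are purely illustrative.

```python
# Main effect of each factor in a two-level design:
# effect_j = mean(response | factor_j = +1) - mean(response | factor_j = -1).
from itertools import product

def main_effects(levels, responses, n_factors):
    effects = []
    for j in range(n_factors):
        hi = [y for x, y in zip(levels, responses) if x[j] == +1]
        lo = [y for x, y in zip(levels, responses) if x[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

if __name__ == "__main__":
    design = list(product([-1, +1], repeat=3))       # full 2^3 design
    # toy cost model: factor 0 strong, factor 1 weak, factor 2 inert
    cost = [10 + 4 * a + 1 * b + 0 * c for a, b, c in design]
    print(main_effects(design, cost, 3))             # -> [8.0, 2.0, 0.0]
```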
NASA Astrophysics Data System (ADS)
Zamani, S. A.; Hossinzade, M.
2011-08-01
In the deep drawing of cups, earing defects are caused by planar anisotropy in the sheet. Several methods have been developed for predicting the optimal blank to avoid the formation of ears. In deep drawing processes, the use of an optimal blank shape not only saves material but may also reduce the occurrence of defects such as wrinkling and tearing. In addition, finding the desirable flange contour eliminates the trimming step in the deep drawing of parts with flanges. In this study, a systematic method based on sensitivity analysis has been used to find the optimal blank that yields a desirable flange contour for a cylindrical cup formed by the hydromechanical deep drawing process. With the aid of the well-known dynamic explicit analysis code ABAQUS, the optimum initial blank shape has been obtained. With the predicted optimal blank, both computer simulation and experiment were performed, and the effect of blank shape on parameters such as thickness distribution and blank holder force during the forming process was investigated. It is shown that a cup formed with an optimum blank shape has a better thickness distribution in both the rolling and transverse directions, and that the blank holder force of the modified blank is higher than that of the circular blank.
Biomechanical modeling and sensitivity analysis of bipedal running ability. II. Extinct taxa.
Hutchinson, John R
2004-10-01
Using an inverse dynamics biomechanical analysis that was previously validated for extant bipeds, I calculated the minimum amount of actively contracting hindlimb extensor muscle that would have been needed for rapid bipedal running in several extinct dinosaur taxa. I analyzed models of nine theropod dinosaurs (including birds) covering over five orders of magnitude in size. My results uphold previous findings that large theropods such as Tyrannosaurus could not run very quickly, whereas smaller theropods (including some extinct birds) were adept runners. Furthermore, my results strengthen the contention that many nonavian theropods, especially larger individuals, used fairly upright limb orientations, which would have reduced required muscular force, and hence muscle mass. Additional sensitivity analysis of muscle fascicle lengths, moment arms, and limb orientation supports these conclusions and points out directions for future research on the musculoskeletal limits on running ability. Although ankle extensor muscle support is shown to have been important for all taxa, the ability of hip extensor muscles to support the body appears to be a crucial limit for running capacity in larger taxa. I discuss what speeds were possible for different theropod dinosaurs, and how running ability evolved in an inverse relationship to body size in archosaurs.
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite-difference methods for computing the equivalent sensitivity information.
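What "sensitivity of the optimal control law" means can be sketched on a scalar LQR problem, where the algebraic Riccati equation has a closed form and the gain's sensitivity to a plant parameter can be checked by finite differences. The paper advocates analytical sensitivity equations precisely because finite differencing like this is less efficient; the toy below only illustrates the quantity being computed, and all names are hypothetical.

```python
# Scalar LQR: plant xdot = a x + b u, cost = integral(q x^2 + r u^2).
# Riccati root: P = r (a + sqrt(a^2 + b^2 q / r)) / b^2, gain K = b P / r.
import math

def lqr_gain(a, b, q, r):
    p = r * (a + math.sqrt(a * a + b * b * q / r)) / (b * b)
    return b * p / r

def gain_sensitivity(a, b, q, r, h=1e-7):
    """Finite-difference dK/da: how the optimal gain shifts with the plant pole."""
    return (lqr_gain(a + h, b, q, r) - lqr_gain(a - h, b, q, r)) / (2 * h)

if __name__ == "__main__":
    K = lqr_gain(a=-1.0, b=1.0, q=1.0, r=1.0)       # sqrt(2) - 1
    print(K, gain_sensitivity(-1.0, 1.0, 1.0, 1.0))  # dK/da = 1 - 1/sqrt(2)
```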
[Local sensitivity and its stationarity analysis for urban rainfall runoff modelling].
Lin, Jie; Huang, Jin-Liang; Du, Peng-Fei; Tu, Zhen-Shun; Li, Qing-Sheng
2010-09-01
Sensitivity analysis of urban runoff simulation is a crucial procedure for parameter identification and uncertainty analysis. Local sensitivity analysis using the Morris screening method was carried out for urban rainfall-runoff modelling based on the Storm Water Management Model (SWMM). The results showed that Area, % Imperv and Dstore-Imperv are the most sensitive parameters for both total runoff volume and peak flow. Concerning total runoff volume, the sensitivity indices of Area, % Imperv and Dstore-Imperv were 0.46 to 1.0, 0.61 to 1.0, and -0.050 to 5.9, respectively; with respect to peak flow, they were 0.48 to 0.89, 0.59 to 0.83, and 0 to 9.6, respectively. In comparison, the largest sensitivity indices (Morris) for all parameters with regard to total runoff volume and peak flow appeared in the rainfall event with the least rainfall, while smaller sensitivity indices occurred in the events with heavier rainfall. Furthermore, there is considerable variability in the sensitivity indices across rainfall events. The coefficients of variation of % Zero-Imperv have the largest values among all parameters for total runoff volume and peak flow, namely 221.24% and 228.10%. On the contrary, the coefficients of variation of conductivity are the smallest among all parameters for both total runoff volume and peak flow, namely 0.
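The elementary effects underlying Morris-style screening can be sketched as one-at-a-time finite differences taken from random base points (a simplified radial variant; the full Morris method uses trajectory designs and also reports the standard deviation of the effects). The toy "runoff" model and parameter count are illustrative, not SWMM's.

```python
# Simplified Morris-style screening: for each parameter, average the
# one-at-a-time finite-difference effect over random base points.
import random

def elementary_effects(model, n_params, n_traj=50, delta=0.1, seed=0):
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        base = model(x)
        for j in range(n_params):
            x_step = list(x)
            x_step[j] += delta
            effects[j].append((model(x_step) - base) / delta)
    return [sum(e) / len(e) for e in effects]   # mean effect per parameter

if __name__ == "__main__":
    # toy model: strongly driven by x0, weakly (nonlinearly) by x1, ignores x2
    runoff = lambda x: 5 * x[0] + 0.5 * x[1] ** 2
    print(elementary_effects(runoff, 3))
```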
Zhang, Ruihua; Guo, Hongwei; Asundi, Anand K
2016-09-20
In fringe projection profilometry, phase sensitivity is one of the important factors affecting measurement accuracy. A typical fringe projection system consists of one camera and one projector. To gain insight into its phase sensitivity, we perform in this paper a strict theoretical analysis of the dependence of phase sensitivities on fringe directions. We use epipolar geometry as a tool to derive the relationship between fringe distortions and depth variations of the measured surface, and further formularize phase sensitivity as a function of the angle between the fringe direction and the epipolar line. The results reveal that using fringes perpendicular to the epipolar lines enables us to achieve the maximum phase sensitivities, whereas if the fringes have directions along the epipolar lines, the phase sensitivities decline to zero. Based on these results, we suggest optimal fringes that are circular-arc-shaped and centered at the epipole, which give the best phase sensitivities over the whole fringe pattern, and quasi-optimal fringes, straight and perpendicular to the line connecting the fringe pattern center and the epipole, which achieve satisfactorily high phase sensitivities over the whole fringe pattern when the epipole is located far away from the fringe pattern center. The experimental results demonstrate that our analyses are practical and correct, and that our optimized fringes are effective in improving the phase sensitivities and, further, the measurement accuracies. PMID:27661597
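The qualitative dependence described above, maximal sensitivity for fringes perpendicular to the epipolar line and zero for fringes along it, suggests a sine-of-angle behavior. The unit-amplitude proportionality below is an assumption for illustration; the paper derives the exact expression from epipolar geometry.

```python
# Relative phase sensitivity vs. the angle between the fringe direction
# and the local epipolar line: 0 when parallel, maximal when perpendicular.
import math

def phase_sensitivity(angle_deg):
    return abs(math.sin(math.radians(angle_deg)))

if __name__ == "__main__":
    for a in (0, 30, 60, 90):
        print(a, round(phase_sensitivity(a), 3))
```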
Mass analysis addition to the Differential Ion Flux Probe (DIFP) study
NASA Technical Reports Server (NTRS)
Wright, K. H., Jr.; Jolley, Richard
1994-01-01
The objective of this study is to develop a technique to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The approach, conducted in conjunction with current MSFC activities, is to extend the capabilities of the Differential Ion Flux Probe (DIFP) to include a high throughput mass measurement that does not require either high voltage or contamination sensitive devices such as channeltron electron multipliers or microchannel plates. This will significantly reduce the complexity and expense of instrument fabrication, testing, and integration of flight hardware compared to classical mass analyzers. The feasibility of the enhanced DIFP has been verified by using breadboard test models in a controlled plasma environment. The ability to manipulate particles through the instrument regardless of incident angle, energy, or ionic component has been amply demonstrated. The energy analysis mode is differential and leads directly to a time-of-flight mass measurement. With the new design, the DIFP will separate multiple ion streams and analyze each stream independently for ion flux intensity, velocity (including direction of motion), mass, and temperature (or energy distribution). In particular, such an instrument will be invaluable on follow-on electrodynamic TSS missions and, possibly, for environmental monitoring on the space station.
Dynamics and sensitivity analysis of high-frequency conduction block
NASA Astrophysics Data System (ADS)
Ackermann, D. Michael; Bhadra, Niloy; Gerges, Meana; Thomas, Peter J.
2011-10-01
The local delivery of extracellular high-frequency stimulation (HFS) has been shown to be a fast acting and quickly reversible method of blocking neural conduction and is currently being pursued for several clinical indications. However, the mechanism for this type of nerve block remains unclear. In this study, we investigate two hypotheses: (1) depolarizing currents promote conduction block via inactivation of sodium channels and (2) the gating dynamics of the fast sodium channel are the primary determinant of the minimal blocking frequency. Hypothesis 1 was investigated using a combined modeling and experimental study to investigate the effect of depolarizing and hyperpolarizing currents on high-frequency block. The results of the modeling study show that both depolarizing and hyperpolarizing currents play an important role in conduction block and that the conductance to each of three ionic currents increases relative to resting values during HFS. However, depolarizing currents were found to promote the blocking effect, and hyperpolarizing currents were found to diminish the blocking effect. Inward sodium currents were larger than the sum of the outward currents, resulting in a net depolarization of the nodal membrane. Our experimental results support these findings and closely match results from the equivalent modeling scenario: intra-peritoneal administration of the persistent sodium channel blocker ranolazine resulted in an increase in the amplitude of HFS required to produce conduction block in rats, confirming that depolarizing currents promote the conduction block phenomenon. Hypothesis 2 was investigated using a spectral analysis of the channel gating variables in a single-fiber axon model. The results of this study suggested a relationship between the dynamical properties of specific ion channel gating elements and the contributions of corresponding conductances to block onset. Specifically, we show that the dynamics of the fast sodium inactivation gate are
Fitzpatrick, Clare K; Baldwin, Mark A; Rullkoetter, Paul J; Laz, Peter J
2011-01-01
Many aspects of biomechanics are variable in nature, including patient geometry, joint mechanics, implant alignment and clinical outcomes. Probabilistic methods have been applied in computational models to predict distributions of performance given uncertain or variable parameters. Sensitivity analysis is commonly used in conjunction with probabilistic methods to identify the parameters that most significantly affect the performance outcome; however, it does not consider coupled relationships for multiple output measures. Principal component analysis (PCA) has been applied to characterize common modes of variation in shape and kinematics. In this study, a novel, combined probabilistic and PCA approach was developed to characterize relationships between multiple input parameters and output measures. To demonstrate the benefits of the approach, it was applied to implanted patellofemoral (PF) mechanics to characterize relationships between femoral and patellar component alignment and loading and the resulting joint mechanics. Prior studies assessing PF sensitivity have performed individual perturbation of alignment parameters. However, the probabilistic and PCA approach enabled a more holistic evaluation of sensitivity, including identification of combinations of alignment parameters that most significantly contributed to kinematic and contact mechanics outcomes throughout the flexion cycle, and the predictive capability to estimate joint mechanics based on alignment conditions without requiring additional analysis. The approach showed comparable results for Monte Carlo sampling with 500 trials and the more efficient Latin Hypercube sampling with 50 trials. The probabilistic and PCA approach has broad applicability to biomechanical analysis and can provide insight into the interdependencies between implant design, alignment and the resulting mechanics.
Cacuci, Dan G.; Ionescu-Bujor, Mihaela
2004-07-15
Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks as follows: 1. Since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems). 2. Since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
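Of the sampling-based methods surveyed above, Latin Hypercube sampling is easy to sketch: each of the d dimensions is split into n equal-probability strata, and each stratum is used exactly once per dimension. This is a minimal illustration on the unit hypercube, not the stratified importance variants also mentioned.

```python
# Latin Hypercube sampling on [0,1)^d: one point per stratum per dimension,
# with the strata shuffled independently in each dimension.
import random

def latin_hypercube(n, d, seed=0):
    rng = random.Random(seed)
    columns = []
    for _ in range(d):
        col = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(col)                                  # decorrelate dimensions
        columns.append(col)
    return list(zip(*columns))                            # n points in [0,1)^d

if __name__ == "__main__":
    print(latin_hypercube(5, 2))
```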
Global in Time Analysis and Sensitivity Analysis for the Reduced NS-α Model of Incompressible Flow
NASA Astrophysics Data System (ADS)
Rebholz, Leo; Zerfas, Camille; Zhao, Kun
2016-09-01
We provide a detailed global in time analysis, and sensitivity analysis and testing, for the recently proposed (by the authors) reduced NS-α model. We extend the known analysis of the model to the global in time case by proving it is globally well-posed, and also prove some new results for its long time treatment of energy. We also derive a PDE system that describes the sensitivity of the model with respect to the filtering radius parameter, and prove it is well-posed. An efficient numerical scheme for the sensitivity system is then proposed and analyzed, and proven to be stable and optimally accurate. Finally, two physically meaningful test problems are simulated: channel flow past a cylinder (including lift and drag calculations) and turbulent channel flow with Re_τ = 590. The numerical results reveal that sensitivity is created near boundaries, and thus this is where the choice of the filtering radius is most critical.
Sensitivity Theory and Mental Retardation: Why Functional Analysis Is Not Enough.
ERIC Educational Resources Information Center
Reiss, Steven; Havercamp, Susan M.
1997-01-01
Discusses problems of using applied behavior analysis for individuals with mental retardation who have behavior disorders and presents a sensitivity model of motivation that stresses analysis in terms of aberrant environments, aberrant contingencies, and aberrant motivation. Describes implications for communication theory, assessment, and…
Tang, Christoph M; Stroud, Dave; Mackinnon, Fiona; Makepeace, Katherine; Plested, Joyce; Moxon, E Richard; Chalmers, Ronald
2002-02-01
Lipopolysaccharide (LPS) is important for the virulence of Neisseria meningitidis, and is the target of immune responses. We took advantage of a monoclonal antibody (Mab B5) that recognises phosphoethanolamine (PEtn) attached to the inner core of meningococcal LPS to identify genes required for the addition of PEtn to LPS. Insertional mutants that lost Mab B5 reactivity were isolated and characterised, but failed to yield genes directly responsible for PEtn substitution. Subsequent genetic linkage analysis was used to define a region of DNA containing a single intact open reading frame which is sufficient to confer B5 reactivity to a B5 negative meningococcal isolate. The results provide an initial characterisation of the genetic basis of a key, immunodominant epitope of meningococcal LPS.
Kabala, Z.J.; Milly, P.C.D. )
1990-04-01
Sensitivity analysis is one of the tools available for analyzing the effects of parameter uncertainty and soil heterogeneity on the transport of moisture in unsaturated, similar porous media. Direct differentiation of the discretized Richards equation with respect to parameters defining spatial variability leads to linear systems of equations for elementary sensitivities that are readily solved in conjunction with the original equation. These elementary sensitivities can be easily transformed into approximations of functional sensitivities and into sensitivities of boundary fluxes. A numerical implementation of this technique in one space dimension yields results that are consistent with exact analytical solutions and with numerical perturbation calculations. The effects of a given heterogeneity can be modeled adequately provided that the maximum relative change of the scale factor from one grid to the next does not exceed a number on the order of unity.
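The direct-differentiation idea described above can be sketched on a simple ODE instead of the discretized Richards equation: differentiating the state equation dy/dt = -k y with respect to k yields a linear sensitivity equation ds/dt = -k s - y that is integrated alongside the original one. This is a minimal illustration of the principle, not the Richards-equation implementation.

```python
# Direct differentiation: integrate the state y and its elementary
# sensitivity s = dy/dk together with explicit Euler steps.
def solve_with_sensitivity(y0, k, t_end, n_steps=10000):
    dt = t_end / n_steps
    y, s = y0, 0.0                 # sensitivity starts at zero
    for _ in range(n_steps):
        y_new = y + dt * (-k * y)  # original equation
        s = s + dt * (-k * s - y)  # sensitivity equation (same linear solve)
        y = y_new
    return y, s

if __name__ == "__main__":
    y, s = solve_with_sensitivity(1.0, 2.0, 0.5)
    print(y, s)   # exact values: e^-1 ~ 0.3679 and -0.5 e^-1 ~ -0.1839
```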
NASA Astrophysics Data System (ADS)
Kim, Sungho; Kim, Heekang
2016-10-01
This paper presents a weathering sensitivity analysis method for the safety diagnosis of Seongsan Ilchulbong Peak using hyperspectral images. Remote sensing-based safety diagnosis is important for preventing accidents in famous mountains. A hyperspectral correlation-based method is proposed to evaluate the weathering sensitivity. The three issues are how to reduce the illumination effect, how to remove camera motion while acquiring images on a boat, and how to define the weathering sensitivity index. A novel minimum subtraction and maximum normalization (MSM-norm) method is proposed to solve the shadow and specular illumination problem. Geometrically distorted hyperspectral images are corrected by estimating the borderline of the mountain and sea surface. The final issue is solved by proposing a weathering sensitivity index (WS-Index) based on a spectral angle mapper. Real experiments on the Seongsan Ilchulbong Peak (UNESCO, World Natural Heritage) highlighted the feasibility of the proposed method in safety diagnosis by the weathering sensitivity index.
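The spectral angle mapper underlying the proposed WS-Index can be sketched as the angle between two spectra treated as vectors, which is insensitive to overall brightness scaling (the property that motivates it after illumination normalization). How the WS-Index combines such angles is defined in the paper and not reproduced here.

```python
# Spectral angle mapper: angle (radians) between two spectra as vectors.
import math

def spectral_angle(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    # clamp guards against tiny floating-point overshoot outside [-1, 1]
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

if __name__ == "__main__":
    ref = [0.2, 0.5, 0.9]
    print(spectral_angle(ref, [0.4, 1.0, 1.8]))  # scaled copy -> angle ~ 0
    print(spectral_angle(ref, [0.9, 0.5, 0.2]))  # different shape -> angle > 0
```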
Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.
2014-01-01
We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Park, Aaron; Chung, Jin Hyuk; Choi, Namhyun; Park, Jun-Qyu; Cho, Soo Gyeong; Baek, Sung-June; Choo, Jaebum
2013-06-01
Recently, the development of methods for the identification of explosive materials that are faster, more sensitive, easier to use, and more cost-effective has become a very important issue for homeland security and counter-terrorism applications. However, the limited applicability of several analytical methods, such as the incapability of detecting explosives in a sealed container, the limited portability of instruments, and false alarms due to an inherent lack of selectivity, has motivated increased interest in the application of Raman spectroscopy for the rapid detection and identification of explosive materials. Raman spectroscopy has received growing interest due to its stand-off capacity, which allows samples to be analyzed at a distance from the instrument. In addition, Raman spectroscopy has the capability to detect explosives in sealed containers such as glass or plastic bottles. We report a rapid and sensitive recognition technique for explosive compounds using Raman spectroscopy and principal component analysis (PCA). Seven hundred Raman spectra (50 measurements per sample) for 14 selected explosives were collected and pretreated with noise suppression and baseline elimination methods. PCA, a well-known multivariate statistical method, was applied for the evaluation, feature extraction, and identification of the measured spectra. Here, a broad wavenumber range (200-3500 cm-1) of the collected spectra was used for the classification of the explosive samples into separate classes. It was found that three principal components achieved a 99.3% classification rate in the sample set. The results show that Raman spectroscopy in combination with PCA is well suited for the identification and differentiation of explosives in the field.
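The PCA step described above can be sketched as mean-centering the spectra matrix and projecting onto the leading right singular vectors. The synthetic "spectra" below are illustrative; the study used 700 measured Raman spectra over 200-3500 cm-1 after noise suppression and baseline removal.

```python
# PCA scores via SVD: rows of X are spectra; project the centered data
# onto the top principal components.
import numpy as np

def pca_scores(X, n_components=3):
    Xc = X - X.mean(axis=0)                         # center each channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                 # scores in PC subspace

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grid = np.linspace(0, 1, 50)
    # two synthetic classes: Gaussian peaks at different positions + noise
    class_a = np.exp(-((grid - 0.3) ** 2) / 0.01) + 0.05 * rng.standard_normal((20, 50))
    class_b = np.exp(-((grid - 0.7) ** 2) / 0.01) + 0.05 * rng.standard_normal((20, 50))
    scores = pca_scores(np.vstack([class_a, class_b]), 2)
    # the first PC separates the two classes (opposite-signed means)
    print(scores[:20, 0].mean(), scores[20:, 0].mean())
```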
Personalization of models with many model parameters: an efficient sensitivity analysis approach.
Donders, W P; Huberts, W; van de Vosse, F N; Delhaas, T
2015-10-01
Uncertainty quantification and global sensitivity analysis are indispensable for patient-specific applications of models that enhance diagnosis or aid decision-making. Variance-based sensitivity analysis methods, which apportion each fraction of the output uncertainty (variance) to the effects of individual input parameters or their interactions, are considered the gold standard. The variance portions are called the Sobol sensitivity indices and can be estimated by a Monte Carlo (MC) approach (e.g., Saltelli's method [1]) or by employing a metamodel (e.g., the (generalized) polynomial chaos expansion (gPCE) [2, 3]). All these methods require a large number of model evaluations when estimating the Sobol sensitivity indices for models with many parameters [4]. To reduce the computational cost, we introduce a two-step approach. In the first step, a subset of important parameters is identified for each output of interest using the screening method of Morris [5]. In the second step, a quantitative variance-based sensitivity analysis is performed using gPCE. Efficient sampling strategies are introduced to minimize the number of model runs required to obtain the sensitivity indices for models considering multiple outputs. The approach is tested using a model that was developed for predicting post-operative flows after creation of a vascular access for renal failure patients. We compare the sensitivity indices obtained with the novel two-step approach with those obtained from a reference analysis that applies Saltelli's MC method. The two-step approach was found to yield accurate estimates of the sensitivity indices at two orders of magnitude lower computational cost. PMID:26017545
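The first (screening) step of the two-step approach can be illustrated with a minimal hand-rolled version of the Morris method. The toy model and all settings below are assumptions for the sketch, not the vascular-access model of the paper:

```python
import numpy as np

def morris_screen(f, k, r=20, delta=0.25, seed=0):
    """Crude Morris screening sketch: r random OAT trajectories in [0, 1]^k.
    Returns mu_star, the mean absolute elementary effect per parameter."""
    rng = np.random.default_rng(seed)
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)
        fx = f(x)
        for i in rng.permutation(k):      # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            f_new = f(x_new)
            ee[t, i] = (f_new - fx) / delta
            x, fx = x_new, f_new
    return np.abs(ee).mean(axis=0)

# Toy model: output depends strongly on x0, weakly on x1, not at all on x2,
# so screening should flag x2 as safe to drop before the gPCE step.
mu_star = morris_screen(lambda x: 10.0 * x[0] + 0.1 * x[1], 3)
```

Parameters with negligible mu_star would be fixed at nominal values, leaving only the important subset for the (much more expensive) variance-based analysis.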
Spatial sensitivity analysis of snow cover data in a distributed rainfall-runoff model
NASA Astrophysics Data System (ADS)
Berezowski, T.; Nossent, J.; Chormański, J.; Batelaan, O.
2015-04-01
As the availability of spatially distributed data sets for distributed rainfall-runoff modelling is increasing strongly, more attention should be paid to the influence of data quality on the calibration. While much progress has been made on using distributed data in simulations of hydrological models, the sensitivity of model results to spatial data is not well understood. In this paper we develop a spatial sensitivity analysis method for spatial input data (snow cover fraction, SCF) in a distributed rainfall-runoff model, to investigate how strongly the model is subjected to SCF uncertainty in different zones of the model domain. The analysis focussed on the relation between the SCF sensitivity and the physical and spatial parameters and processes of a distributed rainfall-runoff model. The methodology is tested for the Biebrza River catchment, Poland, for which a distributed WetSpa model is set up to simulate 2 years of daily runoff. The sensitivity analysis uses the Latin-Hypercube One-factor-At-a-Time (LH-OAT) algorithm, which employs different response functions for each spatial parameter representing a 4 × 4 km snow zone. The results show that the spatial patterns of sensitivity can be easily interpreted by the co-occurrence of different environmental factors such as geomorphology, soil texture, land use, precipitation and temperature. Moreover, the spatial pattern of sensitivity under different response functions is related to different spatial parameters and physical processes. The results clearly show that the LH-OAT algorithm is suitable for our spatial sensitivity analysis approach and that the SCF is spatially sensitive in the WetSpa model. The developed method can be easily applied to other models and other spatial data.
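A simplified sketch of the LH-OAT idea (Latin-Hypercube base points, each followed by one-factor-at-a-time perturbations) on a toy model. The response function, the relative-effect sensitivity measure, and all settings are illustrative assumptions, not the WetSpa setup:

```python
import numpy as np

def lh_oat(f, k, n=30, frac=0.05, seed=1):
    """LH-OAT sketch: for each Latin-Hypercube base point in [0, 1]^k,
    perturb each factor by the fraction `frac` and average the relative
    change in the model response (assumed positive here)."""
    rng = np.random.default_rng(seed)
    base = np.empty((n, k))
    for j in range(k):
        # one stratified sample per interval, randomly ordered per column
        base[:, j] = (rng.permutation(n) + rng.random(n)) / n
    acc = np.zeros(k)
    for x in base:
        fx = f(x)
        for j in range(k):
            xp = x.copy()
            xp[j] *= 1.0 + frac
            acc[j] += abs((f(xp) - fx) / fx) / frac
    return acc / n

# Toy response: x0 dominates, x1 is secondary, x2 is nearly inert.
sens = lh_oat(lambda x: 5.0 * x[0] + 1.0 * x[1] + 0.01 * x[2] + 1.0, 3)
```

In the paper each "factor" is the SCF of one 4 × 4 km snow zone, so the ranking becomes a spatial sensitivity map rather than a parameter list.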
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers
NASA Technical Reports Server (NTRS)
Baddourah, Majdi A.; Nguyen, Duc T.
1993-01-01
Parallel-vector solution strategies for the generation and assembly of element matrices, solution of the resulting system of linear equations, calculation of the unbalanced loads, displacements, and stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton-Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.
NASA Astrophysics Data System (ADS)
Kannan, N.; White, S. M.; Worrall, F.; Whelan, M. J.
2007-01-01
Distributed models used in hydrological modelling have many parameters. To get useful results from the model, every parameter is required to have a sensible value. Usually a calibration is undertaken to reduce the uncertainties associated with the estimation of model parameters. To ensure efficient calibration, a sensitivity analysis is conducted to identify the most sensitive parameters. This paper describes simple and efficient approaches for sensitivity analysis, calibration and identification of the best methodology within a modelling framework. For this study, the SWAT-2000 model was used on a small catchment of 141.5 ha in the Unilever Colworth estate in Bedfordshire, England. Acceptable performance in hydrological modelling, and correct simulation of the processes driving the water balance, were essential requirements for subsequent pesticide modelling. SWAT gives various options for both evapotranspiration and runoff modelling, and identification of the best modelling option for these processes is a prerequisite to achieving these requirements. As a first step, a sensitivity analysis was conducted to identify the sensitive parameters affecting stream flow for subsequent use in stream flow calibration. Hydrological modelling was carried out for the catchment for the period September 1999 to May 2002 inclusive, using both daily and sub-daily rainfall data. The Hargreaves and Penman-Monteith methods of evapotranspiration estimation and the NRCS curve number (CN) and Green-Ampt infiltration methods of runoff estimation were used, in four different combinations, to identify the combination of methodologies that best reproduced the observed data. In addition, as the initial calibration period, starting in September 1999, was substantially wetter than the corresponding validation period that followed, the calibration and validation periods were interchanged to test the impact of calibrating on wet or dry periods.
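Two of the methods combined in the study have compact textbook forms. The sketch below implements the Hargreaves reference-evapotranspiration equation and the NRCS curve-number runoff equation; the input values are illustrative, not Colworth data:

```python
def hargreaves_et0(ra, tmean, tmax, tmin):
    """Hargreaves reference evapotranspiration (mm/day).
    ra: extraterrestrial radiation in mm/day evaporation equivalent;
    temperatures in degrees C."""
    return 0.0023 * ra * (tmean + 17.8) * (tmax - tmin) ** 0.5

def scs_runoff(p, cn):
    """NRCS curve-number runoff depth (mm) for event rainfall p (mm),
    using the standard initial abstraction Ia = 0.2 S."""
    s = 25400.0 / cn - 254.0            # potential maximum retention (mm)
    ia = 0.2 * s                        # initial abstraction (mm)
    return 0.0 if p <= ia else (p - ia) ** 2 / (p - ia + s)

# Hypothetical mid-latitude spring day and a 50 mm storm on CN = 75 soil.
et0 = hargreaves_et0(12.0, 15.0, 22.0, 8.0)
q = scs_runoff(50.0, 75.0)
```

Penman-Monteith and Green-Ampt, the other two options in the study's four combinations, need more inputs (vapour pressure, wind, soil hydraulic properties) and are omitted from the sketch.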
Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells
NASA Astrophysics Data System (ADS)
Foad, Basma; Takeda, Toshikazu
2015-12-01
The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses to UO2 and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite-dilution cross-sections, where the self-shielding effect is taken into account. Two types of reactivity are considered, Doppler reactivity and coolant void reactivity; for each type, the sensitivities are calculated for small and large perturbations. The results demonstrate that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of the coolant void reactivity is much greater than that of the Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.
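Propagating cross-section covariance data through sensitivity coefficients to a response uncertainty is conventionally done with the "sandwich" rule, var = S^T C S. A minimal numpy sketch with hypothetical 3-group values (not the paper's data):

```python
import numpy as np

# Hypothetical 3-group example: S holds relative sensitivities of a
# reactivity response to three cross-sections, C the relative covariance
# matrix of those cross-section data.
S = np.array([0.8, -0.3, 0.1])
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 9.0, 0.5],
              [0.0, 0.5, 1.0]]) * 1e-4

rel_var = S @ C @ S                 # sandwich rule: relative variance
rel_sd = np.sqrt(rel_var)           # relative standard deviation
```

In practice S spans many nuclides, reactions, and energy groups, and C comes from an evaluated covariance library such as the ENDF/B-VII data the paper uses; the algebra is unchanged.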
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix form of the state equation is simply A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
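The two ingredients named above, Newton-Raphson solution of the state equation and direct differentiation for the sensitivities, can be shown on a scalar toy problem. The nonlinearity A(x) = 1 + p*x is an assumed stand-in, not the report's heat transfer model:

```python
# Toy state equation R(x; p) = A(x)*x - c = 0 with A(x) = 1 + p*x.
p, c = 0.5, 2.0
R = lambda x: (1.0 + p * x) * x - c       # residual of the state equation
dR = lambda x: 1.0 + 2.0 * p * x          # tangent dR/dx

# Newton-Raphson iteration on the residual.
x = 1.0
for _ in range(20):
    x -= R(x) / dR(x)

# Direct differentiation: at the converged state dR/dx * dx/dp + dR/dp = 0,
# and here dR/dp = x**2, so the sensitivity of the state to p is:
dx_dp = -(x * x) / dR(x)
```

The key point the report makes carries over: the sensitivity solve reuses the already-factored tangent dR/dx from the last Newton step, which is why the added coding effort is minimal.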
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of commercial software packages have been integrated into the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J
2015-05-15
Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely either on bioluminescent bacteria and a specific medium solution (i.e. Microtox(®)) or on low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed ferricyanide monitoring without interference from biomass scattering. In addition, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivity. Half-maximal effective concentrations (EC50) obtained after the 10 min bioassay (2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol, respectively) were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
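EC50 values like those quoted above are typically obtained by fitting a dose-response curve to inhibition measurements. A minimal sketch assuming a Hill model and a noise-free synthetic dilution series (both are assumptions for illustration, not the paper's raw data):

```python
import numpy as np

def hill(c, ec50, n=1.0):
    """Assumed Hill dose-response: fractional inhibition of the
    ferricyanide-reduction rate at toxicant concentration c."""
    return c ** n / (ec50 ** n + c ** n)

# Hypothetical bioassay readout at a dilution series (mg/L), generated
# noise-free from a "true" EC50 of 2.9 mg/L for the sketch.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])
obs = hill(conc, ec50=2.9)

# Least-squares EC50 estimate by a simple grid search.
grid = np.linspace(0.1, 30.0, 3000)
sse = [((hill(conc, g) - obs) ** 2).sum() for g in grid]
ec50_hat = grid[int(np.argmin(sse))]
```

With real, noisy kinetic data one would fit both EC50 and the Hill slope n with a proper nonlinear least-squares routine; the grid search just keeps the sketch dependency-free.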
Durakovic, Amal; Andersen, Henrik; Christiansen, Anders; Hammen, Irena
2015-01-01
Background: The purpose of the current study was to clarify the sensitivity and complication rate of radial endobronchial ultrasound (EBUS) without the use of a guide sheath (GS) or fluoroscopy for lung cancer (LC), by measuring the distance from the orifice of the bronchus to the pulmonary lesion, and to analyze factors that can predict the diagnostic outcome. Materials and methods: A total of 147 patients with peripheral pulmonary lesions (PPL) underwent radial EBUS-guided transbronchial biopsy (TBB) between August 1, 2013, and August 31, 2014. We retrospectively analyzed radiological data, diagnostic work-up in everyday clinical settings, final diagnosis and complication rates, as well as factors influencing the diagnostic outcome. Results: 63.9% of PPLs were visualized by ultrasound. A definitive malignant diagnosis was established in 39 patients (26.5%) using radial EBUS. In the remaining 108 patients, additional procedures were performed. LC diagnosis was missed in 40 cases, resulting in a sensitivity of 49%. For malignant lesions visualized by radial EBUS, the sensitivity was 60%, compared with 24% for lesions not visualized. For malignant lesions, logistic regression was performed to identify the factors that had a significant influence on visualization of the lesion and on diagnostic yield. Logistic regression analysis showed significant odds ratios (OR) for visualization depending on the location of the lesion; upper lobe lesions were identified more frequently, with an OR of 3.85 (95% CI 1.42-10.98, p=0.009). Size above 30 mm had a non-significant OR of 2.11 (95% CI 0.80-5.73, p=0.134) for visualization. Diagnostic yield was only significantly influenced by visualization with the radial EBUS, OR 3.70 (95% CI 1.35-11.02, p=0.014). Location (p=0.745) and size above 30 mm (p=0.308) showed no significant increase in diagnostic yield. Other lesion characteristics defined on computed tomography, such as distance to carina and pleura, did not
Changes of pore systems and infiltration analysis in two degraded soils after rock fragment addition
NASA Astrophysics Data System (ADS)
Gargiulo, Laura; Coppola, Antonio; De Mascellis, Roberto; Basile, Angelo; Mele, Giacomo; Terribile, Fabio
2015-04-01
Many soils in arid and semi-arid environments contain high amounts of rock fragments as a result of both natural soil-forming processes and human activities. The amount, size and shape of rock fragments strongly influence soil structure development and therefore many soil processes (e.g. infiltration, water storage, solute transport, etc.). The aim of this work was to test the effects of an addition of rock fragments on both the infiltration process and soil pore formation. The test was performed on two different soils, a clayey soil (Alfisol) and a clay loam soil (Entisol), both showing a naturally compact structure and water stagnation problems in the field. Three concentrations of 4-8 mm rock fragments (15%, 25% and 35%) were added to air-dried soils, and the repacked samples were subjected to nine wet/dry cycles in order to induce soil structure formation and its stabilization. Infiltration was monitored with a tension infiltrometer at a pressure head of -12 cm imposed at the soil surface and kept constant for a certain time. Moreover, k(h) was determined by imposing -9, -6, -3 and -1 cm at the soil surface and applying a steady-state solution. After the hydrological measurements the soil samples were resin-impregnated, and images of vertical sections of the samples, acquired at 20 µm resolution, were analyzed in order to quantify the pore size distribution. The latter was calculated using the "successive opening" approach. The Entisol samples showed similar infiltration curves I(t) among the four treatments, with the higher stone contents (i.e. 25% and 35%) showing a faster rise in early-time (< 2 min) infiltration; the Alfisol curves are more spread out, showing higher variability: limiting the analysis to the first three, despite their similar shape, the higher the stone content the lower the cumulated infiltration. The behavior of the 35% sample diverges from the others: it shows a fast rising step at the very early time (< 2 min) followed by a
Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-03-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems in which multiple, often conflicting objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, which combines two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with a multiobjective optimization (MOO) approach, ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions: standardized root mean square error of logarithmic transformed discharge, water balance index, and mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single-objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe efficiency is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of these objective functions. Multiobjective sensitivity analysis and optimization
Analysis of Time to Event Outcomes in Randomized Controlled Trials by Generalized Additive Models
Argyropoulos, Christos; Unruh, Mark L.
2015-01-01
Background: Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. Methods: By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks or even differences in restricted mean survival time between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population. Findings: PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. Conclusions: By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial
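The follow-up-splitting step relies on Gauss-Lobatto quadrature nodes, which are the interval endpoints plus the roots of the derivative of a Legendre polynomial. A short numpy sketch (the node count is an arbitrary example, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gauss_lobatto_nodes(n):
    """Nodes of the n-point Gauss-Lobatto rule on [-1, 1]: the two
    endpoints plus the roots of P'_{n-1}, the derivative of the
    degree-(n-1) Legendre polynomial."""
    c = np.zeros(n)
    c[-1] = 1.0                               # coefficients of P_{n-1}
    interior = np.sort(leg.legroots(leg.legder(c)).real)
    return np.concatenate(([-1.0], interior, [1.0]))

# Mapping these nodes onto [0, T] gives the time points at which each
# subject's follow-up is split before fitting the Poisson GAM.
nodes5 = gauss_lobatto_nodes(5)
```

Because the endpoints are nodes, entry and exit times of each follow-up interval contribute exactly, which is why Lobatto (rather than plain Gauss-Legendre) rules are the natural choice for cumulative-hazard quadrature.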
Comparative proteomic analysis of drug sodium iron chlorophyllin addition to Hep 3B cell line.
Zhang, Jun; Wang, Wenhai; Yang, Fengying; Zhou, Xinwen; Jin, Hong; Yang, Peng-yuan
2012-09-21
The human hepatoma Hep 3B cell line was chosen as an experimental model for in vitro drug screening. The drugs included chlorophyllin and its derivatives, such as fluo-chlorophyllin, sodium copper chlorophyllin, and sodium iron chlorophyllin. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) method was used in this study to obtain the primary screening results, which showed that sodium iron chlorophyllin had the best LC(50) value. Proteomic analysis was then performed for further investigation of the effect of sodium iron chlorophyllin addition on the Hep 3B cell line. The proteins identified from a total protein extract of Hep 3B before and after the drug addition were compared by two-dimensional gel electrophoresis. Then 32 three-fold differentially expressed proteins, comprising 29 unique proteins, were successfully identified by MALDI-TOF-TOF-MS. These proteins include proliferating cell nuclear antigen (PCNA), T-complex protein, heterogeneous nuclear protein, nucleophosmin, heat shock protein A5 (HspA5) and peroxiredoxin. HspA5 is one of the proteins involved in protecting cancer cells against stress-induced apoptosis through various mechanisms. Peroxiredoxin has an antioxidant function, is related to cell proliferation and signal transduction, and can protect other proteins from oxidation. Peroxiredoxin has a close relationship with cancer and could eventually become a disease biomarker. These findings might help to develop a novel treatment method for carcinoma.
Methane flux in non-wetland soils in response to nitrogen addition: a meta-analysis.
Aronson, E L; Helliker, B R
2010-11-01
The controls on methane (CH4) flux into and out of soils are not well understood. Environmental variables including temperature, precipitation, and nitrogen (N) status can have strong effects on the magnitude and direction (e.g., uptake vs. release) of CH4 flux. To better understand the interactions between CH4-cycling microorganisms and N in the non-wetland soil system, a meta-analysis was performed on published literature comparing CH4 flux in N amended and matched control plots. An appropriate study index was developed for this purpose. It was found that smaller amounts of N tended to stimulate CH4 uptake while larger amounts tended to inhibit uptake by the soil. When all other variables were accounted for, the switch occurred at 100 kg N x ha(-1) x yr(-1). Managed land and land with a longer duration of fertilization showed greater inhibition of CH4 uptake with added N. These results support the hypotheses that large amounts of available N can inhibit methanotrophy, but also that methanotrophs in upland soils can be N limited in their consumption of CH4 from the atmosphere. There were interactions between other variables and N addition on the CH4 flux response: lower temperature and, to a lesser extent, higher precipitation magnified the inhibition of CH4 uptake due to N addition. Several mechanisms that may cause these trends are discussed, but none could be conclusively supported with this approach. Further controlled and in situ study should be undertaken to isolate the correct mechanism(s) responsible and to model upland CH4 flux. PMID:21141185
NASA Astrophysics Data System (ADS)
Li, Hui-Chuan
2014-10-01
This study examines students' procedural and conceptual achievement in fraction addition in England and Taiwan. A total of 1209 participants (561 British students and 648 Taiwanese students) aged 12 and 13 were recruited from England and Taiwan to take part in the study. A quantitative design based on a self-designed written test was adopted. The test has two major parts: a concept part and a skill part. The former is concerned with students' conceptual knowledge of fraction addition, and the latter with students' procedural competence when adding fractions. There were statistically significant differences in both the concept and skill parts between the British and Taiwanese groups, with the latter having a higher score. The analysis of the students' responses to the skill section indicates that the superiority of the Taiwanese students' procedural achievement over that of their British peers is because most of the former are able to apply algorithms to adding fractions far more successfully than the latter. Earlier, Hart [1] reported that around 30% of the British students in their study used an erroneous strategy (adding tops and bottoms, for example, 2/3 + 1/7 = 3/10) when adding fractions. This study finds that nearly the same percentage of the British group was still using this erroneous strategy to add fractions as Hart found in 1981. The study also provides evidence that students' understanding of fractions is confused and incomplete, even among those who can successfully perform the operations. More research is needed to help students make sense of the operations and eventually attain computational competence with meaningful grounding in the domain of fractions.
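The misconception and the correct algorithm contrasted in the study can be stated in two lines; Python's `fractions` module applies the common-denominator algorithm that the procedurally stronger students used:

```python
from fractions import Fraction

# The "add tops and bottoms" misconception, applied to Hart's item 2/3 + 1/7:
wrong = Fraction(2 + 1, 3 + 7)             # gives 3/10

# The correct algorithm: rewrite over the common denominator 21
# (2/3 = 14/21, 1/7 = 3/21) and add the numerators.
right = Fraction(2, 3) + Fraction(1, 7)    # gives 17/21
```

The contrast also shows why the misconception is seductive: it treats numerator and denominator as independent whole numbers, which is exactly the incomplete conceptual model the concept part of the test probes.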
Distributed Energy Resources at Naval Base Ventura County Building1512: A Sensitivity Analysis
Bailey, Owen C.; Marnay, Chris
2005-06-05
This report is the second of a two-part study by Berkeley Lab of a DER (distributed energy resources) system at Naval Base Ventura County (NBVC). First, a preliminary assessment of the cost effectiveness of distributed energy resources at NBVC Building 1512 was conducted in response to the base's request for design assistance to the Federal Energy Management Program (Bailey and Marnay, 2004). That report contains a detailed description of the site and the DER-CAM (Consumer Adoption Model) parameters used. This second report contains sensitivity analyses of key parameters in the DER system model of Building 1512 at NBVC and additionally considers the potential for absorption-powered refrigeration. The prior analysis found that, under the current tariffs, and given assumptions about the performance and structure of building energy loads and available generating technology characteristics, installing a 600 kW DER system with absorption cooling and heat recovery capabilities could deliver cost savings of about 14 percent, worth $55,000 per year. However, under current conditions, this study also suggested that significant savings could be obtained if Building 1512 changed from its current direct access contract to an SCE TOU-8 (Southern California Edison time-of-use tariff number 8) rate without installing a DER system. Evaluated on this tariff, the potential savings from installation of a DER system would be about 4 percent of the total bill, or $16,000 per year.
Strong, Mark; Oakley, Jeremy E.; Brennan, Alan
2013-01-01
The partial expected value of perfect information (EVPI) quantifies the expected benefit of learning the values of uncertain parameters in a decision model. Partial EVPI is commonly estimated via a 2-level Monte Carlo procedure in which parameters of interest are sampled in an outer loop, and then conditional on these, the remaining parameters are sampled in an inner loop. This is computationally demanding and may be difficult if correlation between input parameters results in conditional distributions that are hard to sample from. We describe a novel nonparametric regression-based method for estimating partial EVPI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method is applicable in a model of any complexity and with any specification of input parameter distribution. We describe the implementation of the method via 2 nonparametric regression modeling approaches, the Generalized Additive Model and the Gaussian process. We demonstrate in 2 case studies the superior efficiency of the regression method over the 2-level Monte Carlo method. R code is made available to implement the method. PMID:24246566
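The single-loop regression idea can be sketched on a toy two-option decision model, using a polynomial fit as a stand-in for the paper's GAM or Gaussian-process smoother; the model, sample size, and distributions below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

# PSA sample for a hypothetical 2-option decision: net benefit of option 1
# depends on an uncertain parameter theta; option 0 is a zero baseline.
n = 20000
theta = rng.normal(0.0, 1.0, n)
nb0 = np.zeros(n)                              # baseline net benefit
nb1 = 2.0 * theta + rng.normal(0.0, 1.0, n)    # option 1, noisy in theta

# Regression step: estimate the conditional expectation E[NB1 | theta]
# from the PSA sample alone (cubic fit standing in for a GAM smoother).
coef = np.polynomial.polynomial.polyfit(theta, nb1, 3)
g1 = np.polynomial.polynomial.polyval(theta, coef)

# Partial EVPI = E[max over options of the fitted conditional mean]
#              - max over options of the overall mean.
evpi_partial = np.maximum(g1, nb0).mean() - max(nb0.mean(), nb1.mean())
```

For this linear-Gaussian toy the analytic answer is 2/sqrt(2*pi), roughly 0.80, so the single-pass estimate can be checked directly; no inner Monte Carlo loop over the remaining parameters is ever run.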
Parameter sensitivity analysis of tailored-pulse loading stimulation of Devonian gas shale
Barbour, T.G.; Mihalik, G.R.
1980-11-01
An evaluation of three tailored-pulse loading parameters has been undertaken to assess their importance in gas well stimulation technology. This numerical evaluation was performed using STEALTH finite-difference codes and was intended to provide a measure of the effects of various tailored-pulse load configurations on fracture development in Devonian gas shale. The three parameters considered in the sensitivity analysis were loading rate, decay rate, and sustained peak pressure. By varying these parameters in six computations and comparing the relative differences in fracture initiation and propagation, the following conclusions were drawn: (1) Fracture initiation is directly related to the loading rate applied to the wellbore wall. Loading rates of 10, 100 and 1000 GPa/sec were modeled. (2) If yielding of the rock can be prevented or minimized by maintaining low peak pressures in the wellbore, increasing the pulse loading rate, to say 10,000 GPa/sec or more, should initiate additional multiple fractures. (3) Fracture initiation does not appear to be related to the tailored-pulse decay rate. Fracture extension may be influenced by the rate of decay: the slower the decay rate, the longer the crack extension. (4) Fracture initiation does not appear to be improved by a high-pressure plateau in the tailored pulse. Fracture propagation may be enhanced if the maintained wellbore pressure plateau is of sufficient magnitude to extend the range of the tangential tensile stresses to greater radial distances. 26 figures, 2 tables.