Sensitivity analysis of geometric errors in additive manufacturing medical models.
Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian
2015-03-01
Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised by errors introduced in each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained by modifying the parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. the models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced by choosing better triangulation and printing resolutions, but there is an important need to modify some of the standard building processes, particularly the segmentation algorithms.
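The global and local error indexes described above can be illustrated with a toy sketch (not the authors' actual pipeline): two surfaces sampled as point clouds, a symmetric mean distance and a Hausdorff-style maximum as global indexes, and per-point distances as the local index. The inflated-sphere example below is a hypothetical stand-in for an overestimating build chain.

```python
import numpy as np

def nearest_distances(a, b):
    """For each point of a (N,3), distance to the nearest point of b (M,3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # brute force
    return d.min(axis=1)

def geometric_error_indexes(reference, model):
    """Global indexes (mean symmetric distance, Hausdorff-style maximum) plus
    the local per-point distances that can be mapped onto the surface."""
    d_rm = nearest_distances(reference, model)
    d_mr = nearest_distances(model, reference)
    return {
        "mean_error": float((d_rm.mean() + d_mr.mean()) / 2),
        "hausdorff": float(max(d_rm.max(), d_mr.max())),
        "local": d_rm,
    }

# toy case: a sampled unit sphere vs. a 5% inflated copy (an "overestimated" model)
rng = np.random.default_rng(0)
p = rng.normal(size=(500, 3))
sphere = p / np.linalg.norm(p, axis=1, keepdims=True)
idx = geometric_error_indexes(sphere, 1.05 * sphere)
```

Mapping the `local` distances back onto the mesh is what reveals the concentration of error at high-curvature regions that the abstract reports.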
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings focused primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Integrated Sensitivity Analysis Workflow
Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.; Clay, Robert L.
2014-08-01
Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
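The sensitivities and elasticities discussed above have standard closed forms for a projection matrix: s_ij = ∂λ/∂a_ij = v_i w_j / ⟨v,w⟩ with w and v the right and left dominant eigenvectors, and e_ij = (a_ij/λ) s_ij. A minimal sketch (the 2×2 matrix is a hypothetical two-stage life cycle, not the paper's killer whale data):

```python
import numpy as np

def dominant_pair(M):
    """Dominant eigenvalue of M and its (real) eigenvector."""
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)
    return vals[k].real, vecs[:, k].real

def lambda_sensitivity(A):
    """Sensitivity s_ij = v_i w_j / <v,w> and elasticity e_ij = (a_ij/lam) s_ij
    of the finite rate of increase lam (dominant eigenvalue) of projection matrix A."""
    lam, w = dominant_pair(A)      # right eigenvector: stable stage distribution
    _, v = dominant_pair(A.T)      # left eigenvector: reproductive values
    S = np.outer(v, w) / (v @ w)   # sensitivities (absolute scale)
    E = (A / lam) * S              # elasticities (proportional scale); these sum to 1
    return lam, S, E

# hypothetical 2-stage life cycle (juvenile, adult) -- illustrative numbers only
A = np.array([[0.0, 1.5],    # adult fecundity
              [0.3, 0.9]])   # juvenile survival, adult survival
lam, S, E = lambda_sensitivity(A)
```

The fact that elasticities sum to one across all matrix entries, while sensitivities do not, is one face of the scaling issue the abstract describes: the two measures rank parameters differently.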
Interference and Sensitivity Analysis
VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.; Halloran, M. Elizabeth
2014-01-01
Causal inference with interference is a rapidly growing area. The literature has begun to relax the “no-interference” assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of one person's vaccination on protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted. PMID:25620841
Sensitivity testing and analysis
Neyer, B.T.
1991-01-01
New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
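The likelihood machinery underlying such go/no-go analyses can be sketched with a normal (probit) threshold model fit by a crude grid search; Neyer's actual adaptive test design and likelihood-ratio confidence regions are considerably more elaborate, so treat this as a minimal illustration with made-up data:

```python
import numpy as np
from math import erf, sqrt

def log_likelihood(mu, sigma, levels, outcomes):
    """Go/no-go data under a normal threshold model: P(fire at x) = Phi((x-mu)/sigma)."""
    z = (np.asarray(levels) - mu) / sigma
    p = 0.5 * (1.0 + np.array([erf(v / sqrt(2.0)) for v in z]))
    p = np.clip(p, 1e-12, 1 - 1e-12)
    y = np.asarray(outcomes)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def mle_grid(levels, outcomes, mus, sigmas):
    """Crude grid-search MLE of (mu, sigma); real tools use proper optimizers."""
    _, mu_hat, sigma_hat = max((log_likelihood(m, s, levels, outcomes), m, s)
                               for m in mus for s in sigmas)
    return mu_hat, sigma_hat

# simulated test series: true mu = 10, sigma = 1
rng = np.random.default_rng(3)
thresholds = rng.normal(10.0, 1.0, 200)
levels = rng.uniform(7.0, 13.0, 200)
outcomes = (thresholds <= levels).astype(int)
mu_hat, sigma_hat = mle_grid(levels, outcomes,
                             np.linspace(8.0, 12.0, 81), np.linspace(0.4, 2.0, 33))
```

In an adaptive design like Neyer's, each new test level would be chosen to sharpen this same likelihood surface in both μ and σ rather than fixed in advance.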
New sensitivity analysis attack
NASA Astrophysics Data System (ADS)
El Choubassi, Maha; Moulin, Pierre
2005-03-01
The sensitivity analysis attacks by Kalker et al. constitute a known family of watermark removal attacks exploiting a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. In this paper, a new attack on spread spectrum schemes is designed. We first examine one of Kalker's algorithms and prove its convergence using the law of large numbers, which gives more insight into the problem. Next, a new algorithm is presented and compared to existing ones. Various detection algorithms are considered including correlation detectors and normalized correlation detectors, as well as other, more complicated algorithms. Our algorithm is noniterative and requires at most n+1 operations, where n is the dimension of the signal. Moreover, the new approach directly estimates the watermark by exploiting the simple geometry of the detection boundary and the information leaked by the detector.
Recent developments in structural sensitivity analysis
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Adelman, Howard M.
1988-01-01
Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
Nitrogen Addition Enhances Drought Sensitivity of Young Deciduous Tree Species
Dziedek, Christoph; Härdtle, Werner; von Oheimb, Goddert; Fichtner, Andreas
2016-01-01
Understanding how trees respond to global change drivers is central to predicting changes in forest structure and functions. Although there is evidence on the mode of nitrogen (N) and drought (D) effects on tree growth, our understanding of the interplay of these factors is still limited. Because mixtures are expected to be less sensitive to global change than monocultures, we aimed to investigate the combined effects of N addition and D on the productivity of three tree species (Fagus sylvatica, Quercus petraea, Pseudotsuga menziesii) in relation to functionally diverse species mixtures, using data from a 4-year field experiment in Northwest Germany. Here we show that species mixing can mitigate the negative effects of combined N fertilization and D events, but the community response is mainly driven by the combination of certain traits rather than the tree species richness of a community. For beech, we found that negative effects of D on growth rates were amplified by N fertilization (i.e., combined treatment effects were non-additive), while for oak and fir, the simultaneous effects of N and D were additive. Beech and oak were identified as most sensitive to combined N+D effects, with a strong size-dependency observed for beech, suggesting that the negative impact of N+D becomes stronger with time as beech grows larger. As a consequence, the net biodiversity effect declined at the community level, which can be mainly assigned to a distinct loss of complementarity in beech-oak mixtures. This pattern, however, was not evident in the other species mixtures, indicating that neighborhood composition (i.e., trait combination), but not tree species richness, mediated the relationship between tree diversity and treatment effects on tree growth. Our findings point to the importance of the qualitative role (‘trait portfolio’) that biodiversity plays in determining the resistance of diverse tree communities to environmental changes. As such, they provide
Additional EIPC Study Analysis. Final Report
Hadley, Stanton W; Gotham, Douglas J.; Luciani, Ralph L.
2014-12-01
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 14 topics was developed for further analysis. This paper brings together the earlier interim reports of the first 13 topics plus one additional topic into a single final report.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
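The advantage of smoothing-based sensitivity measures over linear regression can be sketched with a crude binned-mean smoother standing in for LOESS or an additive model (the test model and function names below are illustrative, not from the paper): when a model output depends on an input nonlinearly but symmetrically, the linear fit sees nothing while the smoother recovers the dependence.

```python
import numpy as np

def r2_linear(x, y):
    """Variance explained by a straight-line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

def r2_smooth(x, y, bins=20):
    """Variance explained by a binned-mean smoother (a crude nonparametric fit)."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    fitted = np.array([y[idx == b].mean() for b in range(bins)])[idx]
    return 1.0 - (y - fitted).var() / y.var()

# test problem: y depends strongly, but nonlinearly, on x1 and weakly on x2
rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000)
y = x1**2 + 0.1 * x2 + rng.normal(scale=0.05, size=2000)
```

Here `r2_linear(x1, y)` is near zero (the quadratic dependence has no linear component), while `r2_smooth(x1, y)` correctly flags x1 as the dominant input, mirroring the abstract's point about nonlinear input-output relationships.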
Lombardi, D.P.
1992-08-01
The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most likely and worst case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.
SEP thrust subsystem performance sensitivity analysis
NASA Technical Reports Server (NTRS)
Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.
1973-01-01
This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.
Additional Investigations of Ice Shape Sensitivity to Parameter Variations
NASA Technical Reports Server (NTRS)
Miller, Dean R.; Potapczuk, Mark G.; Langhals, Tammy J.
2006-01-01
A second parameter sensitivity study was conducted at the NASA Glenn Research Center's Icing Research Tunnel (IRT) using a 36 in. chord (0.91 m) NACA-0012 airfoil. The objective of this work was to further investigate the feasibility of using ice shape feature changes to define requirements for the simulation and measurement of SLD and appendix C icing conditions. A previous study concluded that it was feasible to use changes in ice shape features (e.g., ice horn angle, ice horn thickness, and ice shape mass) to detect relatively small variations in icing spray condition parameters (LWC, MVD, and temperature). The subject of this current investigation extends the scope of this previous work, by also examining the effect of icing tunnel spray-bar parameter variations (water pressure, air pressure) on ice shape feature changes. The approach was to vary spray-bar water pressure and air pressure, and then evaluate the effects of these parameter changes on the resulting ice shapes. This paper will provide a description of the experimental method, present selected experimental results, and conclude with an evaluation of these results.
Comparative Sensitivity Analysis of Muscle Activation Dynamics
Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas
2015-01-01
We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
[Sensitivity analysis in health investment projects].
Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C
1994-01-01
This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.
Using Dynamic Sensitivity Analysis to Assess Testability
NASA Technical Reports Server (NTRS)
Voas, Jeffrey; Morell, Larry; Miller, Keith
1990-01-01
This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
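The idea can be sketched as follows; this is a hedged toy, not the authors' tool. One ingredient of such sensitivity estimates is propagation: perturb the internal state at a program location over many random inputs and record how often the output actually changes. Locations with low scores are the "insensitive" ones where random black box testing is unlikely to expose a fault. The `program` below is entirely hypothetical.

```python
import random

def program(x, perturb=None):
    """Toy program; `perturb` optionally corrupts the internal state at one location."""
    a = x * x + 1                      # location of interest: the value of `a`
    if perturb is not None:
        a = perturb(a)
    return 1 if a % 7 < 3 else 0       # downstream logic masks many state changes

def propagation_estimate(trials=2000, seed=0):
    """Fraction of random state perturbations at the location that alter the output."""
    rng = random.Random(seed)
    changed = 0
    for _ in range(trials):
        x = rng.randrange(0, 1000)
        clean = program(x)
        faulty = program(x, perturb=lambda a: a + rng.randrange(1, 7))
        changed += clean != faulty
    return changed / trials
```

Because the modulo test downstream absorbs many perturbations, the estimate lands strictly between 0 and 1, quantifying how strongly faults at that location would show up in input/output behavior.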
Stiff DAE integrator with sensitivity analysis capabilities
Serban, R.
2007-11-26
IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
Scalable analysis tools for sensitivity analysis and UQ (3160) results.
Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.
2009-09-01
The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
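The complex-variable approach used to compute the DYMORE structural sensitivities has a simple generic form, the complex-step derivative: evaluate the function at a point perturbed by a tiny imaginary step and read the derivative off the imaginary part. Illustrated here on an arbitrary scalar function, not the actual solver:

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    """Complex-step derivative: f'(x) ~ Im(f(x + i*h)) / h.
    No subtractive cancellation occurs, so h can be made tiny and the result
    is accurate to machine precision -- useful for verifying adjoint results."""
    return np.imag(f(x + 1j * h)) / h

# example: a smooth function whose derivative is known analytically
f = lambda x: np.exp(x) * np.sin(3.0 * x)
dfdx = complex_step(f, 0.7)
exact = np.exp(0.7) * (np.sin(2.1) + 3.0 * np.cos(2.1))
```

This machine-precision property is exactly what makes complex-variable sensitivities a trustworthy reference for verifying adjoint-based sensitivities, as done for the FUN3D/DYMORE interfaces.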
A review of sensitivity analysis techniques
Hamby, D.M.
1993-12-31
Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
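Two of the reviewed techniques, one-at-a-time variation and regression-based standardized coefficients, can be sketched in a few lines (the toy model and sample sizes are illustrative, not from the review):

```python
import numpy as np

def one_at_a_time(model, x0, deltas):
    """Vary each input individually about a base point; return the output changes."""
    base = model(x0)
    effects = []
    for i, d in enumerate(deltas):
        x = x0.copy()
        x[i] += d
        effects.append(model(x) - base)
    return np.array(effects)

def standardized_regression_coeffs(X, y):
    """SRC_i = b_i * std(x_i) / std(y) from a least-squares fit: a regression-based
    sensitivity measure appropriate for near-linear models."""
    A = np.column_stack([X, np.ones(len(X))])
    b = np.linalg.lstsq(A, y, rcond=None)[0][:-1]
    return b * X.std(axis=0) / y.std()

model = lambda x: 3.0 * x[0] + 0.5 * x[1]   # toy model with a dominant first input
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
y = np.array([model(x) for x in X])
src = standardized_regression_coeffs(X, y)
```

For this linear model both measures agree that the first input dominates; the review's point is that they diverge once the model is nonlinear or the inputs interact.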
Iterative methods for design sensitivity analysis
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Yoon, B. G.
1989-01-01
A numerical method is presented for design sensitivity analysis, using an iterative-method reanalysis of the structure generated by a small perturbation in the design variable; a forward-difference scheme is then employed to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivity, as well as for eigenvalue and eigenvector sensitivity, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
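A minimal sketch of the approach under stated assumptions: a hypothetical two-spring stiffness system, a Jacobi iterative solver warm-started from the unperturbed solution (so the reanalysis after a small design change converges quickly), and a forward difference for the design sensitivity.

```python
import numpy as np

def jacobi_solve(K, f, u0, tol=1e-12, max_iter=10000):
    """Jacobi iteration for K u = f; warm start u0 makes reanalysis after a
    small design perturbation cheap, since u0 is already nearly converged."""
    D = np.diag(K)
    R = K - np.diag(D)
    u = u0.copy()
    for _ in range(max_iter):
        u_new = (f - R @ u) / D
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u

def stiffness(k1, k2):
    """Two springs in series, fixed at one end."""
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

f = np.array([0.0, 1.0])                              # unit load at the free end
k1, k2, dk = 2.0, 1.0, 1e-6
u = jacobi_solve(stiffness(k1, k2), f, np.zeros(2))   # baseline analysis
u_pert = jacobi_solve(stiffness(k1 + dk, k2), f, u)   # reanalysis, warm-started
dudk1 = (u_pert - u) / dk                             # forward-difference sensitivity
```

For this system the exact sensitivity is du/dk1 = -F/k1² = -0.25 for both displacements, which the forward difference reproduces to several digits.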
NASA Technical Reports Server (NTRS)
Kirshen, N.; Mill, T.
1973-01-01
The effect of formulation components and the addition of fire retardants on the impact sensitivity of Viton B fluoroelastomer in liquid oxygen was studied with the objective of developing a procedure for reliably reducing this sensitivity. Component evaluation, carried out on more than 40 combinations of components and cure cycles, showed that almost all the standard formulation agents, including carbon, MgO, Diak-3, and PbO2, will sensitize the Viton stock either singly or in combinations, some combinations being much more sensitive than others. Cure and postcure treatments usually reduced the sensitivity of a given formulation, often dramatically, but no formulated Viton was as insensitive as the pure Viton B stock. Coating formulated Viton with a thin layer of pure Viton gave some indication of reduced sensitivity, but additional tests are needed. It is concluded that sensitivity in formulated Viton arises from a variety of sources, some physical and some chemical in origin. Elemental analyses for all the formulated Vitons are reported as are the results of a literature search on the subject of LOX impact sensitivity.
Structural sensitivity analysis: Methods, applications and needs
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.
1984-01-01
Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.
Sensitivity analysis, optimization, and global critical points
Cacuci, D.G.
1989-11-01
The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has been undoubtedly felt by many of the nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments for both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.
Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models
2002-03-01
... Method (Satisficing Method), Disjunctive Method, Standard Level, Elimination by Aspects, Lexicographic Semiorder, Lexicographic Method, Ordinal Weighted Sum ... framework for sensitivity analysis of hierarchical additive value models and standardizes the sensitivity analysis notation and terminology. Finally ...
A global analysis of soil acidification caused by nitrogen addition
NASA Astrophysics Data System (ADS)
Tian, Dashuan; Niu, Shuli
2015-02-01
Nitrogen (N) deposition-induced soil acidification has become a global problem. However, the response patterns of soil acidification to N addition and the underlying mechanisms remain far from clear. Here, we conducted a meta-analysis of 106 studies to reveal global patterns of soil acidification in response to N addition. We found that N addition significantly reduced soil pH by 0.26 on average globally. However, the responses of soil pH varied with ecosystem type, N addition rate, N fertilization form, and experimental duration. Soil pH decreased most in grasslands, whereas no significant decrease was observed in boreal forests. Soil pH decreased linearly with N addition rate. Addition of urea and NH4NO3 contributed more to soil acidification than NH4-form fertilizer. When experimental duration was longer than 20 years, the effects of N addition on soil acidification diminished. Environmental factors such as initial soil pH, soil carbon and nitrogen content, precipitation, and temperature all influenced the responses of soil pH. Base cations (Ca2+, Mg2+ and K+) were critically important in buffering against N-induced soil acidification at the early stage. However, N addition has shifted global soils into the Al3+ buffering phase. Overall, this study indicates that acidification in global soils is very sensitive to N deposition, which is greatly modified by biotic and abiotic factors. Global soils are now at a buffering transition from base cations (Ca2+, Mg2+ and K+) to non-base cations (Mn2+ and Al3+). This calls attention to the limitation of base cations and the toxic impact of non-base cations in terrestrial ecosystems under N deposition.
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) are playing more important roles to quantify uncertainties and realize high fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical parameters, the simulation is allowed
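The forward sensitivity method itself can be shown on a one-parameter ODE (a deliberately minimal sketch, not the reactor-system implementation): augment the state with the sensitivity s = ∂y/∂p, which satisfies its own ODE obtained by differentiating the governing equation with respect to p, and integrate both together.

```python
import numpy as np

def forward_sensitivity(p, y0=1.0, t_end=1.0, n=10000):
    """Integrate y' = f(y,p) = -p*y together with its forward sensitivity
    s = dy/dp, which obeys s' = (df/dy)*s + df/dp = -p*s - y.
    Explicit Euler for clarity; production codes use stiff integrators."""
    dt = t_end / n
    y, s = y0, 0.0                                    # s(0) = dy0/dp = 0
    for _ in range(n):
        y, s = y + dt * (-p * y), s + dt * (-p * s - y)
    return y, s

# analytic check: y(t) = exp(-p t), so s(t) = dy/dp = -t exp(-p t)
y1, s1 = forward_sensitivity(p=2.0)
```

Treating the step size dt itself as one more sensitivity parameter, as the abstract proposes, would add a further equation of the same form and let discretization error be compared directly against the physical-parameter sensitivities.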
[Total analysis of organic rubber additives].
He, Wen-Xuan; Robert, Shanks; You, Ye-Ming
2010-03-01
In the present paper, after medium-pressure chromatographic separation under both normal-phase and reversed-phase conditions, the organic additives in ethylene-propylene rubber were identified by infrared spectrometry. At the same time, a solid-phase extraction column was used to retain the main component of the organic additives, fuel oil, to prevent it from interfering with minor compounds, and the other organic additives were separated and analysed by GC/MS. In addition, the remaining active compound, benzoyl peroxide, was identified by GC/MS through direct analysis of the acetone extract. Using the above-mentioned techniques, softening agents (fuel oil, plant oil and phthalate), a curing agent (benzoyl peroxide), vulcanizing accelerators (2-mercaptobenzothiazole, ethyl thiuram and butyl thiuram), and antiaging agents (2,6-di-tert-butyl-4-methylphenol and styrenated phenol) in ethylene-propylene rubber were identified. Although the technique was established for the ethylene-propylene rubber system, it can be used for other rubber systems.
Coal Transportation Rate Sensitivity Analysis
2005-01-01
On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.
NASA Technical Reports Server (NTRS)
Smalheer, C. V.
1973-01-01
The chemistry of lubricant additives is discussed to show what the additives are chemically and what functions they perform in the lubrication of various kinds of equipment. Current theories regarding the mode of action of lubricant additives are presented. The additive groups discussed include the following: (1) detergents and dispersants, (2) corrosion inhibitors, (3) antioxidants, (4) viscosity index improvers, (5) pour point depressants, and (6) antifouling agents.
Seo, Sujin; Zhou, Xiangfei; Liu, Gang Logan
2016-07-01
Plasmonic substrates have fixed sensitivity once the geometry of the structure is defined. In order to improve the sensitivity, significant research effort has been focused on designing new plasmonic structures, which involves high fabrication costs; however, a method is reported for improving sensitivity not by redesigning the structure but by simply assembling plasmonic nanoparticles (NPs) near the evanescent field of the underlying 3D plasmonic nanostructure. Here, a nanoscale Lycurgus cup array (nanoLCA) is employed as a base colorimetric plasmonic substrate and an assembly template. Compared to the nanoLCA, the NP assembled nanoLCA (NP-nanoLCA) exhibits much higher sensitivity for both bulk refractive index sensing and biotin-streptavidin binding detection. The limit of detection of the NP-nanoLCA is at least ten times smaller when detecting biotin-streptavidin conjugation. The numerical calculations confirm the importance of the additive plasmon coupling between the NPs and the nanoLCA for a denser and stronger electric field in the same 3D volumetric space. Tunable sensitivity is accomplished by controlling the number of NPs in each nanocup, or the number density of the hot spots. This simple yet scalable and cost-effective method of using additive heterogeneous plasmon coupling effects will benefit various chemical, medical, and environmental plasmon-based sensors.
Sensitivity analysis for solar plates
NASA Technical Reports Server (NTRS)
Aster, R. W.
1986-01-01
Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relationship to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; an amorphous silicon thin film; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.
Adjoint sensitivity analysis of an ultrawideband antenna
Stephanson, M B; White, D A
2011-07-28
The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the Adjoint Method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
2014-08-10
Michelle L. Pantoya, Michael A. Daniels
[Extraction residue from a standard report form; recoverable fragments: a paper titled "...of composite energetic materials using carbon nanotube additives" by Kade H. Poper, Eric S. Collins, Michelle L. Pantoya and Michael A. Daniels, citing Thermochim. Acta 451 (1-2) (2006) and work by Chelsea Weir, Michelle L. Pantoya and Michael Daniels on electrostatic discharge sensitivity and electrical conductivity.]
The Effect of Gaseous Additives on Dynamic Pressure Output and Ignition Sensitivity of Nanothermites
NASA Astrophysics Data System (ADS)
Puszynski, Jan; Doorenbos, Zac; Walters, Ian; Redner, Paul; Kapoor, Deepak; Swiatkiewicz, Jacek
2011-06-01
This contribution addresses important combustion characteristics of nanothermite systems. In this research the following nanothermites were investigated: a) Al-Bi2O3, b) Al-Fe2O3, and c) Al-Bi2O3-Fe2O3. The effect of various gasifying additives (such as nitrocellulose (NC) and cellulose acetate butyrate (CAB)) as well as reactant stoichiometry, reactant particle size and shape on processability, ignition delay time and dynamic pressure outputs at different locations in a combustion chamber will be presented. In addition, this contribution will report electrostatic and friction sensitivities of standard and modified nanothermites.
GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis
NASA Astrophysics Data System (ADS)
Kennedy, Christopher Brandon
model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on the user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.
Influence of electrolyte co-additives on the performance of dye-sensitized solar cells
2011-01-01
The presence of specific chemical additives in the redox electrolyte results in an efficient increase of the photovoltaic performance of dye-sensitized solar cells (DSCs). The most effective additives are 4-tert-butylpyridine (TBP), N-methylbenzimidazole (NMBI) and guanidinium thiocyanate (GuNCS) that are adsorbed onto the photoelectrode/electrolyte interface, thus shifting the semiconductor's conduction band edge and preventing recombination with triiodides. In a comparative work, we investigated in detail the action of TBP and NMBI additives in ionic liquid-based redox electrolytes with varying iodine concentrations, in order to extract the optimum additive/I2 ratio for each system. Different optimum additive/I2 ratios were determined for TBP and NMBI, despite the fact that both generally work in a similar way. Further addition of GuNCS in the optimized electrolytic media causes significant synergistic effects, the action of GuNCS being strongly influenced by the nature of the corresponding co-additive. Under the best operation conditions, power conversion efficiencies as high as 8% were obtained. PMID:21711833
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes how the first statistical moments of model predictions vary with the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
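The variance-decomposition machinery the abstract builds on can be illustrated with a plain Sobol' first-order index estimator. This is a hedged sketch of the standard "pick-freeze" (Saltelli-style) estimator on a toy additive model; it does not reproduce the paper's extension to stochastic reaction channels:

```python
import random

def sobol_first_order(f, d, n=100_000, seed=0):
    """Estimate first-order Sobol' indices of f over independent U(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((v - mean) ** 2 for v in fA) / n
    indices = []
    for i in range(d):
        # AB_i: matrix A with column i taken from B ("pick-freeze").
        fABi = [f(A[j][:i] + [B[j][i]] + A[j][i + 1:]) for j in range(n)]
        # Partial variance V_i ~ E[f(B) * (f(AB_i) - f(A))]
        Vi = sum(fB[j] * (fABi[j] - fA[j]) for j in range(n)) / n
        indices.append(Vi / var)
    return indices

# For f = 3*x1 + x2, the exact first-order indices are S1 = 9/10, S2 = 1/10.
S = sobol_first_order(lambda x: 3.0 * x[0] + 1.0 * x[1], d=2)
```

For stochastic simulators, as in the paper, f would itself be random; the paper's contribution is to treat the driving Poisson processes as additional inputs in this same decomposition.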
Sensitivity analysis of uncertainty in model prediction.
Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael
2008-03-27
Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.
Design sensitivity analysis of boundary element substructures
NASA Technical Reports Server (NTRS)
Kane, James H.; Saigal, Sunil; Gallagher, Richard H.
1989-01-01
The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
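The "exact condensation" referred to above is, at its linear-algebra core, a Schur complement: interior unknowns are eliminated exactly, leaving a reduced system on the retained (design) unknowns. A toy sketch on a dense 3x3 system (the partitioning and values are illustrative, not the paper's BEA implementation):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Partition K x = f into interior (i) and boundary/design (b) unknowns.
K = [[4.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 2.0]]
f = [1.0, 2.0, 3.0]
ni = 2  # first two unknowns are "interior", last one is "boundary"

Kii = [row[:ni] for row in K[:ni]]
Kib = [row[ni:] for row in K[:ni]]
Kbi = [row[:ni] for row in K[ni:]]
Kbb = [row[ni:] for row in K[ni:]]
nb = len(Kbb)

# Columns of Kii^{-1} Kib, and the vector Kii^{-1} f_i.
Y = [solve(Kii, [Kib[r][c] for r in range(ni)]) for c in range(nb)]
z = solve(Kii, f[:ni])

# Condensed system: (Kbb - Kbi Kii^{-1} Kib) x_b = f_b - Kbi Kii^{-1} f_i
S = [[Kbb[r][c] - sum(Kbi[r][k] * Y[c][k] for k in range(ni))
      for c in range(nb)] for r in range(nb)]
g = [f[ni + r] - sum(Kbi[r][k] * z[k] for k in range(ni)) for r in range(nb)]

x_b = solve(S, g)          # boundary solution from the reduced system
x_full = solve(K, f)       # reference: solve the full system directly
```

The condensation is exact: `x_b` matches the boundary component of `x_full` to machine precision, which is why the reduced model can be iterated on in the optimization loop without approximation error.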
NIR sensitivity analysis with the VANE
NASA Astrophysics Data System (ADS)
Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.
2016-05-01
Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.
Sensitive chiral analysis by CE: an update.
Sánchez-Hernández, Laura; Crego, Antonio Luis; Marina, María Luisa; García-Ruiz, Carmen
2008-01-01
A general view of the different strategies used in recent years to enhance the detection sensitivity in chiral analysis by CE is provided in this article. With this purpose, and in order to update the previous review by García-Ruiz et al., the articles that appeared on this subject from January 2005 to March 2007 are considered. Three main strategies were employed to increase the detection sensitivity in chiral analysis by CE: (i) the use of off-line sample treatment techniques, (ii) the employment of in-capillary preconcentration techniques based on electrophoretic principles, and (iii) the use of alternative detection systems to the widely employed on-column UV-Vis absorption detection. Combinations of two or three of the above-mentioned strategies gave rise to adequate concentration detection limits down to 10(-10) M, enabling enantiomer analysis in a variety of real samples including complex biological matrices.
Automation of primal and sensitivity analysis of transient coupled problems
NASA Astrophysics Data System (ADS)
Korelc, Jože
2009-10-01
The paper describes a hybrid symbolic-numeric approach to automation of primal and sensitivity analysis of computational models formulated and solved by the finite element method. The necessary apparatus for the automation of steady-state, steady-state coupled, transient and transient coupled problems is introduced as a combination of a symbolic system, an automatic differentiation (AD) technique and automatic code generation. For this purpose the paper extends the classical formulation of AD by additional operators necessary for a highly abstract description of primal and sensitivity analysis of the typical computational models. An appropriate abstract description for the fully implicit primal and sensitivity analysis of hyperelastic and elasto-plastic problems and a symbolic input for the generation of the necessary user subroutines for a two-dimensional, hyperelastic finite element are presented at the end.
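The automatic differentiation ingredient mentioned above can be illustrated with the simplest possible forward-mode AD: dual numbers that carry a value and a derivative through arithmetic. This is a hedged, minimal sketch of the AD idea only; the paper's actual machinery is symbolic and generates code, and the `residual` function here is a made-up toy, not a real finite element residual:

```python
class Dual:
    """Value/derivative pair propagated through + and * (forward-mode AD)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule carried alongside the value.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def residual(u, p):
    # Toy "element residual" R(u; p) = p*u^2 - 2u (hypothetical example)
    return p * u * u + (-2.0) * u

u = Dual(3.0, 1.0)   # seed du/du = 1, so r.dot returns dR/du
p = Dual(0.5)        # parameter held fixed (dot = 0)
r = residual(u, p)
# r.val = 0.5*9 - 6 = -1.5 ; r.dot = dR/du = 2*p*u - 2 = 1.0
```

Seeding `p` with dot = 1 instead would return the parameter sensitivity dR/dp by the same mechanism, which is the basic principle behind automated sensitivity analysis of finite element residuals.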
Nursing-sensitive indicators: a concept analysis
Heslop, Liza; Lu, Sai
2014-01-01
Aim: To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background: The concept of ‘nursing-sensitive indicators’ is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design: Concept analysis. Data sources: Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English-language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods: A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results: The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcers, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion: This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388
Sensitivity Analysis for Oscillating Dynamical Systems
Wilkins, A. Katharina; Tidor, Bruce; White, Jacob; Barton, Paul I.
2012-01-01
Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
Demonstration sensitivity analysis for RADTRAN III
Neuhauser, K S; Reardon, P C
1986-10-01
A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions for accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.
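The elasticity assessed above is a standard dimensionless sensitivity measure: the percentage change in an output per percentage change in an input. A minimal sketch using a central finite difference on a hypothetical dose model (the model and values are invented for illustration; RADTRAN's actual dose models differ):

```python
def elasticity(f, x, i, rel_step=1e-4):
    """Central-difference estimate of d(ln f)/d(ln x_i): the fractional
    change in output per fractional change in input i, at point x."""
    h = x[i] * rel_step
    up, dn = list(x), list(x)
    up[i] += h
    dn[i] -= h
    return (f(up) - f(dn)) / (2.0 * h) * (x[i] / f(x))

# Hypothetical dose model: linear in v[0], square-root in v[1].
dose = lambda v: 2.0 * v[0] * v[1] ** 0.5
e1 = elasticity(dose, [3.0, 4.0], 0)   # exact elasticity is 1.0
e2 = elasticity(dose, [3.0, 4.0], 1)   # exact elasticity is 0.5
```

An elasticity near 1 marks a variable the output tracks proportionally; values near 0 mark variables whose variation barely matters, which is how such an analysis ranks 37 inputs.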
Optimal control concepts in design sensitivity analysis
NASA Technical Reports Server (NTRS)
Belegundu, Ashok D.
1987-01-01
A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.
The Volatility of Data Space: Topology Oriented Sensitivity Analysis
Du, Jing; Ligmann-Zielinska, Arika
2015-01-01
Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, a potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
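The "sensitivity curve" idea above, plotting the sum-of-squares of residuals against a factor multiplier, can be sketched with a deliberately simple toy model (a one-dimensional steady aquifer profile of our own invention, not the calibrated Albany model):

```python
def heads(transmissivity, recharge, n=10):
    """Toy steady 1-D water-level profile: head grows with recharge/T (illustrative)."""
    return [recharge / transmissivity * x * (n - x) for x in range(n)]

observed = heads(100.0, 0.5)   # "observations" from the calibrated factor values

def ssr(multiplier, factor="T"):
    """Sum-of-squares of residuals when one factor is scaled by `multiplier`."""
    T, R = 100.0, 0.5
    sim = heads(T * multiplier, R) if factor == "T" else heads(T, R * multiplier)
    return sum((s - o) ** 2 for s, o in zip(sim, observed))

curve_T = [ssr(m, "T") for m in (0.5, 0.75, 1.0, 1.25, 1.5)]
# SSR is zero at multiplier 1 and grows as the factor departs from its
# calibrated value; a steeper curve marks a more sensitive factor.
```

Comparing the steepness of such curves across factors is what identifies the hydrological data most worth collecting before making management decisions.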
Efficient sensitivity analysis method for chaotic dynamical systems
Liao, Haitao
2016-05-15
The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of both convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
Sensitivity Analysis of Automated Ice Edge Detection
NASA Astrophysics Data System (ADS)
Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien
2016-08-01
The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.
A Sensitivity Analysis of Entry Age Normal Military Retirement Costs.
1983-09-01
A sensitivity analysis of both the individual and aggregate entry age normal actuarial cost models under differing economic, managerial and legal assumptions. In addition to the above, a set of simple estimating equations... Actuarially computed variables are listed since the model uses each paygrade's individual actuarial data (e.g. the life expectancy of a retiring...)
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change of the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and make the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to an actual road network evaluation to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of road networks and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
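The capacity-degradation idea above can be sketched with the widely used BPR (Bureau of Public Roads) volume-delay function; this is our illustrative stand-in, not the paper's traffic utility index:

```python
def bpr_time(flow, capacity, t0=1.0, alpha=0.15, beta=4.0):
    """BPR volume-delay function: link travel time as congestion grows."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def vulnerability(flow, capacity, degradation):
    """Relative travel-time increase when capacity drops by `degradation`
    (a simple link-level vulnerability indicator, hypothetical here)."""
    base = bpr_time(flow, capacity)
    degraded = bpr_time(flow, capacity * (1.0 - degradation))
    return (degraded - base) / base
```

Ranking links by such an indicator, without re-running a full traffic assignment for each degradation scenario, is the kind of efficiency gain the sensitivity-analysis approach targets.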
NASA Astrophysics Data System (ADS)
Kaiser, C.; Solaiman, Z. M.; Kilburn, M. R.; Clode, P. L.; Fuchslueger, L.; Koranda, M.; Murphy, D. V.
2012-04-01
The release of carbon through plant roots to the soil has been recognized as a governing factor for soil microbial community composition and decomposition processes, constituting an important control for ecosystem biogeochemical cycles. Moreover, there is increasing awareness that the flux of recently assimilated carbon from plants to the soil may regulate ecosystem response to environmental change, as the rate of the plant-soil carbon transfer will likely be affected by increased plant C assimilation caused by increasing atmospheric CO2 levels. What has received less attention so far is how sensitive the plant-soil C transfer would be to possible regulations coming from belowground, such as soil N addition or microbial community changes resulting from anthropogenic inputs such as biochar amendments. In this study we investigated the size, rate and sensitivity of the transfer of recently assimilated plant C through the root-soil-mycorrhiza-microbial continuum. Wheat plants associated with arbuscular mycorrhizal fungi were grown in split-boxes which were filled either with soil or a soil-biochar mixture. Each split-box consisted of two compartments separated by a membrane which was penetrable for mycorrhizal hyphae but not for roots. Wheat plants were grown in only one compartment, while the other compartment served as an extended soil volume which was accessible only by mycorrhizal hyphae associated with the plant roots. After the plants had grown for four weeks, we used a double-labeling approach with 13C and 15N in order to investigate interactions between C and N flows in the plant-soil-microorganism system. Plants were subjected to an enriched 13CO2 atmosphere for 8 hours, during which 15NH4 was added to a subset of split-boxes, to either the root-containing or the root-free compartment. Both 13C and 15N fluxes through the plant-soil continuum were monitored over 24 hours by stable isotope methods (13C phospholipid fatty acids by GC-IRMS, 15N/13C in bulk plant ...
Chemistry in Protoplanetary Disks: A Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Vasyunin, A. I.; Semenov, D.; Henning, Th.; Wakelam, V.; Herbst, Eric; Sobolev, A. M.
2008-01-01
We study how uncertainties in the rate coefficients of chemical reactions in the RATE 06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C+, H3+, H2O, NH3, N2H+, and HCNH+. For these species the column densities are not very sensitive to the rate uncertainties, but the abundances in specific regions are. The second group includes CS, CO2, HCO+, H2CO, C2H, CN, HCN, HNC, and other, more complex species, for which high abundances and abundance uncertainties coexist in the same disk region, leading to larger scatters in column densities. However, even for complex and heavy molecules, the dispersion in their column densities is not more than a factor of ~4. We perform a sensitivity analysis of the computed abundances to rate uncertainties and identify those reactions with the most problematic rate coefficients. We conclude that the rate coefficients of about a hundred chemical reactions need to be determined more accurately in order to greatly improve the reliability of modern astrochemical models. This improvement should be an ultimate goal of future laboratory studies and theoretical investigations.
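The random-variation procedure described above can be sketched for a toy two-step network. The chain A → B → C, the nominal rates, the factor-of-two uncertainty and the integration time below are all illustrative assumptions, not values from the RATE 06 study:

```python
import numpy as np

rng = np.random.default_rng(0)

def b_abundance(k1, k2, t=5.0e3, a0=1.0):
    # Closed-form B(t) for the chain A -> B -> C (k1 != k2 assumed)
    return a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

k1_nom, k2_nom = 1.0e-3, 1.0e-4   # nominal rate coefficients (assumed)
factor = 2.0                       # uncertainty: each rate varied within x2 / /2

n = 2000
scatter = []
for _ in range(n):
    k1 = k1_nom * factor ** rng.uniform(-1, 1)   # log-uniform within limits
    k2 = k2_nom * factor ** rng.uniform(-1, 1)
    scatter.append(b_abundance(k1, k2))

scatter = np.array(scatter)
dispersion = scatter.max() / scatter.min()   # spread of the ensemble
print(f"B(t) spread across the ensemble: factor of {dispersion:.1f}")
```

In the study this ensemble spread is what is quantified per species (the "factor of ~4" dispersion in column densities); here it is computed for a single toy abundance.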
LCA data quality: sensitivity and uncertainty analysis.
Guo, M; Murphy, R J
2012-10-01
Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
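A minimal sketch of the kind of screening such a tool performs: bin one dispersed input and estimate the conditional probability of meeting a requirement in each bin; a large spread across bins flags that input as a driving factor. The input name, the synthetic pass/fail model and all numbers below are invented for illustration, not Orion data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical Monte Carlo campaign: one dispersed input and a pass/fail flag
n = 5000
launch_mass = rng.normal(1000.0, 50.0, n)                    # dispersed input
touchdown_err = rng.normal(0, 1, n) + 0.04 * (launch_mass - 1000.0)
passed = np.abs(touchdown_err) < 2.0                         # requirement met?

# Conditional success probability per input quintile
bins = np.quantile(launch_mass, np.linspace(0, 1, 6))
idx = np.clip(np.digitize(launch_mass, bins) - 1, 0, 4)
p_success = np.array([passed[idx == b].mean() for b in range(5)])
influence = p_success.max() - p_success.min()   # simple "driving factor" score
print("per-bin success probability:", np.round(p_success, 3))
print("influence score:", round(influence, 3))
```

An input with no influence would give a flat profile (influence near zero); ranking inputs by this score is one crude analogue of the success-probability measures the paper introduces.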
Svendsen, Claus; Spurgeon, David J.
2016-01-01
A wealth of studies has investigated how chemical sensitivity is affected by temperature, however, almost always under different constant rather than more realistic fluctuating regimes. Here we compared how the nematode Caenorhabditis elegans responds to copper at constant temperatures (8–24°C) and under fluctuating conditions of low (±4°C) and high (±8°C) amplitude (around averages of 12, 16 and 20°C, and of 16°C, respectively). The DEBkiss model was used to interpret effects on energy budgets. Increasing constant temperature from 12–24°C reduced time to first egg, life-span and population growth rates, consistent with temperature-driven metabolic rate change. Responses at 8°C did not, however, accord with this pattern (including a deviation from the Temperature Size Rule), identifying a cold stress effect. High-amplitude and low-amplitude variation around a mean temperature of 12°C impacted reproduction and body size compared to nematodes kept at the matching average constant temperatures. Copper exposure affected reproduction, body size and life-span, and consequently population growth. Sensitivity to copper (EC50 values) was similar at intermediate temperatures (12, 16, 20°C) and higher at 24°C and especially the innately stressful 8°C condition. Temperature variation did not increase copper sensitivity. Indeed, under variable conditions including time at the stressful 8°C condition, sensitivity was reduced. DEBkiss identified increased maintenance costs and increased assimilation as possible mechanisms for the cold and higher-copper-concentration effects. Model analysis of combined variable temperature effects, however, demonstrated no additional joint stressor response. Hence, concerns that exposure to temperature fluctuations may sensitise species to co-stressor effects seem unfounded in this case. PMID:26784453
Sensitivity to food additives, vaso-active amines and salicylates: a review of the evidence.
Skypala, Isabel J; Williams, M; Reeves, L; Meyer, R; Venter, C
2015-01-01
Although there is considerable literature pertaining to IgE- and non-IgE-mediated food allergy, there is a paucity of information on non-immune-mediated reactions to foods, other than metabolic disorders such as lactose intolerance. Food additives and naturally occurring 'food chemicals' have long been reported as having the potential to provoke symptoms in those who are more sensitive to their effects. Diets low in 'food chemicals' gained prominence in the 1970s and 1980s, and their popularity remains, although the evidence of their efficacy is very limited. This review focuses on the available evidence for the role and likely adverse effects of both added and natural 'food chemicals', including benzoate, sulphite, monosodium glutamate, vaso-active or biogenic amines, and salicylate. Studies assessing the efficacy of the restriction of these substances in the diet have mainly been undertaken in adults, but the paper will also touch on the use of such diets in children. The difficulty of reviewing the available evidence is that few of the studies have been controlled and, for many, considerable time has elapsed since their publication. Meanwhile, dietary patterns and habits have changed hugely in the interim, so the conclusions may not be relevant for our current dietary norms. The conclusion of the review is that there may be some benefit in the removal of an additive or a group of foods high in natural food chemicals from the diet for a limited period for certain individuals, provided the diagnostic pathway is followed and the foods are reintroduced back into the diet to assess the efficacy of removal. However, diets involving the removal of multiple additives and food chemicals have the very great potential to lead to nutritional deficiency, especially in the paediatric population. Any dietary intervention, whether for the purposes of diagnosis or management of food allergy or food intolerance, should be adapted to the individual's dietary habits and a suitably
Additional EIPC Study Analysis: Interim Report on High Priority Topics
Hadley, Stanton W
2013-11-01
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 comprised a long-term capacity expansion analysis involving the creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phases 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.
Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)
2015-04-01
ARL-TR-7250, April 2015. US Army Research Laboratory. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill, Weapons and Materials Research Directorate.
Spectrograph sensitivity analysis: an efficient tool for different design phases
NASA Astrophysics Data System (ADS)
Genoni, M.; Riva, M.; Pariani, G.; Aliverti, M.; Moschetti, M.
2016-08-01
In this paper we present an efficient tool developed to perform opto-mechanical tolerance and sensitivity analysis both for the preliminary and final design phases of a spectrograph. With this tool it will be possible to evaluate the effect of mechanical perturbation of each single spectrograph optical element in terms of image stability, i.e. the motion of the echellogram on the spectrograph focal plane, and of image quality, i.e. the spot size of the different echellogram wavelengths. We present the MATLAB-Zemax script architecture of the tool. In addition we present the detailed results concerning its application to the sensitivity analysis of the ESPRESSO spectrograph (the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations which will be soon installed on ESO's Very Large Telescope) in the framework of the incoming assembly, alignment and integration phases.
A Sensitivity Analysis of SOLPS Plasma Detachment
NASA Astrophysics Data System (ADS)
Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration
2016-10-01
Predicting the scrape-off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape-off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models, where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles, respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.
Stormwater quality models: performance and sensitivity analysis.
Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W
2010-01-01
The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
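The simplified MCMC calibration described above can be illustrated with a random-walk Metropolis sampler on a toy power-law washoff model. The model form, parameter values and the flat prior below are assumptions for the sketch, not the models tested in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "observed" event loads from a toy washoff model: load = a * rain^b
rain = rng.uniform(1, 30, 40)
a_true, b_true, sigma = 2.0, 0.7, 1.0
obs = a_true * rain ** b_true + rng.normal(0, sigma, rain.size)

def log_like(theta):
    a, b = theta
    if a <= 0 or b <= 0:          # flat prior restricted to positive parameters
        return -np.inf
    resid = obs - a * rain ** b
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis: accept with probability min(1, L(prop)/L(current))
theta = np.array([1.0, 1.0])
ll = log_like(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, [0.05, 0.02])
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta)
chain = np.array(chain[5000:])          # drop burn-in

print("posterior mean a, b:", chain.mean(axis=0).round(2))
```

The posterior spread of the chain (not shown) is what carries the parameter-uncertainty and parameter-correlation information the paper reports.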
Surface analysis and evaluation of progressive addition lens
NASA Astrophysics Data System (ADS)
Li, Zhiying; Li, Dan
2016-10-01
The progressive addition lens is used increasingly widely because of its advantage of meeting the requirements of distance and near vision at the same time. Starting from the surface equations of the progressive addition lens, combined with evaluation methods for spherical power and cylinder power, the relationship equations between the surface sag and the optical power distribution are derived. According to the requirements on the difference between actual and nominal optical power from the Chinese National Standard, the tolerance analysis and evaluation of a prototype progressive addition surface with an addition of 2.5 m-1 (7.5 m-1, 10 m-1) is given in detail. The tolerance analysis method provides theoretical support for controlling lens processing accuracy, and the processing feasibility of the lens can be evaluated much more reasonably.
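The sag-to-power relationship can be checked numerically on a simple spherical (paraboloid) patch: in the small-slope approximation, second derivatives of the sag give local curvatures, from which mean sphere and cylinder follow. The refractive index, radius and grid below are illustrative assumptions, not the paper's prototype surface:

```python
import numpy as np

n_index = 1.53                    # refractive index (typical assumed value)
R = 0.106                         # surface radius of curvature in metres

# Sag of a paraboloid patch, z = (x^2 + y^2) / (2R)
x = y = np.linspace(-0.01, 0.01, 201)          # 20 mm square patch
X, Y = np.meshgrid(x, y)
Z = (X**2 + Y**2) / (2 * R)

# Second derivatives of the sag approximate the local curvatures
h = x[1] - x[0]
Zxx = np.gradient(np.gradient(Z, h, axis=1), h, axis=1)
Zyy = np.gradient(np.gradient(Z, h, axis=0), h, axis=0)
Zxy = np.gradient(np.gradient(Z, h, axis=1), h, axis=0)

# Mean spherical power and cylinder maps (in m^-1, i.e. dioptres)
sphere = (n_index - 1) * 0.5 * (Zxx + Zyy)
cylinder = (n_index - 1) * np.sqrt((Zxx - Zyy) ** 2 + 4 * Zxy ** 2)

c = 100  # patch centre, away from edge artefacts of np.gradient
print(f"sphere = {sphere[c, c]:.2f} m-1, cylinder = {cylinder[c, c]:.4f} m-1")
# analytic check for a sphere: (n-1)/R = 0.53 / 0.106 = 5.00 m-1, cylinder 0
```

For a real progressive surface the same two maps would show the power progression corridor and the unwanted peripheral astigmatism, which is what the tolerance analysis evaluates.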
Computed Tomography Inspection and Analysis for Additive Manufacturing Components
NASA Technical Reports Server (NTRS)
Beshears, Ronald D.
2016-01-01
Computed tomography (CT) inspection was performed on test articles additively manufactured from metallic materials. Metallic AM and machined wrought alloy test articles with programmed flaws were inspected using a 2MeV linear accelerator based CT system. Performance of CT inspection on identically configured wrought and AM components and programmed flaws was assessed using standard image analysis techniques to determine the impact of additive manufacturing on inspectability of objects with complex geometries.
Sensitivity analysis of distributed volcanic source inversion
NASA Astrophysics Data System (ADS)
Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José
2016-04-01
A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation of analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole dataset (keeping some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and on noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep
Kade H. Poper; Eric S. Collins; Michelle L. Pantoya; Michael Daniels
2014-10-01
Powder energetic materials are highly sensitive to electrostatic discharge (ESD) ignition. This study shows that small concentrations of carbon nanotubes (CNT) added to the highly reactive mixture of aluminum and copper oxide (Al + CuO) significantly reduce ESD ignition sensitivity. CNT act as a conduit for electric energy, bypassing energy buildup and desensitizing the mixture to ESD ignition. The lowest CNT concentration needed to desensitize ignition is 3.8 vol.%, corresponding to percolation, at which the electrical conductivity is 0.04 S/cm. Conversely, added CNT increased the thermal ignition sensitivity of Al + CuO to a hot-wire igniter.
Longitudinal Genetic Analysis of Anxiety Sensitivity
ERIC Educational Resources Information Center
Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.
2012-01-01
Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…
Objective analysis of the ARM IOP data: method and sensitivity
Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H
1999-04-01
Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes, and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular-grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to upper-air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper-air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.
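In the simplest linear, least-squares case the variational constraining step has a closed form: adjust the analyzed profile minimally so its weighted column integral matches an observed budget. Minimizing ||x* - x||^2 subject to w.x* = c gives x* = x + w (c - w.x) / (w.w) by a Lagrange multiplier. The profile, weights and budget value below are hypothetical:

```python
import numpy as np

# Analyzed advective-tendency profile on pressure levels (hypothetical values)
w = np.array([0.3, 0.25, 0.2, 0.15, 0.1])   # layer weights (column integral)
x = np.array([2.0, 1.5, 1.0, 0.5, 0.2])     # first-guess analysis
c = 1.05                                     # column budget from surface/TOA obs

# Minimal (least-squares) adjustment enforcing the column constraint w.x* = c
x_adj = x + w * (c - w @ x) / (w @ w)

print("adjusted profile:", x_adj.round(3))
print("column integral :", round(w @ x_adj, 3))   # equals c by construction
```

The actual procedure constrains several budgets (mass, water, energy, momentum) at once and weights the adjustments by error estimates, but the structure is the same constrained minimization.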
Global sensitivity analysis in wind energy assessment
NASA Astrophysics Data System (ADS)
Tsvetkova, O.; Ouarda, T. B.
2012-12-01
Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute-force method and the best-practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
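A brute-force estimate of a first-order sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y), can be sketched by binning one input of a toy surrogate whose analytic indices are known. The linear model below is an assumption for illustration, not the wind-assessment model of the study:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy surrogate: output depends linearly on three independent uniform inputs.
# Analytic indices: S = [16, 4, 1] / 21 ~= [0.762, 0.190, 0.048]
n = 200_000
x = rng.uniform(0, 1, (n, 3))
y = 4 * x[:, 0] + 2 * x[:, 1] + x[:, 2]

def first_order_index(xi, y, n_bins=50):
    """Brute-force S_i: Var(E[Y | X_i]) / Var(Y), via binning on X_i."""
    bins = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(xi, bins) - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return cond_means.var() / y.var()

s = [first_order_index(x[:, i], y) for i in range(3)]
print("estimated first-order indices:", np.round(s, 3))
```

Ranking inputs by such indices (and, for interactions, by the total-effect indices) is exactly how the "most influential factors for lifetime energy production" are identified in the study; the Sobol'/LHS/PRS comparison concerns how efficiently the underlying samples are generated.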
Optimal Multicomponent Analysis Using the Generalized Standard Addition Method.
ERIC Educational Resources Information Center
Raymond, Margaret; And Others
1983-01-01
Describes an experiment on the simultaneous determination of chromium and magnesium by spectrophotometry, modified to include the Generalized Standard Addition Method computer program, a multivariate calibration method that provides optimal multicomponent analysis in the presence of interference and matrix effects. Provides instructions for…
Analysis of Saccharides by the Addition of Amino Acids
NASA Astrophysics Data System (ADS)
Ozdemir, Abdil; Lin, Jung-Lee; Gillig, Kent J.; Gulfen, Mustafa; Chen, Chung-Hsuan
2016-06-01
In this work, we present the detection sensitivity improvement of electrospray ionization (ESI) mass spectrometry of neutral saccharides in a positive ion mode by the addition of various amino acids. Saccharides of a broad molecular weight range were chosen as the model compounds in the present study. Saccharides provide strong noncovalent interactions with amino acids, and the complex formation enhances the signal intensity and simplifies the mass spectra of saccharides. Polysaccharides provide a polymer-like ESI spectrum with a basic subunit difference between multiply charged chains. The protonated spectra of saccharides are not well identified because of different charge state distributions produced by the same molecules. Depending on the solvent used and other ions or molecules present in the solution, noncovalent interactions with saccharides may occur. These interactions are affected by the addition of amino acids. Amino acids with polar side groups show a strong tendency to interact with saccharides. In particular, serine shows a high tendency to interact with saccharides and significantly improves the detection sensitivity of saccharide compounds.
Sensitivity Analysis of Wing Aeroelastic Responses
NASA Technical Reports Server (NTRS)
Issac, Jason Cherian
1995-01-01
Design for prevention of aeroelastic instability (that is, ensuring the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so that it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight
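The standard analytic eigenvalue sensitivity used in such state-space studies is dlam/dp = y^H (dA/dp) x / (y^H x), with x and y the right and left eigenvectors of the tracked mode. The sketch below applies it to a small illustrative matrix (an assumption, not an actual aeroelastic model) and checks it against a finite difference:

```python
import numpy as np

# Illustrative 3x3 state-space matrix A(p) with a stiffness-like parameter p
def A(p):
    return np.array([[0.0, 1.0, 0.0],
                     [-p, -0.1, 0.3],
                     [0.2, 0.0, -1.0]])

p0, dp = 2.0, 1e-5
lam, V = np.linalg.eig(A(p0))
L = np.linalg.inv(V)             # row k of L is the left eigenvector of lam[k]
dA = (A(p0 + dp) - A(p0 - dp)) / (2 * dp)   # dA/dp (exact: A is linear in p)

k = np.argmax(lam.real)          # track the least-damped (flutter-critical) mode
# Analytic sensitivity; with L = V^-1 the normalization y^H x = 1 is automatic
dlam = L[k] @ dA @ V[:, k]

# Finite-difference check (np.linalg.eig guarantees no eigenvalue ordering,
# so match perturbed eigenvalues to the tracked one by nearest value)
lp = np.linalg.eigvals(A(p0 + dp))
lm = np.linalg.eigvals(A(p0 - dp))
fd = (lp[np.argmin(abs(lp - lam[k]))] - lm[np.argmin(abs(lm - lam[k]))]) / (2 * dp)
print("analytic dlam/dp:", dlam, "  finite difference:", fd)
```

In the flutter context, p would be a mass or stiffness parameter and the real part of dlam/dp tells the designer whether a change stabilizes or destabilizes the critical mode.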
Wear-Out Sensitivity Analysis Project Abstract
NASA Technical Reports Server (NTRS)
Harris, Adam
2015-01-01
During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. My final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
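The spares-sufficiency study can be sketched as a renewal Monte Carlo over a Weibull life model, sweeping the shape (wear-out) parameter. Unit counts, characteristic life, spares level and horizon below are invented for illustration, not ISS ORU data:

```python
import numpy as np

rng = np.random.default_rng(4)

def prob_sufficient(beta, eta=8.0, n_units=10, spares=4,
                    horizon=5.0, n_sim=5000):
    """P(total failures across the population over `horizon` years <= spares),
    with each unit's life Weibull(shape=beta, scale=eta) and failed units
    replaced immediately (a renewal process, simulated directly)."""
    short = 0
    for _ in range(n_sim):
        failures = 0
        for _ in range(n_units):
            t = 0.0
            while True:
                t += eta * rng.weibull(beta)   # draw one Weibull life
                if t > horizon:
                    break
                failures += 1
        if failures > spares:
            short += 1
    return 1.0 - short / n_sim

# Sweep the wear-out (shape) parameter: beta = 1 is purely random failure,
# beta > 1 is wear-out behaviour
results = {beta: prob_sufficient(beta) for beta in (1.0, 2.0, 4.0)}
for beta, p in results.items():
    print(f"beta={beta}: P(sufficiency) = {p:.3f}")
```

Repeating the sweep for each ORU population, and comparing the sufficiency shift against the intrinsic shape value, is the structure of the analysis the abstract describes.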
Sensitivity analysis of hydrodynamic stability operators
NASA Technical Reports Server (NTRS)
Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.
1992-01-01
The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
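The link between non-normality and eigenvalue sensitivity can be demonstrated on the classic extreme case, a single Jordan block: a perturbation of norm eps moves the eigenvalues by roughly eps**(1/n), vastly more than the perturbation size itself, which is exactly the behaviour the epsilon-pseudoeigenvalue concept captures. The matrix below is a textbook example, not one of the flow operators from the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# A maximally non-normal matrix: a single n x n Jordan block (shift matrix).
# All its eigenvalues are exactly 0.
n = 20
A = np.diag(np.ones(n - 1), k=1)

eps = 1e-10
E = rng.normal(size=(n, n))
E *= eps / np.linalg.norm(E, 2)           # perturbation of 2-norm exactly eps

moved = np.linalg.eigvals(A + E)
movement = np.abs(moved).max()            # distance from the true eigenvalue 0
print(f"||E|| = {eps:.0e}, max eigenvalue movement = {movement:.2e}")
# expected scale: eps**(1/n) = 1e-10**(1/20) ~ 0.3, ten orders above ||E||
```

For a normal matrix the same perturbation would move eigenvalues by at most eps; the enormous gap here is the discrete analogue of the high spectral sensitivity and transient growth discussed in the abstract.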
[Kinetic analysis of additive effect on desulfurization activity].
Han, Kui-hua; Zhao, Jian-li; Lu, Chun-mei; Wang, Yong-zheng; Zhao, Gai-ju; Cheng, Shi-qing
2006-02-01
The additive effects of Al2O3, Fe2O3 and MnCO3 on CaO sulfation kinetics were investigated by the thermogravimetric analysis method and a modified grain model. The activation energy (Ea) and the pre-exponential factor (k0) of the surface reaction, and the activation energy (Ep) and the pre-exponential factor (D0) of the product-layer diffusion reaction, were calculated according to the model. Addition of MnCO3 can enhance the initial reaction rate, product-layer diffusion and the final CaO conversion of sorbents; its effect mechanism is similar to that of Fe2O3. The method based on the isokinetic temperature Ts and the activation energy cannot estimate the contribution of an additive to the sulfation reactivity; instead, the rate constant of the surface reaction (k) and the effective diffusivity of the reactant in the product layer (Ds) under given experimental conditions can reflect the effect of additives on the activity. Non-stoichiometric metal oxides may catalyze the surface reaction and promote the diffusivity of the reactant in the product layer through crystal defects and distinct diffusion of cations and anions. Considering the mechanism and effect of an additive on sulfation, the effective temperature and the stoichiometric relation of the reaction, it is possible to improve the utilization of sorbent by compounding additives into the calcium-based sorbent.
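The Arrhenius form behind the reported parameters, k = k0 exp(-Ea / (R T)), makes the additive effect easy to quantify: a modest drop in activation energy multiplies the rate constant. The k0, Ea and temperature values below are illustrative assumptions, not the paper's fitted values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius(k0, ea, temp):
    """Rate constant from the Arrhenius law: k = k0 * exp(-Ea / (R T))."""
    return k0 * math.exp(-ea / (R * temp))

# Illustrative parameters for a plain and an additive-doped sorbent
k_plain = arrhenius(k0=1.0e4, ea=60_000, temp=1123)   # sulfation near 850 C
k_doped = arrhenius(k0=1.0e4, ea=52_000, temp=1123)   # lower Ea from additive

print(f"k(plain) = {k_plain:.3e}, k(doped) = {k_doped:.3e}, "
      f"ratio = {k_doped / k_plain:.1f}")
```

This is why the paper argues that k (and Ds) under given conditions, rather than Ea and the isokinetic temperature alone, are the meaningful measures of an additive's contribution: k0 and Ea shifts can compensate each other.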
Dai, Zhongmin; Hu, Jiajie; Xu, Xingkun; Zhang, Lujun; Brookes, Philip C.; He, Yan; Xu, Jianming
2016-01-01
Sensitive responses among bacterial and fungal communities to pyrogenic organic matter (PyOM) (biochar) addition in rhizosphere and bulk soils are poorly understood. We conducted a pot experiment with manure and straw PyOMs added to an acidic paddy soil, and identified the sensitive “responders” whose relative abundance was significantly increased/decreased within the whole microbial community following PyOM addition. Results showed that PyOMs significantly (p < 0.05) increased root growth, and simultaneously changed soil chemical parameters by decreasing soil acidity and increasing biogenic resources. PyOM-induced acidity and biogenic resources co-determined bacterial responder community structure, whereas biogenic resources were the dominant parameter structuring the fungal responder community. Both the number and the proportion of responders in rhizosphere soil were larger than in bulk soil, regardless of PyOM types and microbial domains, indicating that the microbial community in rhizosphere soil was more sensitive to PyOM addition than that in bulk soil. The significantly increased root biomass and length caused by PyOM addition, associated with physiological processes, e.g. C exudate secretion, likely favored more sensitive responders in rhizosphere soil than in bulk soil. Our study identified the responders at fine taxonomic resolution in PyOM-amended soils, improved the understanding of their ecological phenomena associated with PyOM addition, and examined their interactions with plant roots. PMID:27824111
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
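The reason LSENS delegates integration to an implicit stiff solver (LSODE) can be seen on a one-line stiff test equation. This is a generic illustration, not an LSENS reaction model: backward Euler, being implicit, stays stable at step sizes where explicit Euler diverges.

```python
import math

# Stiff linear test problem y' = -50*(y - cos(t)); the solution decays rapidly
# onto the slow manifold y ~ cos(t). An implicit (backward Euler) step remains
# stable at step sizes far beyond the explicit stability limit 2/50.
def backward_euler(y0, h, n):
    y = y0
    for k in range(1, n + 1):
        t = k * h
        # Implicit update y_new = y + h*(-50*(y_new - cos(t))), solved in closed form:
        y = (y + 50.0 * h * math.cos(t)) / (1.0 + 50.0 * h)
    return y

def forward_euler(y0, h, n):
    y = y0
    for k in range(n):
        t = k * h
        y = y + h * (-50.0 * (y - math.cos(t)))
    return y

y_implicit = backward_euler(0.0, 0.1, 100)  # h = 0.1 >> stability limit; tracks cos(t)
y_explicit = forward_euler(0.0, 0.1, 100)   # same h: the explicit iterate blows up
```

For a full kinetics mechanism the implicit update requires solving a nonlinear system per step (hence Jacobians and Newton iteration inside LSODE), but the stability payoff is the same.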
Sensitivity analysis of textural parameters for vertebroplasty
NASA Astrophysics Data System (ADS)
Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.
2002-05-01
Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. Control of bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabecular bone. At the initial stage of the project, four indices were used to represent the textural information: mean width of intertrabecular space, mean width of trabeculae, area of intertrabecular space, and area of trabeculae. Finally, the area of intertrabecular space was selected as the parameter to estimate an optimal bone cement volume, and a strong linear relationship was found between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied among 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied among 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied among 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2
Derivative based sensitivity analysis of gamma index
Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.
2015-01-01
Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as “pass.” Gamma analysis does not account for the gradient of the evaluated curve - it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria at every point. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and would obviously compare poorly with the smooth profile. Considering the smooth GTP as an acceptable profile once it passed the gamma criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary
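The gamma computation itself fits in a few lines. The sketch below is a generic brute-force 1-D gamma implementation under the 1%/1 mm criteria named above, not the authors' code; the error-function profiles are illustrative stand-ins for the penumbral RP:

```python
import math

def gamma_index(ref, evalu, dd=0.01, dta=1.0):
    """1-D gamma for profiles given as (position mm, dose) pairs, dose normalised
    to 1.0. For each reference point, gamma is the minimum over evaluated points
    of sqrt((dose diff/dd)^2 + (distance/dta)^2); a point passes if gamma <= 1."""
    gammas = []
    for xr, dr in ref:
        g = min(math.sqrt(((de - dr) / dd) ** 2 + ((xe - xr) / dta) ** 2)
                for xe, de in evalu)
        gammas.append(g)
    return gammas

# Penumbra-like reference profile built from an error function (illustrative):
xs = [0.25 * i for i in range(-20, 21)]
ref = [(x, 0.5 * (1.0 + math.erf(x / 3.0))) for x in xs]
# Evaluated profile shifted by 0.5 mm -- within the 1 mm DTA tolerance:
ev = [(x + 0.5, d) for x, d in ref]
passes = all(g <= 1.0 for g in gamma_index(ref, ev))
```

A pure 0.5 mm shift passes everywhere (gamma <= 0.5), while a uniform 5% dose offset fails in the flat regions where no spatial shift can compensate, which is exactly the DD/DTA trade-off the index encodes.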
Topographic Avalanche Risk: DEM Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Nazarkulova, Ainura; Strobl, Josef
2015-04-01
GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches, parametrizations and calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges and probabilities as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purpose of this study, which focuses on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of
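The resolution dependence of a slope-based derivative is easy to demonstrate on synthetic terrain. The sketch below uses a 1-D ridge profile (an assumed toy stand-in; the paper works with full 2-D DEMs) and shows that coarser sampling systematically flattens the steepest slope, which directly shifts slope-thresholded avalanche release areas:

```python
import math

# Synthetic 1-D terrain profile: a smooth ridge about 140 m wide (illustrative).
def z(x):
    return 100.0 * math.exp(-x * x / 2.0e4)   # elevation in metres

def max_slope_deg(cell):
    """Steepest central-difference slope (degrees) sampled at spacing `cell` (m)."""
    xs = [i * cell for i in range(-1000 // int(cell), 1000 // int(cell) + 1)]
    best = 0.0
    for i in range(1, len(xs) - 1):
        grad = (z(xs[i + 1]) - z(xs[i - 1])) / (2.0 * cell)
        best = max(best, math.degrees(math.atan(abs(grad))))
    return best

slope_fine = max_slope_deg(10.0)    # fine DEM resolves the steep flank (~31 degrees)
slope_coarse = max_slope_deg(100.0) # coarse DEM smooths it out noticeably
```

Running an identical slope-threshold release model on both grids would therefore delineate different release areas, which is the core of the sensitivity question posed above.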
ANALYSIS OF MPC ACCESS REQUIREMENTS FOR ADDITION OF FILLER MATERIALS
W. Wallin
1996-09-03
This analysis is prepared by the Mined Geologic Disposal System (MGDS) Waste Package Development Department (WPDD) in response to a request received via a QAP-3-12 Design Input Data Request (Ref. 5.1) from WAST Design (formerly MRSMPC Design). The request is to provide: Specific MPC access requirements for the addition of filler materials at the MGDS (i.e., location and size of access required). The objective of this analysis is to provide a response to the foregoing request. The purpose of this analysis is to provide a documented record of the basis for the response. The response is stated in Section 8 herein. The response is based upon requirements from an MGDS perspective.
Su, Zhaohong; Xu, Haitao; Xu, Xiaolin; Zhang, Yi; Ma, Yan; Li, Chaorong; Xie, Qingji
2017-03-01
Effective covalent immobilization of a quinone and an aptamer onto a gold electrode via thiol addition (a Michael addition) for sensitive and selective protein biosensing (with thrombin as the model) is reported, with a detection limit down to 20 fM for thrombin. Briefly, the thiol addition reaction of gold electrode-supported 1,6-hexanedithiol (HDT) with p-benzoquinone (BQ) yielded BQ-HDT/Au, and the similar reaction of the thiolated thrombin aptamer (TTA) with activated BQ-HDT/Au under 0.3 V led to the formation of a novel gold electrode-supported electrochemical probe, TTA-BQ-HDT/Au. The thus-prepared TTA-BQ-HDT/Au exhibits a pair of well-defined redox peaks of the quinone moiety, and the TTA-thrombin interaction sensitively decreases the electrochemical signal. Herein the thiol addition acts as an effective and convenient binding protocol for aptasensing, and a new method (electrochemical conversion of a Michael addition complex for signal generation) for the fabrication of biosensors is presented. Cyclic voltammetry (CV) was used to characterize the film properties. In addition, the proposed amperometric aptasensor exhibits good sensitivity, selectivity, and reproducibility. The aptasensor also shows acceptable recovery for detection in complex protein samples.
Influence of TiO2 nanofiber additives for high efficient dye-sensitized solar cells.
Hwang, Kyung-Jun; Lee, Jae-Wook; Park, Ju-Young; Kim, Sun-Il
2011-02-01
TiO2 nanofibers were prepared from a mixture of titanium tetraisopropoxide and polyvinylpyrrolidone by the electrospinning method. The samples were characterized by XRD, FE-SEM, TEM and BET analyses. The diameter of the electrospun TiO2 nanofibers is in the range of approximately 70-160 nm. To improve the short-circuit photocurrent, we added the TiO2 nanofibers to the TiO2 electrode of dye-sensitized solar cells (DSSCs). DSSCs with added TiO2 nanofibers achieved up to 20% higher conversion efficiency than the conventional DSSC with TiO2 films only.
Spectral Envelopes and Additive + Residual Analysis/Synthesis
NASA Astrophysics Data System (ADS)
Rodet, Xavier; Schwarz, Diemo
The subject of this chapter is the estimation, representation, modification, and use of spectral envelopes in the context of sinusoidal-additive-plus-residual analysis/synthesis. A spectral envelope is an amplitude-vs-frequency function, which may be obtained from the envelope of a short-time spectrum (Rodet et al., 1987; Schwarz, 1998). [Precise definitions of such an envelope and short-time spectrum (STS) are given in Section 2.] The additive-plus-residual analysis/synthesis method is based on a representation of signals in terms of a sum of time-varying sinusoids and of a non-sinusoidal residual signal [e.g., see Serra (1989), Laroche et al. (1993), McAulay and Quatieri (1995), and Ding and Qian (1997)]. Many musical sound signals may be described as a combination of a nearly periodic waveform and colored noise. The nearly periodic part of the signal can be viewed as a sum of sinusoidal components, called partials, with time-varying frequency and amplitude. Such sinusoidal components are easily observed on a spectral analysis display (Fig. 5.1) as obtained, for instance, from a discrete Fourier transform.
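The sinusoidal part of the additive-plus-residual model described above can be sketched directly: a signal as a sum of partials. This is a deliberately minimal sketch with fixed frequencies and amplitudes (the chapter's model allows them to vary in time, and adds a residual noise term, both omitted here); the tone parameters are invented:

```python
import math

# Minimal sinusoidal-additive model: a signal as a sum of partials.
def additive(partials, sr, n):
    """partials: list of (frequency Hz, amplitude); returns n samples at rate sr."""
    return [sum(a * math.sin(2.0 * math.pi * f * t / sr) for f, a in partials)
            for t in range(n)]

# A nearly periodic tone: fundamental plus two harmonics (illustrative values,
# with amplitudes roughly following a spectral-envelope-like decay):
tone = additive([(220.0, 1.0), (440.0, 0.5), (660.0, 0.25)], sr=8000, n=8000)
```

In the full analysis/synthesis framework, the amplitude assigned to each partial at each frame would be read off the estimated spectral envelope rather than fixed as here.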
A discourse on sensitivity analysis for discretely-modeled structures
NASA Technical Reports Server (NTRS)
Adelman, Howard M.; Haftka, Raphael T.
1991-01-01
A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
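The step-size selection problem mentioned first in the list above comes from a trade-off: too large a finite-difference step gives truncation error, too small a step amplifies floating-point round-off. A scalar illustration on f(x) = exp(x) at x = 1 (whose exact derivative is e), not a structural example:

```python
import math

def forward_diff(f, x, h):
    """First-order forward-difference derivative estimate."""
    return (f(x + h) - f(x)) / h

x0 = 1.0
# Absolute error of the derivative estimate for a large, moderate, and tiny step:
errors = {h: abs(forward_diff(math.exp, x0, h) - math.exp(x0))
          for h in (1e-1, 1e-8, 1e-15)}
# The moderate step is far more accurate than either extreme: 1e-1 suffers
# truncation error, 1e-15 is dominated by round-off in the subtraction.
```

For structural responses the same trade-off holds coordinate by coordinate, which is why step-size selection merits the dedicated treatment the review gives it.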
NASA Astrophysics Data System (ADS)
Lu, Futai; Wang, Xuexiang; Zhao, Yanming; Yang, Guang; Zhang, Jie; Zhang, Bao; Feng, Yaqing
2016-11-01
The introduction of an additional acceptor to a typical donor-π bridge-acceptor (D-π-A) type porphyrin sensitizer results in a D-A-π-A featured porphyrin. Two porphyrins containing an additional acceptor with different electron-withdrawing abilities, 2,3-diphenylquinoxaline (DPQ) for LP-11 and 2,1,3-benzothiadiazole (BTD) for LP-12, between the porphyrin core and the anchoring group have been synthesized for use as sensitizers in dye-sensitized solar cells (DSCs). Compared to LP-11, LP-12 with the stronger electron-withdrawing additional acceptor BTD possesses better light harvesting properties, with red-shifted Q-band absorption and a broader IPCE spectrum, resulting in a greater short-circuit photocurrent density (Jsc) output. Interestingly, the steric hindrance of the DPQ group is favorable for suppressing dye aggregation, leading to a larger open-circuit voltage (Voc) value for the LP-11-based cell. However, the loss in Voc of LP-12 is overcompensated by the improvement in Jsc. The optimized cell based on LP-12 achieves the better performance, with a Jsc of 15.51 mA cm-2, a Voc of 674 mV, a fill factor (FF) of 0.7 and an overall power conversion efficiency (PCE) of 7.37% under standard AM 1.5 G irradiation. The findings provide guidance for the future molecular design of highly efficient porphyrin sensitizers for use in DSCs.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
NASA Technical Reports Server (NTRS)
Winters, J. M.; Stark, L.
1984-01-01
Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.
Discrete analysis of spatial-sensitivity models
NASA Technical Reports Server (NTRS)
Nielsen, Kenneth R. K.; Wandell, Brian A.
1988-01-01
Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
Shape design sensitivity analysis and optimal design of structural systems
NASA Technical Reports Server (NTRS)
Choi, Kyung K.
1987-01-01
The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best exploit the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. Results of design sensitivity analysis are used to carry out design optimization of a built-up structure.
Sensitivity Analysis of Situational Awareness Measures
NASA Technical Reports Server (NTRS)
Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)
2000-01-01
A great deal of effort has been invested in attempts to define situational awareness (SA), and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that indisputably affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine the sensitivity not only within a measure, but also between measures. The SART questionnaire and the NASA-TLX, a measure of workload, were administered after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames
Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation
NASA Astrophysics Data System (ADS)
Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten
2015-04-01
Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
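The distinction between value convergence and ranking convergence can be illustrated with a toy bootstrap check. The model, sensitivity measure (squared correlation, a crude stand-in for the variance-based indices used in the paper) and sample sizes below are all assumptions for illustration:

```python
import random

random.seed(0)

# Toy model with one influential and one nearly insensitive input (an assumption;
# the paper uses Hymod, HBV and SWAT with EET, RSA and variance-based methods).
def model(x1, x2):
    return 4.0 * x1 + 0.1 * x2

n = 500
X = [(random.random(), random.random()) for _ in range(n)]
Y = [model(*x) for x in X]

def corr2(xs, ys):
    """Squared sample correlation, used here as a cheap sensitivity proxy."""
    mx = sum(xs) / len(xs); my = sum(ys) / len(ys)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs); syy = sum((b - my) ** 2 for b in ys)
    return (sxy * sxy) / (sxx * syy)

def indices(sample):
    """Sensitivity proxies for both inputs on a (re)sample of row indices."""
    xs1 = [X[i][0] for i in sample]; xs2 = [X[i][1] for i in sample]
    ys = [Y[i] for i in sample]
    return corr2(xs1, ys), corr2(xs2, ys)

def resample_rank_ok():
    sample = [random.randrange(n) for _ in range(n)]  # bootstrap resample
    s1, s2 = indices(sample)
    return s1 > s2

# Ranking convergence: x1 outranks x2 in every bootstrap resample, even though
# the index values themselves still fluctuate from resample to resample.
ranking_stable = all(resample_rank_ok() for _ in range(100))
```

This is the paper's point in miniature: the ranking can be bootstrap-stable at sample sizes where the index values have clearly not converged.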
Decreasing Cloudiness Over China: An Updated Analysis Examining Additional Variables
Kaiser, D.P.
2000-01-14
As preparation of the IPCC's Third Assessment Report takes place, one of the many observed climate variables of key interest is cloud amount. For several nations of the world, there exist records of surface-observed cloud amount dating back to the middle of the 20th Century or earlier, offering valuable information on variations and trends. Studies using such databases include Sun and Groisman (1999) and Kaiser and Razuvaev (1995) for the former Soviet Union, Angell et al. (1984) for the United States, Henderson-Sellers (1986) for Europe, Jones and Henderson-Sellers (1992) for Australia, and Kaiser (1998) for China. The findings of Kaiser (1998) differ from the other studies in that much of China appears to have experienced decreased cloudiness over recent decades (1954-1994), whereas the other land regions for the most part show evidence of increasing cloud cover. This paper expands on Kaiser (1998) by analyzing trends in additional meteorological variables for China [station pressure (p), water vapor pressure (e), and relative humidity (rh)] and extending the total cloud amount (N) analysis an additional two years (through 1996).
Sensitivity analysis and optimization of thin-film thermoelectric coolers
NASA Astrophysics Data System (ADS)
Harsha Choday, Sri; Roy, Kaushik
2013-06-01
The cooling performance of a thermoelectric (TE) material depends on the figure-of-merit (ZT = S²σT/κ), where S is the Seebeck coefficient, and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter for the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than that giving the highest ZT. We also establish the level of contact parasitics below which their impact on TE cooling is negligible.
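The figure-of-merit definition above is a one-liner, and its symmetry (equal weight on power factor and thermal conductivity) is visible directly. The material values below are assumed, representative Bi2Te3-like numbers, not measurements from the paper:

```python
def figure_of_merit(S, sigma, kappa, T):
    """ZT = S^2 * sigma * T / kappa; S in V/K, sigma in S/m, kappa in W/(m K)."""
    return S * S * sigma * T / kappa

# Representative (assumed) Bi2Te3-like values near room temperature:
zt = figure_of_merit(S=200e-6, sigma=1.0e5, kappa=1.5, T=300.0)
# Doubling the power factor S^2*sigma doubles ZT, as does halving kappa --
# the "equal importance" the standard definition assigns to each.
```

The paper's sensitivity analysis asks whether cooling performance itself, as opposed to ZT, really weights these parameters equally once contact parasitics enter.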
Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.
Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis
2015-01-01
Existing sensitivity analysis approaches are unable to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis of such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters, and that for the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
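The variance-reduction idea behind the second (finite-difference) step can be shown on a toy stochastic "simulator". This sketch uses common random numbers as a simple stand-in for the stochastic coupling techniques the paper relies on; the model and all parameter values are invented for illustration:

```python
import random

# Toy stochastic simulation: a noisy observable whose mean depends on theta.
def simulate(theta, rng):
    return theta ** 2 + rng.gauss(0.0, 1.0)

def fd_sensitivity(theta, h, n, coupled):
    """Central finite-difference estimate of d E[f]/d theta over n replications.
    With coupled=True the perturbed runs share random numbers, so the noise
    cancels in the difference; with coupled=False it does not."""
    total = 0.0
    for i in range(n):
        if coupled:
            rp = random.Random(i); rm = random.Random(i)   # common random numbers
        else:
            rp = random.Random(2 * i); rm = random.Random(2 * i + 1)
        total += (simulate(theta + h, rp) - simulate(theta - h, rm)) / (2.0 * h)
    return total / n

est_coupled = fd_sensitivity(1.0, 0.01, 200, coupled=True)   # exact: d(theta^2)/dtheta = 2
est_independent = fd_sensitivity(1.0, 0.01, 200, coupled=False)
```

With coupling, the noise cancels term by term and the estimator recovers the derivative almost exactly; with independent noise, the division by 2h inflates the variance enormously, which is exactly why coupling (and the Fisher-information screening that limits how many such estimates are needed) pays off.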
Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Lorence, Christopher B.; Hall, Kenneth C.
1995-01-01
A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide
Moghtaderi, Mozhgan; Hejrati, Zinatosadat; Dehghani, Zahra; Dehghani, Faranak; Kolahi, Niloofar
2016-06-01
There has been a great increase in the consumption of various food additives in recent years. The purpose of this study was to identify the incidence of sensitization to food additives by skin prick testing in patients with allergy, and to determine the concordance rate between positive skin tests and oral challenge in hypersensitivity to additives. This cross-sectional study included 125 patients (71 female, 54 male) aged 2-76 years with allergy and 100 healthy individuals. Skin tests were performed in both the patient and control groups with 25 fresh food additives. Among patients with allergy, 22.4% showed a positive skin test to at least one of the applied materials. Skin tests were negative to all tested food additives in the control group. Oral food challenge was performed in 28 patients with a positive skin test, of whom 9 reacted to the culprit additive (concordance rate = 32.1%). The present study suggested that about one-third of allergic patients with a positive reaction to food additives showed a positive oral challenge, supporting the potential utility of skin testing to identify the role of food additives in patients with allergy.
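The reported concordance rate follows directly from the counts in the abstract; a quick check of the arithmetic:

```python
# Counts from the abstract: 28 skin-test-positive patients underwent oral
# challenge, and 9 of them reacted to the culprit additive.
positive_skin_tests = 28
positive_challenges = 9
concordance = 100.0 * positive_challenges / positive_skin_tests  # percent
# 9/28 gives 32.1% to one decimal place, matching the reported rate.
```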
NASA Astrophysics Data System (ADS)
Afrooz, Malihe; Dehghani, Hossein
2014-09-01
In this study, we report the influence of a phosphate additive on the performance of dye-sensitized solar cells (DSSCs) based on 2-cyano-3-(4-(diphenylamino)phenyl)acrylic acid (TPA) as the sensitizer. The DSSCs are fabricated by incorporating tributyl phosphate (TBPP) as an additive in the electrolyte, and an efficiency of about 3.03% is attained under standard air mass 1.5 global (AM 1.5G) simulated sunlight, corresponding to a 35% efficiency increment compared to the standard liquid electrolyte. An improvement in both open-circuit voltage (Voc) and short-circuit current (Jsc) is obtained by adjusting the concentration of TBPP in the electrolyte, which is attributed to an enlarged energy difference between the Fermi level (EF) of TiO2 and the redox potential of the electrolyte, and to suppression of charge recombination from the conduction band (CB) of TiO2 to the oxidized ions in the redox electrolyte. Electrochemical impedance analysis (EIS) reveals a dramatic increase in the charge transfer resistance at the dyed-TiO2/electrolyte interface and in the electron density in the CB of TiO2, indicating that the pronounced improvement in photoelectric conversion efficiency (η) with the TBPP additive results from efficient inhibition of recombination processes. This striking result suggests using a family of electron-donor compounds as highly efficient additives.
Boundary formulations for sensitivity analysis without matrix derivatives
NASA Technical Reports Server (NTRS)
Kane, J. H.; Guru Prasad, K.
1993-01-01
A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring the perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
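The core reanalysis trick, solving a perturbed system while reusing only the unperturbed solve, can be shown in miniature. The sketch below uses a 2x2 system and a closed-form solve as a stand-in for the factored BEA matrix (all values invented): the fixed point x_{k+1} = solve(A, b - dA·x_k) converges to the solution of (A + dA)x = b when the perturbation is small, without ever factoring A + dA.

```python
# Solve the nominal 2x2 system directly; this plays the role of the reusable
# factorization of the unperturbed BEA matrix.
def solve_2x2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (b[1] * A[0][0] - b[0] * A[1][0]) / det]

A = [[4.0, 1.0], [1.0, 3.0]]
dA = [[0.1, 0.0], [0.0, -0.1]]   # small univariate-style design perturbation
b = [1.0, 2.0]

x = solve_2x2(A, b)               # start from the nominal solution
for _ in range(30):
    # Residual right-hand side b - dA*x, then one solve against the NOMINAL A:
    r = [b[0] - (dA[0][0] * x[0] + dA[0][1] * x[1]),
         b[1] - (dA[1][0] * x[0] + dA[1][1] * x[1])]
    x = solve_2x2(A, r)
# x now satisfies (A + dA) x = b to machine precision.
```

Each UPFD perturbation then costs only cheap back-substitution-style solves against the already-available nominal operator, which is the computational economy claimed in the abstract.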
LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1994-01-01
LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
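The coupled-ODE structure that LSENS integrates can be illustrated with a toy mechanism, far smaller than the ~100-step methane mechanism cited above. The mechanism A -> B -> C and its rate coefficients here are invented for demonstration, and the brute-force rate-coefficient sensitivity at the end is the naive version of the systematic analysis the abstract describes.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy mechanism A -> B -> C with rate coefficients k1, k2 (assumed values).
k1, k2 = 2.0, 1.0

def rhs(t, y):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
a_end, b_end, c_end = sol.y[:, -1]   # mass is conserved; A decays as exp(-k1*t)

# Brute-force sensitivity of the final [C] to k1, the kind of question a
# systematic sensitivity analysis answers far more efficiently:
def c_final(k):
    s = solve_ivp(lambda t, y: [-k * y[0], k * y[0] - k2 * y[1], k2 * y[1]],
                  (0.0, 5.0), [1.0, 0.0, 0.0], rtol=1e-10, atol=1e-12)
    return s.y[2, -1]

eps = 1e-6
dc_dk1 = (c_final(k1 + eps) - c_final(k1)) / eps
```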
Aero-Structural Interaction, Analysis, and Shape Sensitivity
NASA Technical Reports Server (NTRS)
Newman, James C., III
1999-01-01
A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
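The step-size-independent technique referred to above is the complex-step derivative: perturb the input along the imaginary axis and read the derivative off the imaginary part. A minimal sketch follows (the test function and evaluation point are invented for illustration, not taken from the ARW-2 analysis):

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """First derivative via the complex-step method.

    There is no subtractive cancellation, so h can be made arbitrarily
    small without losing accuracy; the result is step-size independent.
    """
    return np.imag(f(x + 1j * h)) / h

# Example: d/dx [exp(x) * sin(x)] at x = 1.0
f = lambda x: np.exp(x) * np.sin(x)
exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))

cs = complex_step_derivative(f, 1.0)      # complex step, h = 1e-30
fd = (f(1.0 + 1e-8) - f(1.0)) / 1e-8      # forward finite difference
# cs matches the exact derivative to machine precision; fd is limited
# by cancellation to roughly sqrt(machine eps) accuracy.
```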
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
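The NRMSE metric used above is straightforward to compute. A minimal sketch, with hypothetical irradiance numbers invented for illustration; note that normalization conventions vary (by mean, range, or plant capacity) and the abstract does not state which one the paper uses, so range normalization is assumed here:

```python
import numpy as np

def nrmse(forecast, observed):
    """Normalized RMSE, here normalized by the observed range (an assumption)."""
    rmse = np.sqrt(np.mean((forecast - observed) ** 2))
    return rmse / (observed.max() - observed.min())

# Hypothetical hourly irradiance series (W/m^2), for illustration only
observed = np.array([0.0, 120.0, 430.0, 610.0, 550.0, 300.0, 40.0])
forecast = np.array([0.0, 150.0, 400.0, 640.0, 500.0, 330.0, 60.0])
score = nrmse(forecast, observed)   # ~0.05 for these made-up values
```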
Global and Local Sensitivity Analysis Methods for a Physical System
ERIC Educational Resources Information Center
Morio, Jerome
2011-01-01
Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
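The local/global distinction the abstract draws can be shown on a toy black-box model (invented for illustration): a local derivative at a nominal point can declare an input unimportant even when, over its full range, that input dominates the output variance.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x1, x2):
    return x1 + x2 ** 2          # toy black-box model (assumed)

# Local sensitivity: finite-difference derivatives at a nominal point (1, 0)
h, x1, x2 = 1e-6, 1.0, 0.0
d1 = (model(x1 + h, x2) - model(x1, x2)) / h   # ~1.0
d2 = (model(x1, x2 + h) - model(x1, x2)) / h   # ~0.0: x2 looks unimportant here

# Global sensitivity: variance decomposition over the full input ranges.
# The model is additive, so the first-order contributions are simply
# Var(X1) and Var(X2**2); dividing by Var(Y) gives Sobol'-style indices.
n = 200_000
X1 = rng.uniform(-3, 3, n)
X2 = rng.uniform(-3, 3, n)
Y = model(X1, X2)
S1 = np.var(X1) / np.var(Y)        # ~0.29
S2 = np.var(X2 ** 2) / np.var(Y)   # ~0.71: x2 dominates globally
```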
Advanced Fuel Cycle Economic Sensitivity Analysis
David Shropshire; Kent Williams; J.D. Smith; Brent Boore
2006-12-01
A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear power cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles, including once-through, thermal with fast recycle, continuous fast recycle, and thermal recycle.
Kim, JaeHwang; Hayashi, Minoru; Kobayashi, Equo; Sato, Tatsuo
2016-02-01
Age-hardening is enhanced at high cooling rates because more vacancies are formed during quenching, whereas at slow cooling rates the stable beta phase forms just after solid-solution treatment, resulting in a smaller hardness increase during aging. Meanwhile, nanoclusters form during natural aging in Al-Mg-Si alloys, and their formation is enhanced with increasing Si content. High quench sensitivity, assessed from mechanical-property changes, was confirmed with increasing Si content. Moreover, the nano-sized beta" phase, the main hardening phase, forms in greater quantity with Si addition, enhancing the age-hardening response. The quench sensitivity and the formation behavior of precipitates are discussed in terms of these age-hardening phenomena.
Parameter sensitivity analysis for pesticide impacts on honeybee colonies
We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...
Sobol’ sensitivity analysis for stressor impacts on honeybee colonies
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...
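The nonlinear, variance-based analysis named in the title (Sobol' indices) can be estimated by plain Monte Carlo. Below is a sketch of the standard Saltelli pick-and-freeze estimator applied to a toy stand-in for the bee exposure model; the three-input function and its coefficients are invented purely for illustration.

```python
import numpy as np

def sobol_first_order(f, d, n, rng):
    """First-order Sobol' indices via the Saltelli Monte Carlo scheme.

    `f` maps an (m, d) array of inputs in [0, 1)^d to a length-m output
    array. S_i estimates Var(E[Y|X_i]) / Var(Y).
    """
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # A with column i taken from B
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy stand-in model (purely illustrative): output depends strongly on x0,
# weakly and nonlinearly on x1, and not at all on x2.
def toy_model(X):
    return 4.0 * X[:, 0] + 0.5 * X[:, 1] ** 2

S = sobol_first_order(toy_model, d=3, n=200_000, rng=np.random.default_rng(1))
# S[0] ~ 0.98, S[1] ~ 0.02, S[2] = 0
```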
Selecting step sizes in sensitivity analysis by finite differences
NASA Technical Reports Server (NTRS)
Iott, J.; Haftka, R. T.; Adelman, H. M.
1985-01-01
This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
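The trial-and-error step-size problem the abstract describes comes from a well-known tradeoff, which a short numerical sweep makes visible (the test function is invented for illustration, not the swept-wing model):

```python
import numpy as np

# Forward-difference derivative error vs. step size for f(x) = sin(x) at
# x = 1: large steps suffer truncation error, tiny steps suffer roundoff
# error, and the near-optimum lies in between (roughly sqrt(machine eps),
# about 1e-8, for forward differences).
f, x, df_exact = np.sin, 1.0, np.cos(1.0)
steps = 10.0 ** np.arange(-1.0, -15.0, -1.0)
errors = np.abs((f(x + steps) - f(x)) / steps - df_exact)
h_best = steps[np.argmin(errors)]   # near 1e-8 for this function
```

Algorithms like the FD algorithm in the abstract automate the search for this minimum instead of sweeping by hand.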
Sensitivity Analysis and Computation for Partial Differential Equations
2008-03-14
...Example, Journal of Mathematical Analysis and Applications, to appear. [22] John R. Singler, Transition to Turbulence, Small Disturbances, and Sensitivity Analysis II: The Navier-Stokes Equations, Journal of Mathematical Analysis and Applications, to appear. [23] A. M. Stuart and A. R. Humphries
Sensitivity analysis for electromagnetic topology optimization problems
NASA Astrophysics Data System (ADS)
Zhou, Shiwei; Li, Wei; Li, Qing
2010-06-01
This paper presents a level-set-based method to design the shape of metal in an electromagnetic field such that the incident current flow on the metal surface is minimized or maximized. We represent the interface between free space and the conducting material (solid phase) by the zero-level contour of a higher-dimensional level set function. Only the electric component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function of the current flow, its distribution can be controlled to some extent. This method paves a new avenue to many electromagnetic applications, such as antennas and metamaterials, whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to the shape change, an integral formulation involving the solutions of both the electric field integral equation and its adjoint equation, is obtained using a variational method and the shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free-space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.
Adjoint sensitivity analysis of plasmonic structures using the FDTD method.
Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H
2014-05-15
We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components at the vicinity of perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
Sensitivity Analysis of the Gap Heat Transfer Model in BISON.
Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle
2014-10-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of the heat transfer model in the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the modeling parameters and the associated responses is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.
Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds
2009-01-01
...erties, such as log P, would aid in estimating a chemical's environmental fate and toxicology when applied to QSAR modeling. Granted, QSAR models, such... ERDC TR-09-3, Strategic Environmental Research and Development Program, January 2009: Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds.
Advancing sensitivity analysis to precisely characterize temporal parameter dominance
NASA Astrophysics Data System (ADS)
Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola
2016-04-01
Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (upland and lowland catchment) to illustrate how parameter dominances change seasonally in different catchments. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological
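The flow duration curve and its five segments, the backbone of the second and third steps above, are easy to construct from a daily discharge series. A sketch with synthetic discharge data (the lognormal series and the Weibull plotting position are assumptions for illustration):

```python
import numpy as np

def flow_duration_curve(q):
    """Exceedance probability vs. discharge from a daily discharge series."""
    q_sorted = np.sort(q)[::-1]                             # highest flow first
    m = len(q)
    exceedance = np.arange(1, m + 1) / (m + 1.0)            # Weibull plotting position
    return exceedance, q_sorted

rng = np.random.default_rng(0)
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)            # synthetic discharge, m^3/s
exc, q_srt = flow_duration_curve(q)

# Assign each sorted day to one of five equal FDC segments (0 = highest
# flows, 4 = lowest), so daily parameter sensitivities could then be
# averaged per segment as in the paper.
segments = np.minimum((exc * 5).astype(int), 4)
```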
Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M
2013-11-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.
Navy Additive Manufacturing: Policy Analysis for Future DLA Material Support
2014-12-01
...support programs. Subject terms: additive manufacturing, 3D printing, technology adoption. ...this is about to change. Additive manufacturing (AM) systems (commonly known as "3D printing") could bring the organic parts manufacturing capability
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or creating a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
A study of turbulent flow with sensitivity analysis
NASA Astrophysics Data System (ADS)
Dwyer, H. A.; Peterson, T.
1980-07-01
In this paper a new type of analysis is introduced that can be used in numerical fluid mechanics. The method is known as sensitivity analysis and it has been widely used in the field of automatic control theory. Sensitivity analysis addresses in a systematic way the question of how the solution to an equation will change due to variations in the equation's parameters and boundary conditions. An important application is turbulent flow, where there exists a large uncertainty in the models used for closure. In the present work the analysis is applied to the three-dimensional planetary boundary layer equations, and sensitivity equations are generated for various parameters in the turbulence model. The solution of these equations with the proper techniques leads to considerable insight into the flow field and its dependence on turbulence parameters. Also, the analysis allows for unique decompositions of the parameter dependence and is efficient.
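Sensitivity equations of the kind generated above are obtained by differentiating the governing equations with respect to a parameter and integrating the result alongside the state. A minimal sketch on a scalar model problem (the decay equation and parameter value stand in for a turbulence-model parameter; they are not from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model: dy/dt = -p*y, y(0) = 1. The sensitivity s = dy/dp obeys the
# sensitivity equation  ds/dt = (df/dy) s + df/dp = -p*s - y,  s(0) = 0,
# which is integrated together with the state.
p = 1.5

def rhs(t, z):
    y, s = z
    return [-p * y, -p * s - y]

sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], rtol=1e-10, atol=1e-12)
y_end, s_end = sol.y[0, -1], sol.y[1, -1]
# Exact solution for comparison: y = exp(-p*t), s = dy/dp = -t*exp(-p*t)
```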
Hybrid Additive Manufacturing Technologies - An Analysis Regarding Potentials and Applications
NASA Astrophysics Data System (ADS)
Merklein, Marion; Junker, Daniel; Schaub, Adam; Neubauer, Franziska
Following the industrial trend toward mass customization of lightweight construction, conventional manufacturing processes such as forming and machining are pushed to their limits for economical manufacturing. More flexible processes are needed, and these have been developed within additive manufacturing technology. This tool-less production principle offers high geometrical freedom and optimized utilization of the material used, so load-adapted lightweight components can be produced economically in small lot sizes. To compensate for disadvantages such as inadequate accuracy and surface roughness, hybrid machines combining additive and subtractive manufacturing are being developed. This paper summarizes the principles of the most widely used additive manufacturing processes for metals and their potential for integration into a hybrid production machine. It is pointed out that, in particular, the integration of deposition processes into a CNC milling center offers high potential for manufacturing larger parts with high accuracy. Furthermore, the combination of additive and subtractive manufacturing allows the production of ready-to-use products within a single machine. Current research on the integration of additive manufacturing processes into the production chain is also analyzed. Given the long manufacturing times of additive production processes, combination with conventional manufacturing processes such as sheet or bulk metal forming appears to be an effective solution: large volumes can be produced by conventional processes, and active elements can then be applied by additive manufacturing in an additional production step. This principle is also investigated for tool production, to reduce machining of the high-strength material used for forming tools. The aim is the addition of active elements onto a geometrically simple base by using Laser Metal Deposition, a process that allows the utilization of several powder materials during one process
Wang, Youping; Sonntag, Karin; Rudloff, Eicke; Wehling, Peter; Snowdon, Rod J
2006-02-01
Two Brassica napus-Crambe abyssinica monosomic addition lines (2n=39, AACC plus a single chromosome from C. abyssinica) were obtained from the F(2) progeny of the asymmetric somatic hybrid. The alien chromosome from C. abyssinica in the addition line was clearly distinguished by genomic in situ hybridization (GISH). Twenty-seven microspore-derived plants from the addition lines were obtained. Fourteen seedlings were determined to be diploid plants (2n=38) arising from spontaneous chromosome doubling, while 13 seedlings were confirmed as haploid plants. Doubled haploid plants produced after treatment with colchicine and two disomic chromosome addition lines (2n=40, AACC plus a single pair of homologous chromosomes from C. abyssinica) could again be identified by GISH analysis. The lines are potentially useful for molecular genetic analysis of novel C. abyssinica genes or alleles contributing to traits relevant for oilseed rape (B. napus) breeding.
Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)
1996-01-01
Variational methods (VM) sensitivity analysis, which is the continuous alternative to the discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades
NASA Technical Reports Server (NTRS)
Hall, Kenneth C.
1994-01-01
During the first year of the project, we have developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing the results obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that, using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
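The LU-reuse argument above is worth making concrete: once the nominal matrix is factored, each sensitivity costs only cheap triangular solves. A sketch with an invented stand-in matrix (not the potential-flow Jacobian) and an assumed matrix derivative for one geometry parameter:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) + n * np.eye(n)   # stand-in for the flow Jacobian
b = rng.standard_normal(n)

lu_piv = lu_factor(A)              # expensive factorization, done once
x = lu_solve(lu_piv, b)            # nominal solution

# Differentiating A x = b with respect to a design parameter a gives
#   A (dx/da) = db/da - (dA/da) x,
# so each parameter's sensitivity reuses the saved factors.
dA = np.zeros_like(A)
dA[0, 0] = 1.0                     # assumed dA/da for one geometry parameter
dx = lu_solve(lu_piv, -dA @ x)

# Finite-difference check, which refactors the perturbed matrix instead
eps = 1e-7
x_fd = (np.linalg.solve(A + eps * dA, b) - x) / eps
```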
Qian, Yu-Qi; He, Feng-Peng; Wang, Wei
2016-01-01
The response of microbial respiration from soil organic carbon (SOC) decomposition to environmental changes plays a key role in predicting future trends of atmospheric CO2 concentration. However, it remains uncertain whether there is a universal trend in the response of microbial respiration to increased temperature and nutrient addition among different vegetation types. In this study, soils were sampled in spring, summer, autumn and winter from five dominant vegetation types, including pine, larch and birch forest, shrubland, and grassland, in the Saihanba area of northern China. Soil samples from each season were incubated at 1, 10, and 20°C for 5 to 7 days. Nitrogen (N; 0.035 mM as NH4NO3) and phosphorus (P; 0.03 mM as P2O5) were added to soil samples, and the responses of soil microbial respiration to increased temperature and nutrient addition were determined. We found a universal trend that soil microbial respiration increased with increased temperature regardless of sampling season or vegetation type. The temperature sensitivity (indicated by Q10, the increase in respiration rate with a 10°C increase in temperature) of microbial respiration was higher in spring and autumn than in summer and winter, irrespective of vegetation type. The Q10 was significantly positively correlated with microbial biomass and the fungal: bacterial ratio. Microbial respiration (or Q10) did not significantly respond to N or P addition. Our results suggest that short-term nutrient input might not change the SOC decomposition rate or its temperature sensitivity, whereas increased temperature might significantly enhance SOC decomposition in spring and autumn, compared with winter and summer.
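The Q10 index defined above generalizes to any temperature interval via the standard formula Q10 = (R2/R1)^(10/(T2-T1)). A short sketch with hypothetical respiration rates (the numbers are invented, not the paper's data):

```python
def q10(rate_low, rate_high, t_low, t_high):
    """Temperature sensitivity: factor by which the rate rises per 10 degC."""
    return (rate_high / rate_low) ** (10.0 / (t_high - t_low))

# Hypothetical incubation rates at 1, 10, and 20 degC (arbitrary units)
r_1, r_10, r_20 = 0.8, 1.7, 3.4
q10_low = q10(r_1, r_10, 1.0, 10.0)     # ~2.31 over the 1-10 degC interval
q10_high = q10(r_10, r_20, 10.0, 20.0)  # ~2.0 over the 10-20 degC interval
```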
Sensitivity analysis of small circular cylinders as wake control
NASA Astrophysics Data System (ADS)
Meneghini, Julio; Patino, Gustavo; Gioria, Rafael
2016-11-01
We apply sensitivity analysis to a steady external force to control vortex shedding from a circular cylinder using active and passive small control cylinders. We evaluate the changes the device produces on the flow near the primary instability, the transition to a wake. By means of sensitivity analysis, we numerically predict the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by sensitivity analysis, supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. The analysis also provided the positions of combined passive control cylinders that suppress the wake, showing that these particular positions are adequate to suppress wake unsteadiness. In both cases the results agree very well with previously published experimental cases of control devices.
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value; the "partial" part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations, and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
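The PRCC procedure described here (rank-transform everything, adjust each input and the output for the linear effect of the remaining inputs, then correlate the residuals) can be sketched generically. This is not the IMM team's code; the toy model and all names are illustrative:

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation of each input column with the output:
    rank-transform all variables, remove the linear effect of the other
    inputs from both sides, then correlate the residuals."""
    rX = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # ranks 0..n-1
    ry = np.argsort(np.argsort(y)).astype(float)
    n, k = X.shape
    out = np.empty(k)
    for i in range(k):
        Z = np.column_stack([np.ones(n), np.delete(rX, i, axis=1)])
        res_x = rX[:, i] - Z @ np.linalg.lstsq(Z, rX[:, i], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy model: x0 drives the output strongly, x1 more weakly, x2 not at all.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 2 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=500)
print(np.round(prcc(X, y), 2))
```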
Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis
Adnan, Tassha Hilda
2016-01-01
Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so sample size calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. How to use the tables is also discussed. PMID:27891446
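The sample-size formulation behind such tables is commonly Buderer's formula, in which the number of diseased subjects needed for a target confidence-interval half-width around the anticipated sensitivity is scaled up by the disease prevalence. A sketch under that assumption; the input values are illustrative, not values from the paper:

```python
import math
from statistics import NormalDist

def n_for_sensitivity(se, d, prevalence, alpha=0.05):
    """Total subjects needed so the two-sided (1 - alpha) confidence
    interval around an anticipated sensitivity `se` has half-width `d`
    (Buderer's formula: the diseased-subgroup requirement is inflated
    by dividing by the disease prevalence)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_diseased = z ** 2 * se * (1 - se) / d ** 2
    return math.ceil(n_diseased / prevalence)

# Anticipated sensitivity 0.90, precision +/-0.05, prevalence 20%
print(n_for_sensitivity(0.90, 0.05, 0.20))  # 692
```

An analogous calculation for specificity uses the non-diseased fraction (1 − prevalence) in the denominator.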
Sensitivity analysis for missing data in regulatory submissions.
Permutt, Thomas
2016-07-30
The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi
2012-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds a ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The effect of the correlation strength among input variables on the sensitivity analysis is also assessed.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
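A first-order Sobol' index can be estimated with a Saltelli-style pick-and-freeze scheme, shown here as a generic illustration of the method the abstract applies (this is not the VarroaPop analysis; the toy model is additive with known indices S = (0.9, 0.1)):

```python
import numpy as np

def sobol_first_order(f, k, n=20000, seed=1):
    """Pick-and-freeze estimator: S_i = E[f(B) (f(AB_i) - f(A))] / Var(f),
    where AB_i is sample matrix A with column i swapped in from B."""
    rng = np.random.default_rng(seed)
    A, B = rng.uniform(size=(n, k)), rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(k)
    for i in range(k):
        AB = A.copy()
        AB[:, i] = B[:, i]
        s[i] = np.mean(fB * (f(AB) - fA)) / var
    return s

# Additive toy model with analytic first-order indices S = (0.9, 0.1)
f = lambda X: 3 * X[:, 0] + X[:, 1]
print(np.round(sobol_first_order(f, 2), 2))
```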
Martinuzzo, M.; Barrera, L.; Altuna, D.; Baña, F. Tisi; Bieti, J.; Amigo, Q.; D’Adamo, M.; López, M.S.; Oyhamburu, J.; Otaso, J.C.
2016-01-01
Background Homozygous or double heterozygous factor XIII (FXIII) deficiency is characterized by soft tissue hematomas, intracranial bleeding and delayed spontaneous bleeding. Alterations of thromboelastography (TEG) parameters in these patients have been reported. The aim of the study was to report the results of TEG and of TEG lysis (Lys60) induced by subthreshold concentrations of streptokinase (SK), and to compare them with the clot solubility results in samples from a 1-year-old girl with homozygous or double heterozygous FXIII deficiency. Case A 1-year-old girl with a history of bleeding from the umbilical cord. During her first year of life, several hematomas appeared in the soft tissue of the upper limbs after punctures for vaccination, along with a gluteal hematoma. One additional sample from a heterozygous patient and three samples of acquired FXIII deficiency were also evaluated. Materials and Methods Clotting tests, von Willebrand factor (vWF) antigen and activity, and plasma FXIII-A subunit (pFXIII-A) were measured by an immunoturbidimetric assay in a photo-optical coagulometer. Solubility tests were performed with Ca2+-5 M urea and thrombin-2% acetic acid. Basal and post-FXIII concentrate infusion samples were studied. TEG was performed with CaCl2 or CaCl2 + SK (3.2 U/mL) in a thromboelastograph. Results Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time, fibrinogen, factor VIIIc, vWF, and platelet aggregation were normal. Antigenic pFXIII-A subunit was < 2%. TEG, evaluated at diagnosis and after FXIII concentrate infusion (pFXIII-A = 37%), presented a normal reaction time (R) of 8 min, a prolonged k (14 and 11 min, respectively), a low maximum amplitude (MA) (39 and 52 mm, respectively), and slightly increased clot lysis (Lys60) (23 and 30%, respectively). In the sample at diagnosis, clot solubility was abnormal, 50 and 45 min with Ca-urea and thrombin-acetic acid, respectively, but normal (>16 hours) 1 day post-FXIII infusion. Analysis of FXIII deficient and normal
Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-10-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems that have multiple, often conflicting, objectives arising in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve. Its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
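The notion of non-domination at the heart of ɛ-NSGAII can be illustrated with a minimal Pareto filter (a generic sketch for minimization objectives, not the MOBIDIC calibration code; the objective values are hypothetical):

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical (discharge error, water-balance error) pairs for 4 candidates
objs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(objs))  # (3.0, 4.0) drops out: dominated by (2.0, 3.0)
```

Unlike SOO, which collapses the objectives into one scalar, this filter keeps the full trade-off surface.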
Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator
NASA Astrophysics Data System (ADS)
Rehman, Naveed Ur; Siddiqui, Mubashir Ali
2017-01-01
In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
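Latin hypercube sampling, as used here to generate the regression sample points, can be sketched in pure Python: each variable's range is split into n equal strata and each stratum is sampled exactly once, in a random order per dimension (a generic illustration, not the authors' implementation):

```python
import random

def latin_hypercube(n, k, seed=42):
    """n points in [0, 1)^k: each dimension is cut into n equal strata
    and every stratum is used exactly once per dimension."""
    rng = random.Random(seed)
    cols = []
    for _ in range(k):
        strata = list(range(n))
        rng.shuffle(strata)                       # random stratum order
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(5, 2)
for p in pts:
    print(tuple(round(c, 3) for c in p))
```

Samples drawn this way cover each marginal distribution far more evenly than plain Monte Carlo for the same budget; mapping the unit cube to the physical parameter distributions is then a per-variable inverse-CDF transform.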
Nonparametric Bounds and Sensitivity Analysis of Treatment Effects
Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.
2015-01-01
This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
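The bounds of approach (i) can be illustrated for the simplest case, a binary outcome with no assumptions about the unobserved counterfactuals (the classic worst-case construction; the input numbers are hypothetical):

```python
def manski_bounds(y1_mean, y0_mean, p_treated):
    """Worst-case bounds on the average treatment effect for a binary
    outcome: each missing counterfactual is imputed as all-0 or all-1."""
    p = p_treated
    ey1_lo, ey1_hi = p * y1_mean, p * y1_mean + (1 - p)        # bounds on E[Y(1)]
    ey0_lo, ey0_hi = (1 - p) * y0_mean, (1 - p) * y0_mean + p  # bounds on E[Y(0)]
    return ey1_lo - ey0_hi, ey1_hi - ey0_lo

# Hypothetical data: 70% success among treated (half the sample), 40% in controls
lo, hi = manski_bounds(0.7, 0.4, 0.5)
print(lo, hi)  # the interval always has width 1 without further assumptions
```

The width-1 interval shows why untestable assumptions (approach (ii)) are attractive: they shrink the identified set, at the price of requiring a sensitivity analysis over those assumptions.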
Sensitivity analysis of dynamic biological systems with time-delays
2010-01-01
Background Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, whether by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Conclusions By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to do dynamic sensitivity analysis on complex
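The automatic-differentiation idea embedded in the extended algorithm can be illustrated with forward-mode dual numbers, which propagate exact derivatives through ordinary arithmetic without symbolic manipulation (a minimal sketch, not the authors' implementation):

```python
class Dual:
    """Forward-mode automatic differentiation value: (val, d val / d x)."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):  # product rule carried alongside the value
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def f(x):  # stand-in for one entry of a model's right-hand side
    return 3 * x * x * x + 2 * x

out = f(Dual(2.0, 1.0))   # seed derivative dx/dx = 1
print(out.val, out.der)   # 28.0 38.0  (f' = 9x^2 + 2 at x = 2)
```

Evaluating each right-hand-side entry on dual inputs yields one Jacobian column per seeded variable, exactly and to machine precision, which is the property the abstract exploits.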
Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures
2005-03-01
[Report cover-page and figure residue; recoverable details: Final Report, "Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures", prepared for the Office of Naval Research, 800 North Quincy Street, Arlington; reporting period ending 3/31/05. The surviving abstract fragment concerns sensitivity with respect to the nominal alloy composition at the center of the weld surface (Point 6 of Figure 7).]
Sensitivity analysis of the fission gas behavior model in BISON.
Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard
2013-05-01
This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.
Preliminary sensitivity analysis of the Devonian shale in Ohio
Covatch, G.L.
1985-06-01
A preliminary sensitivity analysis of gas reserves in Devonian shale in Ohio was made on the six partitioned areas, based on a payout time of 3 years. Data sets were obtained from Lewin and Associates for the six partitioned areas in Ohio and used as a base case for the METC sensitivity analysis. A total of five different well stimulation techniques were evaluated in both the METC and Lewin studies. The five techniques evaluated were borehole shooting, a small radial stimulation, a large radial stimulation, a small vertical fracture, and a large vertical fracture.
Stable locality sensitive discriminant analysis for image recognition.
Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang
2014-06-01
Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of the data, resulting in an unstable representation of the intra-class geometrical structure and degraded algorithm performance. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of the data and then integrates it into the objective function of LSDA. Experimental results on five databases show the effectiveness of the proposed approach.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin
2016-04-01
Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
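The variogram analogy underlying VARS can be sketched for a one-dimensional response surface: γ(h) grows with the lag h, and its behaviour at small and large h carries derivative-like and variance-like sensitivity information, respectively (a toy illustration, not the STAR-VARS algorithm):

```python
import math

def variogram(f, hs, n=1000):
    """gamma(h) = 0.5 * mean[(f(x + h) - f(x))^2] over x in [0, 1 - h]."""
    out = []
    for h in hs:
        xs = [i * (1 - h) / (n - 1) for i in range(n)]
        out.append(0.5 * sum((f(x + h) - f(x)) ** 2 for x in xs) / n)
    return out

# The variogram of a wiggly response rises with the lag h; a direction
# along which the model is more sensitive would show a steeper rise.
g = variogram(lambda x: math.sin(6 * x), [0.05, 0.1, 0.2])
print([round(v, 3) for v in g])
```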
Sensitivity Analysis for Coupled Aero-structural Systems
NASA Technical Reports Server (NTRS)
Giunta, Anthony A.
1999-01-01
A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
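The brute-force baseline against which the GSE/modal approach is compared, central finite differencing, costs two full analyses per design variable; a generic sketch (the response function here is a simple stand-in, not an aerodynamic model):

```python
def fd_gradient(f, x, h=1e-6):
    """Central differences: (f(x+h) - f(x-h)) / 2h per design variable,
    i.e. two full model evaluations per gradient component, which is
    what makes brute-force sensitivities expensive."""
    grad = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

# Stand-in response for, e.g., a force coefficient vs. two design variables
resp = lambda v: v[0] ** 2 + 3 * v[0] * v[1]
print(fd_gradient(resp, [1.0, 2.0]))  # ~[8.0, 3.0]
```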
NASA Astrophysics Data System (ADS)
Pratiwi, D. D.; Nurosyid, F.; Supriyanto, A.; Suryana, R.
2017-02-01
This article reports a combination of anthocyanin and synthetic dyes for dye-sensitized solar cell (DSSC) applications. The study aimed to improve the performance of DSSCs by adding a synthetic dye to an anthocyanin dye. The anthocyanin dye was extracted from red cabbage and the synthetic dye was N719. We prepared the dyes at two different volumes: anthocyanin dye at 10 mL, and a combination of anthocyanin and synthetic dyes at 8 mL : 2 mL. The DSSCs were assembled in a sandwich structure on fluorine-doped tin oxide (FTO) substrates using a TiO2 electrode, a carbon electrode, the anthocyanin and synthetic dyes, and a redox electrolyte. The absorption range of the red cabbage anthocyanin dye was 450-580 nm; combining the anthocyanin and synthetic dyes increased only the absorbance peak. The IPCE characteristics of the red cabbage anthocyanin dye and the combination dyes yielded quantum efficiencies of 0.081% and 0.092%, respectively, at a maximum wavelength of about 430 nm. The DSSC with the anthocyanin dye achieved a conversion efficiency of 0.024%, while the DSSC with the combination dyes achieved 0.054%; adding the synthetic dye to the anthocyanin dye thus enhanced the conversion efficiency by up to 125%.
NASA Astrophysics Data System (ADS)
Jiao, Xingmin; Jin, Wei; Yang, Xiaoqing
2015-05-01
Permittivity measurement of materials is important in microwave chemistry, microwave material processing and microwave heating. The open-ended coaxial line method is one of the most popular and effective means of permittivity measurement. However, the conventional coaxial probe has difficulty distinguishing small permittivity variations in low-loss media. In this paper, an additional S-shaped structure is proposed to improve the sensitivity of a coaxial probe for permittivity determination of low-loss materials at 2.45 GHz. Small permittivity variations can be distinguished thanks to the field enhancement generated by the additional S-shaped structure. We studied the variation of the reflection coefficient amplitude for three kinds of samples with different moisture contents at different probe insertion depths. We find that the conventional coaxial probe cannot distinguish small permittivity variations until the moisture content of the material reaches 3%, whereas the probe with the S-shaped structure can detect such variations when the moisture content changes by only 1%. The experimental results demonstrate that the proposed probe is reliable and feasible.
Additional analysis of dendrochemical data of Fallon, Nevada.
Sheppard, Paul R; Helsel, Dennis R; Speakman, Robert J; Ridenour, Gary; Witten, Mark L
2012-04-05
Previously reported dendrochemical data showed temporal variability in concentration of tungsten (W) and cobalt (Co) in tree rings of Fallon, Nevada, US. Criticism of this work questioned the use of the Mann-Whitney test for determining change in element concentrations. Here, we demonstrate that Mann-Whitney is appropriate for comparing background element concentrations to possibly elevated concentrations in environmental media. Given that Mann-Whitney tests for differences in shapes of distributions, inter-tree variability (e.g., "coefficient of median variation") was calculated for each measured element across trees within subsites and time periods. For W and Co, the metals of highest interest in Fallon, inter-tree variability was always higher within versus outside of Fallon. For calibration purposes, this entire analysis was repeated at a different town, Sweet Home, Oregon, which has a known tungsten-powder facility, and inter-tree variability of W in tree rings confirmed the establishment date of that facility. Mann-Whitney testing of simulated data also confirmed its appropriateness for analysis of data affected by point-source contamination. This research adds important new dimensions to dendrochemistry of point-source contamination by adding analysis of inter-tree variability to analysis of central tendency. Fallon remains distinctive by a temporal increase in W beginning by the mid 1990s and by elevated Co since at least the early 1990s, as well as by high inter-tree variability for W and Co relative to comparison towns.
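A Mann-Whitney U statistic and one plausible reading of the "coefficient of median variation" (median absolute deviation scaled by the median; the paper's exact definition may differ) can be computed in pure Python. The tree-ring concentrations below are hypothetical:

```python
def mann_whitney_u(a, b):
    """U statistic: count of (a_i, b_j) pairs with a_i > b_j (+0.5 per tie).
    Because it works on all pairs of ranks, it compares the shapes of the
    two distributions, not just their means."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in a for y in b)

def median(v):
    s = sorted(v)
    m = len(s) // 2
    return s[m] if len(s) % 2 else 0.5 * (s[m - 1] + s[m])

def coeff_median_variation(v):
    """Median absolute deviation from the median, scaled by the median."""
    med = median(v)
    return median([abs(x - med) for x in v]) / med

background = [1.0, 1.2, 0.9, 1.1, 1.0]   # hypothetical W, comparison town
fallon     = [2.0, 3.5, 1.8, 6.0, 2.2]   # hypothetical W, Fallon
print(mann_whitney_u(fallon, background))       # 25.0: all 25 pairs higher
print(round(coeff_median_variation(fallon), 3)) # inter-tree variability
```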
A Global Sensitivity Analysis Methodology for Multi-physics Applications
Tong, C H; Graziani, F R
2007-02-02
Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE
NASA Technical Reports Server (NTRS)
Bittker, D. A.
1994-01-01
LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis
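The two core tasks described above, integrating the kinetics ODEs and probing parameter sensitivity, can be illustrated on a toy two-step mechanism A -> B -> C. This is a generic sketch, not LSENS's input format or numerics:

```python
def rhs(y, k1, k2):
    """Rate equations for A -> B -> C with rate constants k1, k2."""
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def integrate(k1, k2, y0=(1.0, 0.0, 0.0), t_end=2.0, n=2000):
    """Classical fixed-step RK4 integration of the mechanism."""
    h = t_end / n
    y = list(y0)
    for _ in range(n):
        ka = rhs(y, k1, k2)
        kb = rhs([y[i] + 0.5 * h * ka[i] for i in range(3)], k1, k2)
        kc = rhs([y[i] + 0.5 * h * kb[i] for i in range(3)], k1, k2)
        kd = rhs([y[i] + h * kc[i] for i in range(3)], k1, k2)
        y = [y[i] + h / 6.0 * (ka[i] + 2 * kb[i] + 2 * kc[i] + kd[i])
             for i in range(3)]
    return y

k1, k2, eps = 1.0, 0.5, 1e-6
c0 = integrate(k1, k2)[2]
c1 = integrate(k1 + eps, k2)[2]
sens = (c1 - c0) / eps        # brute-force sensitivity d[C]/dk1
```

The finite-difference sensitivity shown here is the "brute force" baseline; codes like LSENS compute such derivatives systematically and far more efficiently.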
Analysis of fluorine addition to the vanguard first stage
NASA Technical Reports Server (NTRS)
Tomazic, William A; Schmidt, Harold W; Tischler, Adelbert O
1957-01-01
The effect of adding fluorine to the Vanguard first-stage oxidant was analyzed. An increase in specific impulse of 5.74 percent may be obtained with 30 percent fluorine. This increase, coupled with the increased mass ratio due to greater oxidant density, gave up to a 24.6-percent increase in first-stage burnout energy with 30 percent fluorine added. However, a change in tank configuration is required to accommodate the higher oxidant-fuel ratio necessary for peak specific impulse with fluorine addition.
Porosity Measurements and Analysis for Metal Additive Manufacturing Process Control
Slotwinski, John A; Garboczi, Edward J; Hebenstreit, Keith M
2014-01-01
Additive manufacturing techniques can produce complex, high-value metal parts, with potential applications as critical metal components such as those found in aerospace engines and as customized biomedical implants. Material porosity in these parts is undesirable for aerospace parts - since porosity could lead to premature failure - and desirable for some biomedical implants - since surface-breaking pores allow for better integration with biological tissue. Changes in a part’s porosity during an additive manufacturing build may also be an indication of an undesired change in the build process. Here, we present efforts to develop an ultrasonic sensor for monitoring changes in the porosity in metal parts during fabrication on a metal powder bed fusion system. The development of well-characterized reference samples, measurements of the porosity of these samples with multiple techniques, and correlation of ultrasonic measurements with the degree of porosity are presented. A proposed sensor design, measurement strategy, and future experimental plans on a metal powder bed fusion system are also presented. PMID:26601041
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
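A common way to obtain such sensitivity rankings is the normalized forward sensitivity index, S_p = (p / R0) * dR0/dp, estimated here by central differences. The R0 expression below is a generic vector-host form chosen for illustration; it is not the paper's Lassa model:

```python
import math

def r0(p):
    # generic vector-host basic reproduction number (hypothetical form)
    return math.sqrt(p["beta_hv"] * p["beta_vh"] * p["pi"]
                     / (p["mu_v"] * (p["gamma"] + p["mu_h"]) * p["mu_h"]))

params = {"beta_hv": 0.3, "beta_vh": 0.2, "pi": 0.05,
          "gamma": 0.1, "mu_h": 0.02, "mu_v": 0.08}

def sensitivity_index(name, h=1e-6):
    """S_p = (p / R0) * dR0/dp via central differences."""
    hi = dict(params); hi[name] *= (1 + h)
    lo = dict(params); lo[name] *= (1 - h)
    d = (r0(hi) - r0(lo)) / (2 * h * params[name])
    return params[name] / r0(params) * d

indices = {k: sensitivity_index(k) for k in params}
```

Because R0 scales as the square root of each transmission parameter here, those indices come out at exactly +0.5, while the mortality parameters carry negative indices; the sign tells you which direction a control measure pushes R0.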
Computational methods for efficient structural reliability and reliability sensitivity analysis
NASA Technical Reports Server (NTRS)
Wu, Y.-T.
1993-01-01
This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
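The core idea of importance sampling for small failure probabilities can be shown without the adaptive machinery. The sketch below re-centers the sampling density on a known failure boundary, a simplification of AIS with a made-up limit state g(x) = 3 - x for a standard normal x:

```python
import math, random

random.seed(1)

def phi(x, mu=0.0):
    """Normal density with unit variance centered at mu."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

N, mu_shift = 50000, 3.0
total = 0.0
for _ in range(N):
    x = random.gauss(mu_shift, 1.0)          # sampling density centered at x = 3
    if x > 3.0:                              # failure domain g(x) < 0
        total += phi(x) / phi(x, mu_shift)   # importance weight
pf = total / N                               # estimates P(X > 3) ~ 1.35e-3
```

Plain Monte Carlo would waste almost all of its samples in the safe region; shifting the density toward the failure domain is what makes the estimate cheap, and the adaptive part of AIS is about finding that shift automatically.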
Sensitivity analysis applied to stalled airfoil wake and steady control
NASA Astrophysics Data System (ADS)
Patino, Gustavo; Gioria, Rafael; Meneghini, Julio
2014-11-01
The sensitivity of an eigenvalue to base flow modifications induced by an external force is applied to the global unstable modes associated with the onset of vortex shedding in the wake of a stalled airfoil. In this work, the flow regime is close to the first instability of the system and its associated eigenvalue/eigenmode is determined. The sensitivity analysis to a general punctual external force allows establishing the regions where control devices must be placed in order to stabilize the global modes. Different types of steady control devices, passive and active, are used in the regions predicted by the sensitivity analysis to verify vortex shedding suppression, i.e. that the primary instability bifurcation is delayed. The new eigenvalue, modified by the action of the device, is also calculated. Finally, the spectral finite element method is employed to determine flow characteristics before and after the bifurcation in order to cross-check the results.
Uncertainty and sensitivity analysis and its applications in OCD measurements
NASA Astrophysics Data System (ADS)
Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio
2009-03-01
This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurements uncertainties is shown. The assessment, in an early stage, of the uncertainties, sensitivities and correlations of the parameters to be measured drives the user in the optimization of the OCD measurement strategy. Real examples are discussed revealing common pitfalls like hidden correlations and simulation results are compared with real measurements. Special emphasis is given to 2 different cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis result, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
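One plausible implementation of this tolerance notation is a single regex pass over the input file; the pattern and the uniform-perturbation choice below are assumptions for illustration, not the paper's exact code:

```python
import random, re

random.seed(2)
TOL = re.compile(r"([-+]?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def blur(text):
    """Replace every 'v +/- t' field with a random draw in [v - t, v + t]."""
    def draw(m):
        v, t = float(m.group(1)), float(m.group(2))
        return repr(random.uniform(v - t, v + t))
    return TOL.sub(draw, text)

deck = "wall_temp = 5.25 +/- 0.01\nemissivity = 0.89\n"
blurred = blur(deck)   # fields without a tolerance pass through untouched
```

Because the substitution keys on the tolerance pattern rather than the file's structure, the same pass works on any code's input deck, which is the portability argument the abstract makes.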
Risk analysis of sulfites used as food additives in China.
Zhang, Jian Bo; Zhang, Hong; Wang, Hua Li; Zhang, Ji Yue; Luo, Peng Jie; Zhu, Lei; Wang, Zhu Tian
2014-02-01
This study analyzed the risk of sulfites in food consumed by the Chinese population and assessed the health protection capability of the maximum-permitted level (MPL) of sulfites in GB 2760-2011. Sulfites as food additives are overused or abused in many food categories. When the MPL in GB 2760-2011 was used as the sulfite content in food, the intake of sulfites in most surveyed populations was lower than the acceptable daily intake (ADI). Excess intake of sulfites was found in all surveyed groups when a high percentile of sulfite content in food was used. Moreover, children aged 1-6 years are at high risk of excess sulfite intake. The primary cause of excess sulfite intake in the Chinese population is the overuse and abuse of sulfites by the food industry. The current MPL of sulfites in GB 2760-2011 protects the health of most populations.
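The screening arithmetic behind such an assessment is straightforward: dietary intake is the sum of food consumption times permitted sulfite level, divided by body weight, compared against the JECFA ADI of 0.7 mg (as SO2) per kg body weight per day. All consumption and MPL figures below are invented for illustration:

```python
ADI = 0.7                                   # mg/kg bw/day (as SO2), JECFA
body_weight = 19.0                          # kg, young child (assumed)

# (food category, g/day consumed, assumed MPL in mg SO2 per kg food)
diet = [("dried fruit", 20.0, 100.0),
        ("starch products", 150.0, 30.0),
        ("preserved beverages", 0.0, 250.0)]

# intake = sum(consumption_kg * concentration) / body weight
intake = sum(g / 1000.0 * mpl for _, g, mpl in diet) / body_weight
exceeds = intake > ADI
```

Swapping the MPL column for high-percentile measured concentrations is what turns this from a regulatory-limit screen into the exposure estimate the abstract describes.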
Disclosure of hydraulic fracturing fluid chemical additives: analysis of regulations.
Maule, Alexis L; Makey, Colleen M; Benson, Eugene B; Burrows, Isaac J; Scammell, Madeleine K
2013-01-01
Hydraulic fracturing is used to extract natural gas from shale formations. The process involves injecting into the ground fracturing fluids that contain thousands of gallons of chemical additives. Companies are not mandated by federal regulations to disclose the identities or quantities of chemicals used during hydraulic fracturing operations on private or public lands. States have begun to regulate hydraulic fracturing fluids by mandating chemical disclosure. These laws have shortcomings including nondisclosure of proprietary or "trade secret" mixtures, insufficient penalties for reporting inaccurate or incomplete information, and timelines that allow for after-the-fact reporting. These limitations leave lawmakers, regulators, public safety officers, and the public uninformed and ill-prepared to anticipate and respond to possible environmental and human health hazards associated with hydraulic fracturing fluids. We explore hydraulic fracturing exemptions from federal regulations, as well as current and future efforts to mandate chemical disclosure at the federal and state level.
Beyond the GUM: variance-based sensitivity analysis in metrology
NASA Astrophysics Data System (ADS)
Lira, I.
2016-07-01
Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
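The linear-model claim can be checked numerically: for y = 2*x1 + 3*x2 with independent inputs, the squared terms (c_i*u_i)^2 of the law of propagation of uncertainty already give the variance shares that a first-order Sobol analysis would report. The model and uncertainty values below are illustrative:

```python
import random

random.seed(3)
c = (2.0, 3.0)                        # sensitivity coefficients dy/dx_i
u = (0.1, 0.05)                       # standard uncertainties of x1, x2

# GUM law of propagation of uncertainty: u_y^2 = sum (c_i * u_i)^2
lpu_terms = [(ci * ui) ** 2 for ci, ui in zip(c, u)]
u_y2 = sum(lpu_terms)
gum_shares = [t / u_y2 for t in lpu_terms]   # identical to Sobol S_i here

# Monte Carlo check of the total output variance
N = 100000
ys = []
for _ in range(N):
    x1, x2 = random.gauss(0, u[0]), random.gauss(0, u[1])
    ys.append(c[0] * x1 + c[1] * x2)
mean = sum(ys) / N
mc_var = sum((y - mean) ** 2 for y in ys) / N
```

For a nonlinear model the two calculations diverge, and that gap is exactly where the article argues variance-based analysis earns its keep.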
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
Sensitivity analysis of the Ohio phosphorus risk index
Technology Transfer Automated Retrieval System (TEKTRAN)
The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...
Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit
NASA Astrophysics Data System (ADS)
Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie
2015-09-01
Previous sensitivity analysis studies are of limited accuracy and reference value because their mathematical models are relatively simple, changes in the load and in the initial displacement of the piston are ignored, and experimental verification is not conducted. Therefore, in view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the closed-loop position control of the hydraulic drive unit, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structural parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, based on comparison of the experimental and simulated step-response curves under different constant loads. Then, the sensitivity function time-history curves of seventeen parameters are obtained, based on the state-vector time-history curves of the step response. The maximum displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown in histograms under different working conditions, and the change rules are analyzed. Then the sensitivity
Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly
NASA Astrophysics Data System (ADS)
Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.
2014-04-01
We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.
Sensitive Chiral Analysis via Microwave Three-Wave Mixing
NASA Astrophysics Data System (ADS)
Patterson, David; Doyle, John M.
2013-07-01
We demonstrate chirality-induced three-wave mixing in the microwave regime, using rotational transitions in cold gas-phase samples of 1,2-propanediol and 1,3-butanediol. We show that bulk three-wave mixing, which can only be realized in a chiral environment, provides a sensitive, species-selective probe of enantiomeric excess and is applicable to a broad class of molecules. The doubly resonant condition provides simultaneous identification of species and of handedness, which should allow sensitive chiral analysis even within a complex mixture.
Rethinking Sensitivity Analysis of Nuclear Simulations with Topology
Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci
2016-01-01
In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.
Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil
NASA Astrophysics Data System (ADS)
Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris
2015-11-01
Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics
NASA Technical Reports Server (NTRS)
Baysal, Oktay; Eleshaky, Mohamed E.
1991-01-01
A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure, which uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
Additional challenges for uncertainty analysis in river engineering
NASA Astrophysics Data System (ADS)
Berends, Koen; Warmink, Jord; Hulscher, Suzanne
2016-04-01
the proposed intervention. The implicit assumption underlying such analysis is that both models are commensurable. We hypothesize that they are commensurable only to a certain extent. In an idealised study we have demonstrated that prediction performance loss should be expected with increasingly large engineering works. When accounting for parametric uncertainty of floodplain roughness in model identification, we see uncertainty bounds for predicted effects of interventions increase with increasing intervention scale. Calibration of these types of models therefore seems to have a shelf-life, beyond which calibration no longer improves prediction. Therefore a qualification scheme for model use is required that can be linked to model validity. In this study, we characterize model use along three dimensions: extrapolation (using the model with different external drivers), extension (using the model for different output or indicators) and modification (using modified models). Such use of models is expected to have implications for the applicability of surrogate modelling for efficient uncertainty analysis as well, which is recommended for future research. Warmink, J. J.; Straatsma, M. W.; Huthoff, F.; Booij, M. J. & Hulscher, S. J. M. H. 2013. Uncertainty of design water levels due to combined bed form and vegetation roughness in the Dutch river Waal. Journal of Flood Risk Management 6, 302-318. DOI: 10.1111/jfr3.12014
Bayesian sensitivity analysis of incomplete data: bridging pattern-mixture and selection models.
Kaciroti, Niko A; Raghunathan, Trivellore
2014-11-30
Pattern-mixture models (PMM) and selection models (SM) are alternative approaches for statistical analysis when faced with incomplete data and a nonignorable missing-data mechanism. Both models make empirically unverifiable assumptions and need additional constraints to identify the parameters. Here, we first introduce intuitive parameterizations to identify PMM for different types of outcome with distribution in the exponential family; then we translate these to their equivalent SM approach. This provides a unified framework for performing sensitivity analysis under either setting. These new parameterizations are transparent, easy-to-use, and provide dual interpretation from both the PMM and SM perspectives. A Bayesian approach is used to perform sensitivity analysis, deriving inferences using informative prior distributions on the sensitivity parameters. These models can be fitted using software that implements Gibbs sampling.
Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety
Broadhead, B.L.; Childs, R.L.; Rearden, B.T.
1999-09-20
Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.
Kinetic analysis of microbial respiratory response to substrate addition
NASA Astrophysics Data System (ADS)
Blagodatskaya, Evgenia; Blagodatsky, Sergey; Yuyukina, Tatayna; Kuzyakov, Yakov
2010-05-01
The heterotrophic component of CO2 emitted from soil is mainly due to the respiratory activity of soil microorganisms. Field measurements of microbial respiration can be used to estimate the soil C budget, while laboratory estimation of respiration kinetics allows the elucidation of mechanisms of soil C sequestration. Physiological approaches based on (1) time-dependent or (2) substrate-dependent respiratory responses of soil microorganisms decomposing organic substrates make it possible to relate the functional properties of the soil microbial community to decomposition rates of soil organic matter. We used a novel methodology combining (i) microbial growth kinetics and (ii) enzyme affinity to the substrate to show the shift in functional properties of the soil microbial community after amendment with substrates of contrasting availability. We combined the application of 14C-labeled glucose as an easily available C source with natural isotope labeling of old and young SOM. The possible contribution of two processes, isotopic fractionation and preferential substrate utilization, to the shifts in δ13C during SOM decomposition after a C3-C4 vegetation change was evaluated. The specific growth rate (µ) of soil microorganisms was estimated by fitting the parameters of the equation v(t) = A + B * exp(µ*t) to the measured CO2 evolution rate v(t) after glucose addition, where A is the initial rate of non-growth respiration and B is the initial rate of the growing fraction of total respiration. The maximal mineralization rate (Vmax), substrate affinity of microbial enzymes (Ks) and substrate availability (Sn) were determined from Michaelis-Menten kinetics. To study the effect of plant-originated C on the δ13C signature of SOM, we compared the changes in isotopic composition of different C pools in a C3 soil under grassland with a C3-C4 soil where the C4 plant Miscanthus giganteus was grown for 12 years after grassland. The shift in δ13C caused by planting of M. giganteus
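Fitting v(t) = A + B*exp(µ*t) reduces to a one-dimensional search, because for any fixed µ the parameters A and B follow from linear least squares. The sketch below demonstrates this on synthetic, noise-free data, not the study's measurements:

```python
import math

t_obs = [0, 2, 4, 6, 8, 10, 12]                          # hours, hypothetical
v_obs = [1.0 + 0.2 * math.exp(0.25 * t) for t in t_obs]  # made-up CO2 rates

def fit(mu):
    """Solve for A, B by linear least squares at fixed mu; return (sse, A, B)."""
    e = [math.exp(mu * t) for t in t_obs]
    n = len(t_obs)
    se, see, sv, sev = sum(e), sum(x * x for x in e), sum(v_obs), \
        sum(x * v for x, v in zip(e, v_obs))
    det = n * see - se * se                  # normal-equations determinant
    a = (see * sv - se * sev) / det
    b = (n * sev - se * sv) / det
    sse = sum((a + b * x - v) ** 2 for x, v in zip(e, v_obs))
    return sse, a, b

# coarse grid search over mu only
best = min((fit(m / 1000.0) + (m / 1000.0,) for m in range(1, 1000)),
           key=lambda r: r[0])
sse, A, B, mu = best
```

With noise-free data the search recovers µ = 0.25, A = 1.0 and B = 0.2 exactly; with real respiration curves one would use a proper nonlinear least-squares routine, but the separable structure is the same.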
Sensitivity Analysis of the Static Aeroelastic Response of a Wing
NASA Technical Reports Server (NTRS)
Eldred, Lloyd B.
1993-01-01
A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
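The global-fit idea above, in miniature: represent a smooth one-dimensional "pressure distribution" (a made-up function, not the wing data) as a Chebyshev series, then evaluate it off the sample grid:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

x = np.linspace(-1.0, 1.0, 41)
p = np.exp(-3.0 * x ** 2) * (1.0 + 0.3 * x)   # hypothetical pressure curve

coef = C.chebfit(x, p, deg=12)                # global Chebyshev least-squares fit
x_fine = np.linspace(-1.0, 1.0, 400)
err = np.max(np.abs(C.chebval(x_fine, coef)
                    - np.exp(-3.0 * x_fine ** 2) * (1.0 + 0.3 * x_fine)))
```

For a smooth curve like this the coefficients decay quickly and a modest degree suffices; a sharp local feature (e.g. a shock) would force a high degree and global oscillations, which is exactly the difficulty that pushed the author toward panel-based local interpolation.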
Shape sensitivity analysis of flutter response of a laminated wing
NASA Technical Reports Server (NTRS)
Bergen, Fred D.; Kapania, Rakesh K.
1988-01-01
A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania
Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh
2016-01-01
This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation matrix method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with the control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of exposed humans, infectious humans and the vector population becoming infected. Numerical simulations are carried out with the help of the fourth-order Runge-Kutta procedure. PMID:27505634
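The normalized forward sensitivity index used in this kind of analysis is S_p = (p/R0)·∂R0/∂p. A minimal sketch for a hypothetical reproduction number R0 = β/(μ + γ); the expression and parameter values are illustrative, not the paper's model.

```python
# Normalized forward sensitivity index S_p = (p / R0) * dR0/dp,
# estimated by a forward finite difference. All numbers are illustrative.
beta, mu, gamma = 0.3, 0.02, 0.1

def R0(beta, mu, gamma):
    return beta / (mu + gamma)

def sensitivity_index(f, params, name, h=1e-7):
    base = f(**params)
    bumped = dict(params, **{name: params[name] + h})
    return (params[name] / base) * (f(**bumped) - base) / h

params = {"beta": beta, "mu": mu, "gamma": gamma}
for name in params:
    print(name, round(sensitivity_index(R0, params, name), 3))
# beta has index +1 (a 1% rise in beta raises R0 by 1%);
# gamma's index is -gamma/(mu + gamma) = -0.833 for these values.
```

An index near ±1 flags a parameter as a prime target for control strategies, which is exactly how such indices guide the interventions described above.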
Sensitivity analysis techniques for models of human behavior.
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
NASA Astrophysics Data System (ADS)
Wang, Qiqi; Hu, Rui; Blonigan, Patrick
2014-06-01
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
Adaptive approach for nonlinear sensitivity analysis of reaction kinetics.
Horenko, Illia; Lorenz, Sönke; Schütte, Christof; Huisinga, Wilhelm
2005-07-15
We present a unified approach for linear and nonlinear sensitivity analysis of models of reaction kinetics that are stated in terms of systems of ordinary differential equations (ODEs). The approach is based on the reformulation of the ODE problem as a density transport problem described by a Fokker-Planck equation. The resulting multidimensional partial differential equation is solved by extending the TRAIL algorithm originally introduced by Horenko and Weiser in the context of molecular dynamics (J. Comp. Chem. 2003, 24, 1921), and the method is discussed in comparison with Monte Carlo techniques. The extended TRAIL approach is fully adaptive and readily allows one to study the influence of nonlinear dynamical effects. We illustrate the scheme by applying it to an enzyme-substrate model problem for sensitivity analysis with respect to initial concentrations and parameter values.
Graphical methods for the sensitivity analysis in discriminant analysis
Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang
2015-09-30
Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest and the number of design points at which an approximation is sought.
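The Rayleigh-quotient style of approximate reanalysis can be sketched in the symmetric case: the modified design's lowest eigenvalue is estimated from the baseline eigenvector alone, with no derivatives and no re-solution. The 4×4 matrices below are synthetic stand-ins for a structural eigenproblem.

```python
import numpy as np

A0 = np.diag([1.0, 2.0, 3.0, 4.0])          # baseline design (symmetric)
dA = 0.01 * np.ones((4, 4))                  # symmetric design modification
A1 = A0 + dA                                 # modified design

w0, V0 = np.linalg.eigh(A0)
x0 = V0[:, 0]                                # lowest baseline eigenvector

lam_rq = x0 @ A1 @ x0 / (x0 @ x0)            # Rayleigh-quotient estimate
lam_exact = np.linalg.eigh(A1)[0][0]         # full reanalysis, for reference

print(abs(lam_rq - lam_exact) < abs(w0[0] - lam_exact))  # True
print(abs(lam_rq - lam_exact) < 1e-3)                    # True: error ~ O(||dA||^2)
```

The estimate is first-order accurate for free: its error is quadratic in the design change, whereas simply reusing the baseline eigenvalue leaves a first-order error.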
Sensitivity Analysis of Launch Vehicle Debris Risk Model
NASA Technical Reports Server (NTRS)
Gee, Ken; Lawrence, Scott L.
2010-01-01
As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
On the variational data assimilation problem solving and sensitivity analysis
NASA Astrophysics Data System (ADS)
Arcucci, Rossella; D'Amore, Luisa; Pistoia, Jenny; Toumi, Ralf; Murli, Almerico
2017-04-01
We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it arises when employed for the analysis of temperature and salinity variations in data collected in closed and semi-closed seas. We present a computing approach to solve the main computational kernel at the heart of the VarDA problem, which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide a sensitivity analysis of this approach and also study its performance in terms of the accuracy gain in the computed solution. We provide validations on two realistic oceanographic data sets.
Sensitivity of Forecast Skill to Different Objective Analysis Schemes
NASA Technical Reports Server (NTRS)
Baker, W. E.
1979-01-01
Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.
Sensitivity analysis of fine sediment models using heterogeneous data
NASA Astrophysics Data System (ADS)
Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.
2012-04-01
Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important for designing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the
Species sensitivity analysis of heavy metals to freshwater organisms.
Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou
2015-10-01
Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distribution (SSD) curves for vertebrates and invertebrates were constructed separately with a log-logistic model. Comprehensive comparisons of the sensitivities of species at different trophic levels to the six typical heavy metals were performed. The results indicated that invertebrate taxa exhibited higher sensitivity to each heavy metal than vertebrates. However, within the same taxa, Cu had the most adverse effect on vertebrates, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrates and fish were complicated, as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curves of Pb constructed from all species no longer crossed the others. The hazardous concentration for 5% of species (HC5) was derived to determine the concentration protecting 95% of species. The HC5 values of the six heavy metals were in the descending order Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in the opposite order. Moreover, potentially affected fractions were calculated to assess the ecological risks of the heavy metals at selected concentrations. Evaluation of the sensitivities of species at various trophic levels and toxicity analysis of heavy metals are necessary prior to the derivation of water quality criteria and further environmental protection.
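Once a log-logistic SSD is fitted, deriving HC5 reduces to inverting the distribution at the 5th percentile. A sketch, with alpha (median sensitivity) and beta (shape) as assumed illustrative values, not the paper's fitted parameters:

```python
# Log-logistic species sensitivity distribution:
#   F(c) = 1 / (1 + (c / alpha)**(-beta))
# HC5 solves F(c) = 0.05, i.e. c = alpha * (0.05 / 0.95)**(1 / beta).
def hc5(alpha, beta):
    return alpha * (0.05 / 0.95) ** (1.0 / beta)

def potentially_affected_fraction(conc, alpha, beta):
    return 1.0 / (1.0 + (conc / alpha) ** (-beta))

alpha, beta = 50.0, 2.0           # ug/L, hypothetical fitted parameters
c5 = hc5(alpha, beta)
print(round(c5, 2))               # 11.47: concentration protecting 95% of species
print(round(potentially_affected_fraction(c5, alpha, beta), 2))  # 0.05
```

The same CDF evaluated at an observed environmental concentration gives the potentially affected fraction used in the risk assessment above.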
Sensitivity-analysis techniques: self-teaching curriculum
Iman, R.L.; Conover, W.J.
1982-06-01
This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High-Level Waste Methodology Development Program.
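The Latin hypercube sampling idea behind the first part of the curriculum can be sketched in a few lines (this is the generic technique, not the SAND program itself): stratify each input into n equal-probability bins and draw exactly one sample per bin per dimension.

```python
import numpy as np

# Minimal Latin hypercube sampler over the unit hypercube [0, 1)^d.
def latin_hypercube(n, d, rng):
    u = rng.uniform(size=(n, d))                       # position within each stratum
    grid = np.array([rng.permutation(n) for _ in range(d)]).T  # stratum order per dim
    return (grid + u) / n

rng = np.random.default_rng(42)
X = latin_hypercube(10, 2, rng)

# LHS property: each of the 10 strata [k/10, (k+1)/10) contains exactly
# one sample in every dimension.
counts = [np.bincount((X[:, j] * 10).astype(int), minlength=10) for j in range(2)]
print(all((c == 1).all() for c in counts))  # True
```

Compared with plain Monte Carlo, this stratification guarantees full coverage of each input's range with the same number of model runs, which is why it is the workhorse of sampling-based sensitivity studies.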
An analytic method for sensitivity analysis of complex systems
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu
2017-03-01
Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in the inputs, and with identifying which inputs are important in contributing to the prediction imprecision. Determining the uncertainty in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression, which exactly evaluates the uncertainty in the output as a function of the output's derivatives and the inputs' central moments, is first derived for general multivariate models with a given relationship between output and inputs, in terms of a Taylor series expansion. A γ-order relative uncertainty for the output, denoted by Rvγ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation considering only the first-order contribution from the variance of the input variables can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. The analytic formula is applied to power-grid and economic systems, where the sensitivities of an actual power output model and the Economic Order Quantity model are analyzed. The importance of each input variable to the model output is quantified by the analytic formula.
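The first-order contribution the abstract refers to is the familiar linearized propagation formula Var[f(X)] ≈ Σᵢ (∂f/∂xᵢ)² Var[xᵢ] for independent inputs. A sketch comparing it against a Monte Carlo reference on a made-up two-input model with small input variances (the regime where, as stated above, the approximation is adequate):

```python
import numpy as np

# Illustrative nonlinear model; inputs and variances are made up.
def f(x):
    return x[0] * x[1] + x[1] ** 2

mu = np.array([2.0, 3.0])
sigma = np.array([0.01, 0.02])     # small input std devs -> linearization valid

# Gradient at the mean by central differences.
grad = np.zeros(2)
for i in range(2):
    e = np.zeros(2); e[i] = 1e-6
    grad[i] = (f(mu + e) - f(mu - e)) / 2e-6

var_lin = np.sum(grad ** 2 * sigma ** 2)   # first-order propagated variance

# Monte Carlo reference.
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(200_000, 2))
fs = samples[:, 0] * samples[:, 1] + samples[:, 1] ** 2
var_mc = fs.var()

print(abs(var_lin - var_mc) / var_mc < 0.05)  # linearization is adequate here
```

Enlarging `sigma` makes the higher-order (curvature) terms visible and the first-order estimate degrade, which is precisely the limitation the γ-order analysis quantifies.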
Optimizing human activity patterns using global sensitivity analysis
Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.
2013-12-10
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
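The SampEn statistic used above to quantify schedule regularity can be sketched directly from its definition: SampEn(m, r) = −ln(A/B), where B counts template pairs of length m within tolerance r (Chebyshev distance) and A counts pairs of length m+1. The series and tolerance below are illustrative (r is often taken as a fraction of the series' standard deviation).

```python
import numpy as np

# Sample entropy: lower values mean a more regular series.
def sampen(x, m=2, r=0.2):
    x = np.asarray(x, dtype=float)
    n = len(x)

    def count_pairs(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        c = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if np.max(np.abs(templates[i] - templates[j])) <= r:
                    c += 1
        return c

    b, a = count_pairs(m), count_pairs(m + 1)
    return -np.log(a / b)

periodic = np.tile([0.0, 1.0], 50)              # perfectly regular "schedule"
rng = np.random.default_rng(7)
noisy = rng.uniform(size=100)                   # irregular activity

print(sampen(periodic) < sampen(noisy))  # regular series scores lower
```

Tuning an activity's regularity, as described above, then amounts to adjusting schedule parameters until the resulting SampEn matches a target value.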
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables affecting the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, the high-order sensitivity indices were all smaller than 0.01, which indicates that the interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimizing the groundwater remediation process.
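First-order Sobol' indices of the kind computed above can be sketched with the standard pick-freeze (Saltelli-style) Monte Carlo estimator. The additive toy model below, with known analytic indices, stands in for the expensive multi-phase flow simulation; in practice the surrogate model would play the role of `f`.

```python
import numpy as np

# Pick-freeze estimator: S_i = Cov(f(A), f(AB_i)) / Var(f(A)), where AB_i
# takes column i from sample matrix A and all other columns from an
# independent matrix B.
def sobol_first_order(f, d, n, rng):
    A = rng.uniform(size=(n, d))
    B = rng.uniform(size=(n, d))
    fA, fB = f(A), f(B)
    var = fA.var()
    s = np.empty(d)
    for i in range(d):
        ABi = B.copy()
        ABi[:, i] = A[:, i]
        s[i] = np.mean(fA * (f(ABi) - fB)) / var
    return s

# Additive toy model with known analytic indices S_i = a_i^2 / sum(a_j^2).
a = np.array([4.0, 2.0, 1.0])
f = lambda X: X @ a
rng = np.random.default_rng(3)
S = sobol_first_order(f, 3, 200_000, rng)
print(S)  # approximately [0.762, 0.190, 0.048]
```

For an additive model the first-order indices sum to one and all interaction terms vanish, matching the near-zero high-order indices reported above.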
Hyperspectral data analysis procedures with reduced sensitivity to noise
NASA Technical Reports Server (NTRS)
Landgrebe, David A.
1993-01-01
Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus enabling the delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures that have reduced sensitivity to such effects. We discuss some of the fundamental principles that define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure, including an example analysis of a data set, is described, illustrating this effect.
Sensitivity analysis based preform die shape design using the finite element method
NASA Astrophysics Data System (ADS)
Zhao, G. Q.; Hufi, R.; Hutter, A.; Grandhi, R. V.
1997-06-01
This paper uses a finite element-based sensitivity analysis method to design the preform die shape for metal forming processes. The sensitivity analysis was developed using the rigid visco-plastic finite element method. The preform die shapes are represented by cubic B-spline curves. The control points or coefficients of the B-spline are used as the design variables. The optimization problem is to minimize the difference between the realized and the desired final forging shapes. The sensitivity analysis includes the sensitivities of the objective function, nodal coordinates, and nodal velocities with respect to the design variables. The remeshing procedure and the interpolation/transfer of the history-dependent parameters are considered. An adjustment of the volume loss resulting from the finite element analysis is used to make the workpiece volume consistent in each optimization iteration and improve the optimization convergence. In addition, a technique for dealing with fold-over defects during the forming simulation is employed in order to continue the optimization procedures of the preform die shape design. The method developed in this paper is used to design the preform die shape for both plane strain and axisymmetric deformations with shaped cavities. The analysis shows that satisfactory final forging shapes are obtained using the optimized preform die shapes.
Sensitivity Analysis of Hardwired Parameters in GALE Codes
Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.
2008-12-01
The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.
Multiplexed analysis of chromosome conformation at vastly improved sensitivity
Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.
2015-01-01
Since methods for analysing chromosome conformation in mammalian cells are either low-resolution or low-throughput and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method to produce a new approach, called next-generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209
Numerical Sensitivity Analysis of a Composite Impact Absorber
NASA Astrophysics Data System (ADS)
Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.
2008-08-01
This work deals with a numerical investigation of the energy absorbing capability of structural composite components. There are several difficulties associated with the numerical simulation of a composite impact absorber, such as high geometrical non-linearities, boundary contact conditions, failure criteria and material behaviour; all these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, one of the main objectives of any numerical investigation. The last aspect is a very important one for designers, in order to make the application of the model to real cases robust from both a physical and a numerical point of view. First, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was developed, followed by a sensitivity analysis with respect to variations of the main geometrical and material parameters, using explicit finite element algorithms implemented in the LS-DYNA code.
Sensitivity Analysis of a TPB Degradation Rate Model
Crawford, C.; Edwards, T.; Wilmarth, B.
2006-08-01
A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values for the influential factors was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. Thus, these factor settings are considered those that yield the "worst-case" scenario for the TPB degradation rate for Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.
NASA Astrophysics Data System (ADS)
Shirvani-Mahdavi, Hamidreza; Shafiee, Parisa
2016-12-01
Matrix mismatching in the quantitative analysis of materials through calibration-based laser-induced breakdown spectroscopy (LIBS) is a serious problem. In this paper, to overcome matrix mismatching, two distinct approaches named addition standardization (AS) and addition-internal combinatorial standardization (A-ICS) are demonstrated for LIBS experiments. Furthermore, in order to examine the efficiency of these methods, the concentration of calcium in ordinary garden soil without any fertilizer is individually measured by each of the two procedures. To achieve this purpose, ten standard samples with different concentrations of calcium (as the analyte) and copper (as the internal standard) are prepared in the form of cylindrical tablets, such that the soil plays the role of the matrix in all of them. The measurements indicate that the relative error of concentration compared to a certified value derived by inductively coupled plasma optical emission spectroscopy is 3.97% and 2.23% for the AS and A-ICS methods, respectively. Furthermore, calculations related to the standard deviation indicate that the A-ICS method may be more accurate than the AS one.
Simulation of the global contrail radiative forcing: A sensitivity analysis
NASA Astrophysics Data System (ADS)
Yi, Bingqi; Yang, Ping; Liou, Kuo-Nan; Minnis, Patrick; Penner, Joyce E.
2012-12-01
The contrail radiative forcing induced by human aviation activity is one of the most uncertain contributions to climate forcing. An accurate estimation of global contrail radiative forcing is imperative, and the modeling approach is an effective and prominent method to investigate the sensitivity of contrail forcing to various potential factors. We use a simple offline model framework that is particularly useful for sensitivity studies. The most up-to-date Community Atmosphere Model version 5 (CAM5) is employed to simulate the atmosphere and cloud conditions during the year 2006. With updated natural cirrus and additional contrail optical property parameterizations, the RRTMG Model (RRTM-GCM application) is used to simulate the global contrail radiative forcing. Global contrail coverage and optical depth derived from the literature for the year 2002 are used. The 2006 global annual averaged contrail net (shortwave + longwave) radiative forcing is estimated to be 11.3 mW m⁻². Regional contrail radiative forcing over dense air traffic areas can be more than ten times stronger than the global average. A series of sensitivity tests are implemented and show that contrail particle effective size, contrail layer height, the model cloud overlap assumption, and contrail optical properties are among the most important factors. The difference between the contrail forcing under all and clear skies is also shown.
NASA Astrophysics Data System (ADS)
Al Okab, Riyad Ahmed
2013-02-01
Green analytical methods using Cisapride (CPE) as a green analytical reagent were investigated in this work. Rapid, simple, and sensitive spectrophotometric methods for the determination of bromate in water samples, bread and flour additives were developed. The proposed methods are based on the oxidative coupling between phenoxazine and Cisapride in the presence of bromate to form a red colored product with λmax at 520 nm. Phenoxazine, Cisapride and their reaction products were found to be environmentally friendly under the optimum experimental conditions. The method obeys Beer's law in the concentration range 0.11-4.00 μg ml⁻¹, with a molar absorptivity of 1.41 × 10⁴ L mol⁻¹ cm⁻¹. All variables were optimized and the presented reaction sequences were applied to the analysis of bromate in water, bread and flour additive samples. The performance of these methods was evaluated in terms of Student's t-test and the variance ratio F-test to establish the significance of the proposed methods relative to the reference method. The combination of pharmaceutical drug reagents at low concentrations creates some unique green chemical analyses.
Biosphere dose conversion Factor Importance and Sensitivity Analysis
M. Wasiolek
2004-10-15
This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.
Sensitivity Analysis of OECD Benchmark Tests in BISON
Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard
2015-09-01
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
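The correlation-based portion of such a sampling study can be sketched in a few lines. The snippet below is a stand-in illustration, not BISON or Dakota: the three inputs and the "centerline temperature" surrogate are invented for demonstration.

```python
# Correlation-based sensitivity analysis on sampled input/output data.
# The inputs and the response surrogate here are hypothetical stand-ins.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_samples, n_inputs = 300, 3

# Dispersed inputs, e.g. power level, gap conductance, fuel conductivity
X = rng.uniform(0.9, 1.1, size=(n_samples, n_inputs))

# Stand-in "fuel centerline temperature" response with mild nonlinearity
y = 1000 * X[:, 0] * X[:, 1] ** 0.3 / X[:, 2] + rng.normal(0, 5, n_samples)

results = {}
for i in range(n_inputs):
    pearson = stats.pearsonr(X[:, i], y)[0]     # linear association
    spearman = stats.spearmanr(X[:, i], y)[0]   # monotonic (rank) association
    results[i] = (pearson, spearman)
    print(f"input {i}: Pearson={pearson:+.2f}, Spearman={spearman:+.2f}")
```

A large |Pearson| flags a strong near-linear effect, while a Spearman value well above the Pearson value hints at a monotonic but nonlinear influence.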
Sensitivity analysis of the GNSS derived Victoria plate motion
NASA Astrophysics Data System (ADS)
Apolinário, João; Fernandes, Rui; Bos, Machiel
2014-05-01
Fernandes et al. (2013) estimated the angular velocity of the Victoria tectonic block from geodetic data (GNSS-derived velocities) only. GNSS observations are sparse in this region and it is therefore of the utmost importance to use the available data (5 sites) in the most optimal way. Unfortunately, the existing time-series were/are affected by missing data and offsets. In addition, some time-series were close to the minimal threshold considered necessary to compute one reliable velocity solution: 2.5-3.0 years. In this research, we focus on the sensitivity of the derived angular velocity to changes in the data (a longer data-span for some stations) by extending the data-span used: Fernandes et al. (2013) used data until September 2011. We also investigate the effect of adding other stations to the solution, which is now possible since more stations became available in the region. In addition, we study whether the conventional power-law plus white noise model is indeed the best stochastic model. In this respect, we apply different noise models using HECTOR (Bos et al. (2013)), which can use different noise models and estimate offsets and seasonal signals simultaneously. The estimation of seasonal signals is also important, since the time-series are rather short or have large gaps at some stations, which implies that the seasonal signals can still affect the estimated trends, as shown by Blewitt and Lavallee (2002) and Bos et al. (2010). We also quantify the magnitude of such differences in the estimation of the secular velocity and their effect on the derived angular velocity. Concerning the offsets, we investigate how they, detected and undetected, can influence the estimated plate motion. The times of offsets have been determined by visual inspection of the time-series. The influence of undetected offsets has been assessed by adding small synthetic random walk signals that are too small to be detected visually but might have an effect on the
Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations
NASA Astrophysics Data System (ADS)
Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej
2010-06-01
Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris Design method was performed to identify the rheological parameters and model outputs that control the blood flow to a significant extent. The paper is part of work on the identification of parameters controlling the clotting process.
SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES
Flach, G.
2014-10-28
PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.
Path-sensitive analysis for reducing rollback overheads
O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong
2014-07-22
A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets, and stores them such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.
Sensitivity and uncertainty analysis of a polyurethane foam decomposition model
HOBBS,MICHAEL L.; ROBINSON,DAVID G.
2000-03-14
Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
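The analytical propagation step described above reduces to a first-order (delta-method) calculation: numerically differentiate the response with respect to each input and combine the derivatives with the input standard deviations. The burn-velocity model and all parameter values below are smooth, illustrative stand-ins (so the numerical-noise issues discussed above do not arise), not the finite element model.

```python
# First-order uncertainty propagation: sigma_y^2 = sum_i (dy/dx_i)^2 sigma_i^2
# The response model, nominal values, and sigmas are hypothetical stand-ins.
import math

def burn_velocity(params):
    # Toy stand-in response: grows with emissivity and flux, falls with density
    emissivity, flux, density = params
    return 0.8 * emissivity * flux / density

nominal = [0.9, 25.0, 35.0]   # hypothetical nominal parameter values
sigmas = [0.05, 1.0, 2.0]     # hypothetical parameter standard deviations

def central_diff(f, x, i, h=1e-6):
    # Central finite difference of f with respect to x[i]
    step = h * max(abs(x[i]), 1.0)
    xp, xm = list(x), list(x)
    xp[i] += step
    xm[i] -= step
    return (f(xp) - f(xm)) / (2 * step)

var = sum((central_diff(burn_velocity, nominal, i) * sigmas[i]) ** 2
          for i in range(3))
std = math.sqrt(var)
print(f"burn velocity std: {std:.4f}")
```

Ranking the individual terms (dy/dx_i · sigma_i)² identifies the primary effect variable, the role the foam emissivity plays in the study above.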
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters performed at the global scale are lacking. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for
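The one-at-a-time sensitivity index described above can be sketched with a deliberately simplified VWC model; the functional form and the reference values below are assumptions for illustration only, not the study's soil water balance.

```python
# One-at-a-time sensitivity index: S = (relative change of VWC) /
# (relative change of the input). Model and values are simplified stand-ins.

def vwc(et_daily, season_length, yield_t_ha):
    """Virtual water content ~ seasonal evapotranspiration / actual yield."""
    return et_daily * season_length / yield_t_ha

ref = dict(et_daily=4.5, season_length=120.0, yield_t_ha=3.0)
base = vwc(**ref)

sens = {}
for name, value in ref.items():
    perturbed = dict(ref, **{name: value * 1.01})   # +1% in one input
    sens[name] = ((vwc(**perturbed) - base) / base) / 0.01
    print(f"sensitivity to {name}: {sens[name]:+.2f}")
```

A positive index marks direct sensitivity (a longer growing period raises VWC) and a negative one inverse sensitivity (a higher yield lowers it), mirroring the sign convention used above.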
Analysis of Transition-Sensitized Turbulent Transport Equations
NASA Technical Reports Server (NTRS)
Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.
2005-01-01
The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.
Parametric sensitivity analysis for temperature control in outdoor photobioreactors.
Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando
2013-09-01
In this study, a critical analysis of the input parameters of a model describing the broth temperature in flat plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using the design of experiments approach, variations of selected parameters were introduced and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those from the reactor wall and the shading factor, both related to direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between plates. This study provides information to improve the design and to establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.
Kim, Min-Geun; Jang, Hong-Lae; Cho, Seonho
2013-05-01
An efficient adjoint design sensitivity analysis method is developed for reduced atomic systems. A reduced atomic system and the adjoint system are constructed in a locally confined region, utilizing the generalized Langevin equation (GLE) for periodic lattice structures. Due to the translational symmetry of lattice structures, the size of the time history kernel function that accounts for the boundary effects of the reduced atomic systems could be reduced to a single atom's degrees of freedom. For problems with highly nonlinear design variables, the finite difference method is impractical because of its inefficiency and inaccuracy. In contrast, the adjoint method is very efficient regardless of the number of design variables, since only one additional time integration is required for the adjoint GLE. Through numerical examples, the derived adjoint sensitivity turns out to be accurate and efficient in comparison with finite difference sensitivities.
A highly sensitive and multiplexed method for focused transcript analysis.
Kataja, Kari; Satokari, Reetta M; Arvas, Mikko; Takkinen, Kristiina; Söderlund, Hans
2006-10-01
We describe a novel, multiplexed method for focused transcript analysis of tens to hundreds of genes. In this method, TRAC (transcript analysis with aid of affinity capture), mRNA targets, a set of amplifiable detection probes of distinct sizes and a biotinylated oligo(dT) capture probe are hybridized in solution. The formed sandwich hybrids are collected on magnetic streptavidin-coated microparticles and washed. The hybridized probes are eluted, optionally amplified by PCR using a universal primer pair, and detected with laser-induced fluorescence and capillary electrophoresis. The probes were designed using a computer program developed for the purpose. The TRAC method was adapted to a 96-well format by utilizing an automated magnetic particle processor. Here we demonstrate the simultaneous analysis of 18 Saccharomyces cerevisiae transcripts from two experimental conditions and show a comparison with a qPCR system. The sensitivity of the method is significantly increased by PCR amplification of the hybridized and eluted probes. Our data demonstrate a bias-free use of at least 16 cycles of PCR amplification to increase the probe signal, allowing transcript analysis from 2.5 ng of total mRNA. The method is fast and simple and avoids cDNA conversion. These qualifications make it a potential new means for routine analysis and a complementary method to microarrays and high-density chips.
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors found by the tool.
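One such success-probability measure can be sketched as follows: after dispersing the inputs, estimate how the probability of meeting a requirement shifts between the lower and upper halves of each input. Everything below is a stand-in, not the CFT or the Orion models: the input names, the "miss metric", and the threshold are invented for illustration.

```python
# Success-probability sensitivity on Monte Carlo samples: compare
# P(success) conditioned on the low vs high half of each dispersed input.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
inputs = {
    "wind": rng.normal(0, 1, n),     # hypothetical dispersed variables
    "mass": rng.normal(0, 1, n),
    "thrust": rng.normal(0, 1, n),
}

# Stand-in performance metric dominated by wind; requirement: metric < 2
metric = 2.0 * inputs["wind"] + 0.3 * inputs["mass"] + rng.normal(0, 0.5, n)
success = metric < 2.0

gaps = {}
for name, x in inputs.items():
    p_low = success[x < np.median(x)].mean()
    p_high = success[x >= np.median(x)].mean()
    gaps[name] = p_low - p_high
    print(f"{name}: P(success|low)={p_low:.2f}, P(success|high)={p_high:.2f}")
```

A large gap flags a driving factor, while a near-zero gap (thrust here) suggests the variable barely affects requirement satisfaction.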
Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis
NASA Technical Reports Server (NTRS)
Burgreen, Gregory W.
1995-01-01
An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.
Parametric sensitivity analysis of avian pancreatic polypeptide (APP).
Zhang, H; Wong, C F; Thacher, T; Rabitz, H
1995-10-01
Computer simulations utilizing a classical force field have been widely used to study biomolecular properties. It is important to identify the key force field parameters or structural groups controlling the molecular properties. In the present paper the sensitivity analysis method is applied to study how various partial charges and solvation parameters affect the equilibrium structure and free energy of avian pancreatic polypeptide (APP). The general shape of APP is characterized by its three principal moments of inertia. A molecular dynamics simulation of APP was carried out with the OPLS/Amber force field and a continuum model of solvation energy. The analysis pinpoints the parameters which have the largest (or smallest) impact on the protein equilibrium structure (i.e., the moments of inertia) or free energy. A display of the protein with its atoms colored according to their sensitivities illustrates the patterns of the interactions responsible for the protein stability. The results suggest that the electrostatic interactions play a more dominant role in protein stability than the part of the solvation effect modeled by the atomic solvation parameters.
Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.
Kiparissides, A; Hatzimanikatis, V
2017-01-01
The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that leads to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowing which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling to provide a significance ranking of metabolites to guide experimental measurements.
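The constraint-based core that TMSA builds on reduces to a linear program: maximize an objective flux subject to steady-state mass balance S·v = 0 and flux bounds. The three-reaction toy network below is an assumption for illustration, far from a genome-scale model, and the thermodynamic constraints of TFA are omitted.

```python
# Minimal flux balance analysis (FBA) sketch on an invented toy network.
import numpy as np
from scipy.optimize import linprog

# Toy network: v1: -> A, v2: A -> B, v3: B ->
# Rows of S are metabolites (A, B); columns are reactions (v1, v2, v3).
S = np.array([[1, -1, 0],
              [0, 1, -1]])
bounds = [(0, 10), (0, 8), (0, None)]   # flux capacity constraints

# linprog minimizes, so negate the objective to maximize v3 (the "product")
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print("optimal fluxes:", res.x)
```

Steady state forces v1 = v2 = v3 here, so the v2 capacity of 8 caps the whole pathway; tightening or relaxing such bounds, as measured metabolite concentrations would via thermodynamic constraints, is what shrinks the solution space that TMSA uses to rank metabolites.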
Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)
Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.
2012-10-01
No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.
NASA Astrophysics Data System (ADS)
Chou, C. S.; Tsai, P. J.; Wu, P.; Shu, G. G.; Huang, Y. H.; Chen, Y. S.
2014-04-01
This study investigates the relationship between the performance of a dye-sensitized solar cell (DSSC) sensitized by a natural sensitizer of Taiwan Roselle anthocyanin (TRA) and the fabrication process conditions of the DSSC. A set of systematic experiments has been carried out at various soaking temperatures, soaking periods, sensitizer concentrations, pH values, and additions of single-walled carbon nanotube (SWCNT). An absorption peak (520 nm) is found for TRA, close to that of the N719 dye (518 nm). At a fixed concentration of TRA and a fixed soaking period, a lower pH of the extract or a lower soaking temperature is found to favor the formation of pigment cations, which leads to an enhanced power conversion efficiency (η) of the DSSC. For instance, by applying 17.53 mg/100 ml TRA at 30 °C for 10 h, as the pH of the extract decreases from 2.33 (the original pH of TRA) to 2.00, the η of the DSSC with a TiO2+SWCNT electrode increases to 0.67%, from 0.11% for a traditional DSSC with a TiO2 electrode. This performance improvement can be explained by the combined effect of the pH of the sensitizer and the addition of SWCNT, a first investigation of a DSSC using a natural sensitizer with SWCNT.
Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister
Wittman, Richard S.
2013-09-20
This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short and long term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long distance atmospheric dispersion model ldX derives. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high dimensional inputs; correlated inputs or inputs with complex structures; high dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since the observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peak time matching was elaborated in order to complement
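The Morris screening method named above computes "elementary effects" along random one-at-a-time trajectories and ranks inputs by the mean absolute effect (often written μ*). The test function below is an invented stand-in for the dispersion model, chosen so the expected ranking is obvious.

```python
# Morris screening: elementary effects along one-at-a-time trajectories.
# The model function is a hypothetical stand-in, not Polyphemus/Polair3D.
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Stand-in response: x0 strong linear, x1 nonlinear, x2 weak
    return 4 * x[0] + 2 * x[1] ** 2 + 0.1 * x[2]

k, r, delta = 3, 20, 0.25          # inputs, trajectories, step size
effects = [[] for _ in range(k)]

for _ in range(r):
    x = rng.uniform(0, 1 - delta, k)       # random start inside unit cube
    for i in rng.permutation(k):           # perturb one input at a time
        x_step = x.copy()
        x_step[i] += delta
        effects[i].append((model(x_step) - model(x)) / delta)
        x = x_step                         # continue the trajectory

mu_star = [float(np.mean(np.abs(e))) for e in effects]
print("mu* per input:", np.round(mu_star, 2))
```

The μ* ranking identifies weak inputs (x2 here) that, as in the study above, can be discarded from further analysis; the spread of the effects additionally flags nonlinearity or interactions (x1 here).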
GPU-based Integration with Application in Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar
2010-05-01
The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modelling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is the class of Monte Carlo based methods, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time-step, and the number of time-steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM; (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but also for parallel implementations. Good scrambling is
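The scrambled quasirandom integration described above can be sketched with SciPy's quasi-Monte Carlo module, which provides an Owen-scrambled Sobol generator. This is a CPU stand-in for the GPU-based generator, and the integrand is a toy function rather than the DEM pollution measure:

```python
import numpy as np
from scipy.stats import qmc

def qmc_integrate(f, dim, m=12, seed=42):
    """Estimate the integral of f over the unit hypercube [0,1]^dim
    using a scrambled (Owen-type) Sobol low-discrepancy sequence."""
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    pts = sampler.random_base2(m=m)      # 2**m quasirandom points
    return f(pts).mean()

# Toy integrand standing in for an aggregated pollution measure:
# the integral of x0*x1 over [0,1]^2 is exactly 0.25.
est = qmc_integrate(lambda x: x[:, 0] * x[:, 1], dim=2)
```

Scrambling randomizes the sequence (enabling error estimation from independent replicates and parallel streams) while preserving, and often improving, the quasi-Monte Carlo convergence rate.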
Jalal, Hawre; Goldhaber-Fiebert, Jeremy D.; Kuntz, Karen M.
2016-01-01
Decision makers often desire both guidance on the most cost-effective interventions given current knowledge and also the value of collecting additional information to improve the decisions made [i.e., from value of information (VOI) analysis]. Unfortunately, VOI analysis remains underutilized due to the conceptual, mathematical and computational challenges of implementing Bayesian decision theoretic approaches in models of sufficient complexity for real-world decision making. In this study, we propose a novel practical approach for conducting VOI analysis using a combination of probabilistic sensitivity analysis, linear regression metamodeling, and unit normal loss integral function – a parametric approach to VOI analysis. We adopt a linear approximation and leverage a fundamental assumption of VOI analysis which requires that all sources of prior uncertainties be accurately specified. We provide examples of the approach and show that the assumptions we make do not induce substantial bias but greatly reduce the computational time needed to perform VOI analysis. Our approach avoids the need to analytically solve or approximate joint Bayesian updating, requires only one set of probabilistic sensitivity analysis simulations, and can be applied in models with correlated input parameters. PMID:25840900
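The combination of probabilistic sensitivity analysis output with the unit normal loss integral can be sketched as follows. The net-benefit samples for two strategies are hypothetical, not the authors' model; the sketch compares a direct Monte Carlo EVPI with the parametric UNLI shortcut:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical PSA output: net-benefit samples for two strategies.
n = 20_000
nb = np.column_stack([
    rng.normal(100.0, 30.0, n),   # strategy A
    rng.normal(105.0, 30.0, n),   # strategy B
])

# Monte Carlo EVPI: expected value with perfect information minus the
# value of the best strategy under current information.
evpi_mc = nb.max(axis=1).mean() - nb.mean(axis=0).max()

# Parametric shortcut via the unit normal loss integral (UNLI): with the
# incremental net benefit D approximately normal with mean mu and
# standard deviation sigma, EVPI = sigma * UNLI(|mu|/sigma).
d = nb[:, 1] - nb[:, 0]
mu, sigma = d.mean(), d.std()
z = abs(mu) / sigma
evpi_unli = sigma * (norm.pdf(z) - z * norm.sf(z))
```

In a real application the linear regression metamodel supplies the mean and variance of the incremental net benefit (or of partial-information quantities) from a single PSA run, which is what makes the approach computationally cheap.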
Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations
Petzold, L; Cao, Y; Li, S; Serban, R
2005-08-09
Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
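A minimal sketch of the forward sensitivity method for ODEs (a special case of the DAE setting): the state equation is augmented with the sensitivity equation, obtained by differentiating the right-hand side. The decay model and parameter values below are assumptions for illustration, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: dy/dt = -p*y, y(0) = 1. The forward sensitivity s = dy/dp
# satisfies ds/dt = (df/dy)*s + df/dp = -p*s - y, s(0) = 0.
def augmented(t, z, p):
    y, s = z
    return [-p * y, -p * s - y]

p = 0.7
sol = solve_ivp(augmented, (0.0, 2.0), [1.0, 0.0], args=(p,),
                rtol=1e-8, atol=1e-10)
y_T, s_T = sol.y[:, -1]
# Analytic check: y(T) = exp(-p*T) and dy/dp = -T * exp(-p*T).
```

The forward method scales with the number of parameters (one sensitivity system per parameter); the adjoint method described in the paper is preferred when there are many parameters but few output functionals.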
Global sensitivity analysis of the radiative transfer model
NASA Astrophysics Data System (ADS)
Neelam, Maheshwari; Mohanty, Binayak P.
2015-04-01
With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on the decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The Radiative Transfer Model (RTM) behaves more non-linearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water-rich and energy-rich environments.
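Variance-based first-order and total-order indices of the kind used in such studies can be estimated with pick-freeze (Saltelli/Jansen) estimators. The sketch below uses the standard Ishigami benchmark function on [-pi, pi]^3 rather than the ZRT model, which is an assumption for illustration:

```python
import numpy as np

def sobol_indices(f, d, n=2**14, seed=1):
    """Estimate first-order (S1) and total-order (ST) Sobol indices with
    the Saltelli/Jansen pick-freeze estimators, sampling on [-pi, pi]^d."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, d))
    B = rng.uniform(-np.pi, np.pi, (n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S1, ST = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # swap only the i-th input
        fABi = f(ABi)
        S1[i] = np.mean(fB * (fABi - fA)) / var          # Saltelli 2010
        ST[i] = 0.5 * np.mean((fA - fABi) ** 2) / var    # Jansen 1999
    return S1, ST

# Ishigami function, a standard GSA benchmark (not from the paper).
def ishigami(x, a=7.0, b=0.1):
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

S1, ST = sobol_indices(ishigami, d=3)
# Analytic values: S1 ~ (0.314, 0.442, 0.0); 1 - sum(S1) is the
# interaction share, analogous to the 14% / 5% figures in the abstract.
```

The gap between ST and S1 for each input quantifies its interactions with the others, which is exactly the quantity the abstract reports as the average parameter interaction.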
Sensitivity analysis of channel-bend hydraulics influenced by vegetation
NASA Astrophysics Data System (ADS)
Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.
2015-12-01
Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.
Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M
2016-01-01
Dynamic global sensitivity analysis (GSA) was performed for three different dynamic bioreactor models of increasing complexity: a fermenter for bioethanol production; a bioreactor network in which two types of bioreactors were considered, aerobic for biomass production and anaerobic for bioethanol production; and a co-fermentation bioreactor. The aim was to identify the parameters that contribute most to uncertainty in model outputs. Sobol's method was used to calculate time profiles for the sensitivity indices. Numerical results have shown the time-variant influence of uncertain parameters on model variables, and the most influential model parameters have been determined. For the model of the bioethanol fermenter, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to the uncertainty of the model variables; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); whereas λ (glucose-to-total sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.
Space Shuttle Orbiter entry guidance and control system sensitivity analysis
NASA Technical Reports Server (NTRS)
Stone, H. W.; Powell, R. W.
1976-01-01
An approach has been developed to determine the guidance and control system sensitivity to off-nominal aerodynamics for the Space Shuttle Orbiter during entry. This approach, which uses a nonlinear six-degree-of-freedom interactive, digital simulation, has been applied to both the longitudinal and lateral-directional axes for a portion of the orbiter entry. Boundary values for each of the aerodynamic parameters have been identified, the key parameters have been determined, and system modifications that will increase system tolerance to off-nominal aerodynamics have been recommended. The simulations were judged by specified criteria and the performance was evaluated by use of key dependent variables. The analysis is now being expanded to include the latest shuttle guidance and control systems throughout the entry speed range.
Neutron activation analysis; A sensitive test for trace elements
Hossain, T.Z. (Ward Lab.)
1992-01-01
This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10^12 neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10^-8) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.
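The procedure described above can be turned into a back-of-the-envelope activation estimate using the textbook buildup equation A(t) = N·σ·φ·(1 − e^(−λt)). Only the flux matches the text; the sample size, cross-section and half-life below are assumed, cobalt-like illustrative values:

```python
import math

AVOGADRO = 6.022e23
mass_g = 0.010                   # 10 mg sample (within the stated range)
molar_mass = 59.0                # assumed cobalt-like target, g/mol
sigma_cm2 = 37e-24               # assumed capture cross-section, 37 barns
phi = 1e12                       # thermal flux, n/cm^2/s (as in the text)
half_life_s = 5.27 * 365.25 * 24 * 3600   # assumed product half-life
lam = math.log(2) / half_life_s           # decay constant, 1/s

N = AVOGADRO * mass_g / molar_mass        # target nuclei in the sample
t = 3600.0                                # one-hour irradiation

# Induced activity at end of irradiation, in becquerels.
activity_bq = N * sigma_cm2 * phi * (1.0 - math.exp(-lam * t))

# Fraction of nuclei transmuted ~ sigma*phi*t, consistent with the
# "about 10^-8" order of magnitude quoted in the abstract.
fraction_transmuted = sigma_cm2 * phi * t
```

With these assumed values the transmuted fraction comes out around 10^-7, in line with the tiny fractions the abstract describes, and the induced activity is tens of kilobecquerels, comfortably measurable by gamma spectrometry.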
Sensitivity and uncertainty analysis of the recharge boundary condition
NASA Astrophysics Data System (ADS)
Jyrkama, M. I.; Sykes, J. F.
2006-01-01
The reliability analysis method is integrated with MODFLOW to study the impact of recharge on the groundwater flow system at a study area in New Jersey. The performance function is formulated in terms of head or flow rate at a pumping well, while the recharge sensitivity vector is computed efficiently by implementing the adjoint method in MODFLOW. The developed methodology not only quantifies the reliability of head at the well in terms of uncertainties in the recharge boundary condition, but it also delineates areas of recharge that have the highest impact on the head and flow rate at the well. The results clearly identify the most important land use areas that should be protected in order to maintain the head and hence production at the pumping well. These areas extend far beyond the steady state well capture zone used for land use planning and management within traditional wellhead protection programs.
Sensitivity analysis for causal inference using inverse probability weighting.
Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C
2011-09-01
Evaluating the impact of potential uncontrolled confounding is an important component of causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score, which can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
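The propensity-score perturbation idea can be sketched as follows, on simulated data with a simple multiplicative error on the odds scale. This is a simplification: the paper's full parametrization also models the correlation of the errors with the potential outcomes:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated observational data (illustrative; not the paper's data sets).
n = 5000
x = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-x))       # true propensity score
a = rng.binomial(1, p_true)             # treatment indicator
y = 2.0 * a + x + rng.normal(size=n)    # outcome; true effect is 2

def ipw_effect(p):
    """Horvitz-Thompson IPW contrast, estimating E[Y(1)] - E[Y(0)]."""
    return np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))

# Sensitivity analysis: perturb the propensity by a multiplicative error
# on the odds scale and trace how the effect estimate moves.
effects = {}
for lam in (0.8, 1.0, 1.25):
    odds = lam * p_true / (1 - p_true)
    p_pert = np.clip(odds / (1 + odds), 0.01, 0.99)
    effects[lam] = ipw_effect(p_pert)
```

At lam = 1 the weights are correct and the estimate recovers the true effect; the spread of the estimates across the lam grid shows how robust the conclusion is to misspecified weights.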
Control sensitivity indices for stability analysis of HVdc systems
Nayak, O.B.; Gole, A.M.; Chapman, D.G.; Davies, J.B.
1995-10-01
This paper presents a new concept called the "Control Sensitivity Index" or CSI, for the stability analysis of HVdc converters connected to weak ac systems. The CSI for a particular control mode can be defined as the ratio of incremental changes in the two system variables that are most relevant to that control mode. The index provides valuable information on the stability of the system and, unlike other approaches, aids in the design of the controller. It also plays an important role in defining non-linear gains for the controller. This paper offers a generalized formulation of the CSI and demonstrates its application through an analysis of the CSI for three modes of HVdc control. The conclusions drawn from the analysis are confirmed by a detailed electromagnetic transients simulation of the ac/dc system. The paper concludes that the CSI can be used to improve the controller design and that, for an inverter in a weak ac system, the conventional voltage control mode is more stable than the conventional γ control mode.
de Jong, Simone; van Eijk, Kristel R; Zeegers, Dave W L H; Strengman, Eric; Janson, Esther; Veldink, Jan H; van den Berg, Leonard H; Cahn, Wiepke; Kahn, René S; Boks, Marco P M; Ophoff, Roel A
2012-09-01
There is genetic evidence that schizophrenia is a polygenic disorder with a large number of loci of small effect on disease susceptibility. Genome-wide association studies (GWASs) of schizophrenia have had limited success, with the best finding at the MHC locus at chromosome 6p. A recent effort of the Psychiatric GWAS consortium (PGC) yielded five novel loci for schizophrenia. In this study, we aim to highlight additional schizophrenia susceptibility loci from the PGC study by combining the top association findings from the discovery stage (9394 schizophrenia cases and 12 462 controls) with expression QTLs (eQTLs) and differential gene expression in whole blood of schizophrenia patients and controls. We examined the 6192 single-nucleotide polymorphisms (SNPs) with significance threshold at P<0.001. eQTLs were calculated for these SNPs in a sample of healthy controls (n=437). The transcripts significantly regulated by the top SNPs from the GWAS meta-analysis were subsequently tested for differential expression in an independent set of schizophrenia cases and controls (n=202). After correction for multiple testing, the eQTL analysis yielded 40 significant cis-acting effects of the SNPs. Seven of these transcripts show differential expression between cases and controls. Of these, the effect of three genes (RNF5, TRIM26 and HLA-DRB3) coincided with the direction expected from meta-analysis findings and were all located within the MHC region. Our results identify new genes of interest and highlight again the involvement of the MHC region in schizophrenia susceptibility.
NASA Astrophysics Data System (ADS)
Rieger, Vanessa S.; Dietmüller, Simone; Ponater, Michael
2016-12-01
Different strengths and types of radiative forcing cause variations in climate sensitivities and efficacies. To relate these changes to their physical origin, this study tests whether a feedback analysis is a suitable approach. To this end, we apply the partial radiative perturbation method. Combining the forward and backward calculations turns out to be indispensable to ensure the additivity of feedbacks and to yield a closed forcing-feedback balance at the top of the atmosphere. For a set of CO2-forced simulations, the climate sensitivity changes with increasing forcing. The albedo, cloud, and combined water vapour and lapse rate feedbacks are found to be responsible for the variations in the climate sensitivity. An O3-forced simulation (induced by enhanced NOx and CO surface emissions) causes a smaller efficacy than a CO2-forced simulation with a similar magnitude of forcing. We find that the Planck, albedo and, most likely, the cloud feedback are responsible for this effect. Reducing the radiative forcing impedes the statistical separability of feedbacks. We additionally discuss formal inconsistencies between the common ways of comparing climate sensitivities and feedbacks. Moreover, methodical recommendations for future work are given.
Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.
Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier
2012-12-01
The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
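The Morris One-At-a-Time screening used in such studies can be sketched with random trajectories in the unit hypercube: each trajectory perturbs one factor at a time, and the resulting elementary effects are summarized by mu* (overall influence) and sigma (nonlinearity/interactions). A toy response function stands in for the InVEST water-provisioning module:

```python
import numpy as np

def morris_screen(f, d, r=50, levels=4, seed=0):
    """Morris screening: r random OAT trajectories on a grid of the unit
    hypercube; returns (mu*, sigma) of the elementary effects per factor."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))            # standard step size
    ee = np.empty((r, d))
    for k in range(r):
        x = rng.integers(0, levels - 1, d) / (levels - 1)   # base point
        fx = f(x)
        for i in rng.permutation(d):                 # move one factor at a time
            step = delta if x[i] + delta <= 1.0 else -delta
            x_new = x.copy()
            x_new[i] += step
            f_new = f(x_new)
            ee[k, i] = (f_new - fx) / step           # elementary effect
            x, fx = x_new, f_new
    return np.abs(ee).mean(axis=0), ee.std(axis=0)   # mu*, sigma

# Toy stand-in for the water-yield response: strong linear effect of x0,
# moderate nonlinear effect of x1, negligible effect of x2.
g = lambda x: 5.0 * x[0] + 2.0 * x[1] ** 2 + 0.1 * x[2]
mu_star, sigma = morris_screen(g, d=3)
```

A purely linear factor yields identical elementary effects everywhere (sigma near zero), while a nonlinear or interacting factor spreads its effects out; that is why Morris results are usually read in the (mu*, sigma) plane.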
Uncertainty and sensitivity analysis for photovoltaic system modeling.
Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk
2013-12-01
We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models and obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array (POA) irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity
NASA Astrophysics Data System (ADS)
Tierney, G.; Posselt, D. J.; Booth, J. F.
2015-12-01
The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds) as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in control parameters space is performed via an ensemble of WRF runs coupled with
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; Turner, Adrian Keith; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
Moradi, Ali; Tootkaboni, Mazdak; Pennell, Kelly G.
2015-01-01
The Johnson and Ettinger (J&E) model is the most widely used vapor intrusion model in the United States. It is routinely used as part of hazardous waste site assessments to evaluate the potential for vapor intrusion exposure risks. This study incorporates mathematical approaches that allow sensitivity and uncertainty of the J&E model to be evaluated. In addition to performing Monte Carlo simulations to examine the uncertainty in the J&E model output, a powerful global sensitivity analysis technique based on Sobol indices is used to evaluate J&E model sensitivity to variations in the input parameters. The results suggest that the J&E model is most sensitive to the building air exchange rate, regardless of soil type and source depth. Building air exchange rate is not routinely measured during vapor intrusion investigations, but clearly improved estimates and/or measurements of the air exchange rate would lead to improved model predictions. It is also found that the J&E model is more sensitive to effective diffusivity, than effective permeability. Field measurements of effective diffusivity are not commonly collected during vapor intrusion investigations; however, consideration of this parameter warrants additional attention. Finally, the effects of input uncertainties on model predictions for different scenarios (e.g. sandy soil as compared to clayey soil, and “shallow” sources as compared to “deep” sources) are evaluated. Our results, not only identify the range of variability to be expected depending on the scenario at hand, but also mark the important cases where special care is needed when estimating the input parameters to which the J&E model is most sensitive. PMID:25947051
General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models
Miller, David A.W.
2012-01-01
Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
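For the simplest single-species occupancy chain, the equilibrium sensitivities underlying this kind of analysis can be checked both analytically and numerically. The colonization and extinction probabilities below are illustrative values, not estimates from the cited studies:

```python
# Basic patch-occupancy Markov chain with colonization probability gamma
# and extinction probability eps:
#   psi_{t+1} = psi_t * (1 - eps) + (1 - psi_t) * gamma
# so the equilibrium occupancy is psi* = gamma / (gamma + eps).
def equilibrium(gamma, eps, n_iter=2000):
    psi = 0.5
    for _ in range(n_iter):                 # iterate to the stationary state
        psi = psi * (1.0 - eps) + (1.0 - psi) * gamma
    return psi

gamma, eps = 0.3, 0.1                       # illustrative parameter values
psi_star = equilibrium(gamma, eps)          # analytic value: 0.3/0.4 = 0.75

# Sensitivity of the equilibrium to colonization:
# d psi*/d gamma = eps / (gamma + eps)^2, checked by central differences.
d_gamma_analytic = eps / (gamma + eps) ** 2
h = 1e-6
d_gamma_numeric = (equilibrium(gamma + h, eps)
                   - equilibrium(gamma - h, eps)) / (2 * h)
```

The multistate and environmentally varying cases in the paper generalize this idea: the equilibrium becomes the stationary vector of a larger transition matrix, and the sensitivities are derivatives of that vector with respect to lower-level parameters.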
Breathing dynamics based parameter sensitivity analysis of hetero-polymeric DNA
Talukder, Srijeeta; Sen, Shrabani; Chaudhury, Pinaki; Chakraborti, Prantik; Banik, Suman K.
2014-03-28
We study the parameter sensitivity of hetero-polymeric DNA within the purview of DNA breathing dynamics. The degree of correlation between the mean bubble size and the model parameters is estimated for this purpose for three different DNA sequences. The analysis leads us to a better understanding of the sequence dependent nature of the breathing dynamics of hetero-polymeric DNA. Out of the 14 model parameters for DNA stability in the statistical Poland-Scheraga approach, the hydrogen bond interaction ε_hb(AT) for an AT base pair and the ring factor ξ turn out to be the most sensitive parameters. In addition, the stacking interaction ε_st(TA-TA) for a TA-TA nearest-neighbor pair of base pairs is found to be the most sensitive one among all stacking interactions. Moreover, we also establish that it is the nature of the stacking interaction, not the number of times a particular stacking interaction appears in a sequence, that has a deciding effect on the DNA breathing dynamics. We show that the sensitivity analysis can be used as an effective measure to guide a stochastic optimization technique to find the kinetic rate constants related to the dynamics, as opposed to the conventional unbiased way of optimization.
Margin and sensitivity methods for security analysis of electric power systems
NASA Astrophysics Data System (ADS)
Greene, Scott L.
Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number
Ecological sensitivity analysis in Fengshun County based on GIS
NASA Astrophysics Data System (ADS)
Zhou, Xia; Zhang, Hong-ou
2008-10-01
Ecological sensitivity in Fengshun County was analyzed using GIS technology. Several factors were considered, including sensitivity to acid rain, soil erosion, flooding and geological disasters; nature reserves and economic indicators were also taken into account. After each single-factor sensitivity was assessed, the overall ecological sensitivity was computed with GIS software. The ecological sensitivity was divided into five levels, ranging from low to extreme: not sensitive, slightly sensitive, moderately sensitive, highly sensitive and extremely sensitive. The results showed high sensitivity in south-east Fengshun. Combining the sensitivity results with environmental characteristics, ecological function zones were also delineated: three major ecological function zones and ten sub-zones. The three major zones were a hill eco-environmental function zone, a platform and plain ecological construction zone, and an ecological restoration and control zone. Based on these results, environmental protection strategies were proposed for each zone, providing a basis for urban planning and environmental protection planning in Fengshun.
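The single-factor overlay described above is, in essence, a weighted raster combination followed by reclassification. A minimal numpy sketch; the weights, factor scores and class boundaries are invented placeholders, not the study's values:

```python
import numpy as np

# Hypothetical factor rasters scored 1-9 (acid rain, soil erosion, flood, geohazard).
rng = np.random.default_rng(0)
factors = rng.integers(1, 10, size=(4, 50, 50)).astype(float)

# Assumed equal weights; real studies derive these from expert judgement or AHP.
weights = np.array([0.25, 0.25, 0.25, 0.25])
composite = np.tensordot(weights, factors, axes=1)  # weighted sum, shape (50, 50)

# Classify into five levels: not / slightly / moderately / highly / extremely sensitive.
breaks = [3.0, 4.5, 6.0, 7.5]            # hypothetical class boundaries
levels = np.digitize(composite, breaks)  # integer classes 0..4
```

The resulting `levels` raster is the five-class sensitivity map that zoning decisions would be based on.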
Spatial risk assessment for critical network infrastructure using sensitivity analysis
NASA Astrophysics Data System (ADS)
Möderl, Michael; Rauch, Wolfgang
2011-12-01
The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate the performance decrease under the investigated threat scenarios; parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data of the same threat scenario, derived from structured interviews and cluster analysis of past events. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is likewise applicable to other critical network infrastructure. The aim of the approach is to help decision makers choose zones for preventive measures.
Global sensitivity analysis of analytical vibroacoustic transmission models
NASA Astrophysics Data System (ADS)
Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan
2016-04-01
Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media, which are not well known at early design stages. Decision-making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) to analyse the impact of mechanical parameters on features of interest. FAST is applied to several structural configurations to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate illustrates the use of this method within an optimisation process and for uncertainty quantification.
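FAST estimates first-order variance-based (Sobol-type) sensitivity indices from a single spectral sample. The quantity being estimated can be sketched with a plain pick-freeze Monte Carlo estimator instead of FAST's Fourier machinery; the Ishigami function below is a standard global-sensitivity benchmark with known indices (S1 ≈ 0.31, S2 ≈ 0.44, S3 = 0), not a vibroacoustic model:

```python
import numpy as np

def ishigami(x):
    """Standard global-sensitivity benchmark (a = 7, b = 0.1)."""
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 \
        + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
V = np.var(np.concatenate([fA, fB]))     # total output variance

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all inputs except the i-th
    # Jansen estimator of the first-order index S_i
    S1.append(1.0 - np.mean((fB - ishigami(ABi))**2) / (2.0 * V))
```

With FAST, the same indices come from the ratio of spectral power at each parameter's assigned frequency to the total power; the pick-freeze form above is only a reference estimator.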
Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio
2006-01-01
Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through an extensive literature review. The reliability of ceramic column grid array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50(deg)/75(deg)C, -55(deg)/100(deg)C, and -55(deg)/125(deg)C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite element method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.
Relative performance of academic departments using DEA with sensitivity analysis.
Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P
2009-05-01
The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, UK and Australia; to the best of our knowledge, however, this is its first application in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
Robust and sensitive video motion detection for sleep analysis.
Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard
2014-05-01
In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by a factor of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with a slight temporal misalignment of the starting time (<1 s) for one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.
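The frame-level metric used above, the Matthews correlation coefficient, is computed directly from confusion-matrix counts. The counts below are invented to illustrate a roughly twofold improvement; they are not taken from the paper:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical frame-level counts for a baseline and an improved detector.
baseline = mcc(tp=40, tn=900, fp=60, fn=60)
improved = mcc(tp=80, tn=940, fp=20, fn=20)
```

Unlike accuracy, MCC stays informative when the two classes (movement vs. no movement) are heavily imbalanced, which is the usual situation in overnight recordings.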
Sensitivity and uncertainty analysis of a regulatory risk model
Kumar, A.; Manocha, A.; Shenoy, T.
1999-07-01
Health risk assessments (HRAs) are increasingly being used in the environmental decision-making process, from problem identification through final clean-up activities. A key issue concerning the results of these risk assessments is the uncertainty associated with them, which in past studies has stemmed from highly conservative estimates of risk assessment parameters. The primary purpose of this study was to investigate error propagation through a risk model. A hypothetical glass plant situated in the state of California was studied. Air emissions from this plant were modeled using the ISCST2 model, and the risk was calculated using the ACE2588 model; building downwash was also considered in the concentration calculations. A sensitivity analysis of the risk computations identified five parameters--mixing depth for human consumption, deposition velocity, weathering constant, interception factor for vine crops, and average leaf-vegetable consumption--which had the greatest impact on the calculated risk. A Monte Carlo analysis using these five parameters resulted in an output distribution with a smaller percentage deviation than the percentage standard deviation of the input parameters.
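The Monte Carlo step can be sketched as straightforward sampling and propagation. The distributions and the multiplicative risk expression below are invented stand-ins for the ACE2588 pathway model, chosen only to show the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Hypothetical lognormal inputs, each with ~20% relative spread; the five
# names mirror the sensitive parameters identified above, the values are made up.
mixing_depth = rng.lognormal(np.log(2.0), 0.2, n)    # m
dep_velocity = rng.lognormal(np.log(0.01), 0.2, n)   # m/s
weathering_k = rng.lognormal(np.log(0.1), 0.2, n)    # 1/day
interception = rng.lognormal(np.log(0.3), 0.2, n)
veg_intake   = rng.lognormal(np.log(0.05), 0.2, n)   # kg/day

# Toy multiplicative risk expression (not the regulatory model's equations).
risk = dep_velocity * interception * veg_intake / (mixing_depth * weathering_k)

cv = lambda x: float(np.std(x) / np.mean(x))
output_cv = cv(risk)   # relative spread of the propagated risk
```

Summaries such as `np.percentile(risk, [5, 50, 95])` then characterize the propagated uncertainty; note that for a purely multiplicative model the output spread grows with the number of uncertain factors, so the narrowing reported in the study reflects the structure of the actual pathway model, not a general rule.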
2013-01-01
Background Stochastic modeling and simulation provide powerful predictive methods for the intrinsic understanding of fundamental mechanisms in complex biochemical networks. Typically, such mathematical models involve networks of coupled jump stochastic processes with a large number of parameters that need to be suitably calibrated against experimental data. In this direction, the parameter sensitivity analysis of reaction networks is an essential mathematical and computational tool, yielding information regarding the robustness and the identifiability of model parameters. However, existing sensitivity analysis approaches such as variants of the finite difference method can have an overwhelming computational cost in models with a high-dimensional parameter space. Results We develop a sensitivity analysis methodology suitable for complex stochastic reaction networks with a large number of parameters. The proposed approach is based on Information Theory methods and relies on the quantification of information loss due to parameter perturbations between time-series distributions. For this reason, we need to work on path-space, i.e., the set consisting of all stochastic trajectories, hence the proposed approach is referred to as “pathwise”. The pathwise sensitivity analysis method is realized by employing the rigorously-derived Relative Entropy Rate, which is directly computable from the propensity functions. A key aspect of the method is that an associated pathwise Fisher Information Matrix (FIM) is defined, which in turn constitutes a gradient-free approach to quantifying parameter sensitivities. The structure of the FIM turns out to be block-diagonal, revealing hidden parameter dependencies and sensitivities in reaction networks. Conclusions As a gradient-free method, the proposed sensitivity analysis provides a significant advantage when dealing with complex stochastic systems with a large number of parameters. In addition, the knowledge of the structure of the
Analysis methods for the determination of anthropogenic additions of P to agricultural soils
Technology Transfer Automated Retrieval System (TEKTRAN)
Phosphorus additions and measurement in soil is of concern on lands where biosolids have been applied. Colorimetric analysis for plant-available P may be inadequate for the accurate assessment of soil P. Phosphate additions in a regulatory environment need to be accurately assessed as the reported...
Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers
NASA Astrophysics Data System (ADS)
Martynov, Denis
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data were collected during six science runs. Instrument sensitivity improved from run to run due to the effort of the commissioning team, and initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis with the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to methods for bringing the interferometer into the linear regime where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up in both observatories to monitor the quality of the collected data in
On the sensitivity analysis of separated-loop MRS data
NASA Astrophysics Data System (ADS)
Behroozmand, A.; Auken, E.; Fiandaca, G.
2013-12-01
In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of separated-loop with conventional coincident-loop MRS data. MRS has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content. The method works on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop (typically 25 - 100 m in side length/diameter) deployed on the surface, which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the deployed loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record the surface NMR (SNMR) signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the sensitivity kernels of different separated-loop MRS soundings are studied and compared with
Da Costa, Caitlyn; Reynolds, James C; Whitmarsh, Samuel; Lynch, Tom; Creaser, Colin S
2013-01-01
RATIONALE Chemical additives are incorporated into commercial lubricant oils to modify the physical and chemical properties of the lubricant. The quantitative analysis of additives in oil-based lubricants deposited on a surface without extraction of the sample from the surface presents a challenge. The potential of desorption electrospray ionization mass spectrometry (DESI-MS) for the quantitative surface analysis of an oil additive in a complex oil lubricant matrix without sample extraction has been evaluated. METHODS The quantitative surface analysis of the antioxidant additive octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix was carried out by DESI-MS in the presence of 2-(pentyloxy)ethyl 3-(3,5-di-tert-butyl-4-hydroxyphenyl)propionate as an internal standard. A quadrupole/time-of-flight mass spectrometer fitted with an in-house modified ion source enabling non-proximal DESI-MS was used for the analyses. RESULTS An eight-point calibration curve ranging from 1 to 80 µg/spot of octyl (4-hydroxy-3,5-di-tert-butylphenyl)propionate in an oil lubricant matrix and in the presence of the internal standard was used to determine the quantitative response of the DESI-MS method. The sensitivity and repeatability of the technique were assessed by conducting replicate analyses at each concentration. The limit of detection was determined to be 11 ng/mm2 additive on spot with relative standard deviations in the range 3–14%. CONCLUSIONS The application of DESI-MS to the direct, quantitative surface analysis of a commercial lubricant additive in a native oil lubricant matrix is demonstrated. © 2013 The Authors. Rapid Communications in Mass Spectrometry published by John Wiley & Sons, Ltd. PMID:24097398
Time course analysis of baroreflex sensitivity during postural stress.
Westerhof, Berend E; Gisolf, Janneke; Karemaker, John M; Wesseling, Karel H; Secher, Niels H; van Lieshout, Johannes J
2006-12-01
Postural stress requires immediate autonomic nervous action to maintain blood pressure. We determined time-domain cardiac baroreflex sensitivity (BRS) and the time delay (tau) between systolic blood pressure and interbeat interval variations during stepwise changes in the angle of the vertical body axis (alpha). The assumption was that with increasing postural stress, BRS becomes attenuated, accompanied by a shift in tau toward higher values. In 10 healthy young volunteers, alpha included 20 degrees head-down tilt (-20 degrees), supine (0 degrees), 30 and 70 degrees head-up tilt (30 degrees, 70 degrees), and free standing (90 degrees). Noninvasive blood pressures were analyzed over 6-min periods before and after each change in alpha. The BRS was determined by frequency-domain analysis and with xBRS, a cross-correlation time-domain method. On average, between 28 (-20 degrees) and 45 (90 degrees) xBRS estimates per minute became available. Following a change in alpha, xBRS reached a different mean level within the first minute in 78% of cases and within 6 min in 93%. With increasing alpha, BRS decreased: BRS = -10.1·sin(alpha) + 18.7 (r(2) = 0.99), with tight correlation between xBRS and cross-spectral gain (r(2) approximately 0.97). Delay tau shifted toward higher values. In conclusion, in healthy subjects the sensitivity of the cardiac baroreflex obtained from the time domain decreases linearly with sin(alpha), and the start of baroreflex adaptation to a physiological perturbation like postural stress occurs rapidly. The decrease in BRS and the reduced occurrence of short tau may result from reduced vagal activity with increasing alpha.
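The cross-correlation time-domain idea behind xBRS can be sketched as: slide a window over the beat series, correlate systolic pressure with the interbeat intervals at several candidate delays, and keep the regression slope at the best-correlating delay. This is a simplified illustration on synthetic beats; the window length, delay set and the r > 0.7 acceptance threshold are assumptions, not the published xBRS settings:

```python
import numpy as np

def xbrs_like(sbp, ibi, win=10, delays=(0, 1, 2, 3, 4, 5)):
    """Simplified cross-correlation time-domain BRS: per sliding window,
    regress IBI on SBP at the delay with the highest correlation and
    return the mean accepted slope (ms/mmHg)."""
    slopes = []
    for start in range(0, len(sbp) - win - max(delays)):
        s = sbp[start:start + win]
        best_r, best_slope = 0.0, None
        for d in delays:
            seg = ibi[start + d:start + d + win]
            r = np.corrcoef(s, seg)[0, 1]
            if r > best_r:
                best_r = r
                best_slope = np.polyfit(s, seg, 1)[0]
        if best_r > 0.7 and best_slope is not None:  # crude significance proxy
            slopes.append(best_slope)
    return float(np.mean(slopes)) if slopes else float("nan")

# Synthetic beat series: IBI follows SBP one beat later with gain 15 ms/mmHg.
rng = np.random.default_rng(3)
sbp = 120 + 5 * np.sin(np.arange(300) * 0.3) + rng.normal(0, 0.5, 300)
ibi = 900 + 15 * (np.roll(sbp, 1) - 120) + rng.normal(0, 2.0, 300)
brs = xbrs_like(sbp, ibi)   # recovers a slope near the simulated 15 ms/mmHg
```

Because each window yields its own estimate, this style of analysis delivers the tens of BRS values per minute reported above, which is what makes the minute-by-minute time-course analysis possible.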
Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex
2016-04-08
We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to seven selected tunable model parameters: four involving emissions of anthropogenic and biogenic volatile organic compounds (VOCs), anthropogenic semi-volatile and intermediate-volatility organics (SIVOCs), and NOx; two involving dry deposition of SOA precursor gases; and one involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into two subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate- to high-NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
Mokhtari, Amirhossein; Frey, H Christopher
2005-12-01
This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
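The contrast between ANOVA-based and correlation-based rankings can be reproduced on a toy model with a threshold nonlinearity; everything below (the model, bin count, sample size) is illustrative rather than drawn from the MFSPR model:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(4)
n = 5000
x1, x2, x3 = rng.uniform(0, 1, (3, n))

# Toy risk model with a threshold nonlinearity: x1 matters strongly but
# non-monotonically, which depresses its correlation coefficient; x3 is inert.
y = 5.0 * (np.abs(x1 - 0.5) > 0.25) + 1.0 * x2 + rng.normal(0, 0.5, n)

def anova_f(x, y, bins=5):
    """One-way ANOVA F value of y across equal-count bins of input x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    groups = np.digitize(x, edges[1:-1])
    return f_oneway(*[y[groups == g] for g in range(bins)]).statistic

inputs = [("x1", x1), ("x2", x2), ("x3", x3)]
F = {name: anova_f(x, y) for name, x in inputs}                 # ANOVA ranking
r = {name: abs(np.corrcoef(x, y)[0, 1]) for name, x in inputs}  # correlation ranking
```

Here the correlation coefficient ranks x2 above x1 despite x1 driving most of the output variance, while the binned ANOVA F values rank x1 first, mirroring the article's finding that correlation-based methods can mislead under nonlinearity and thresholds.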
Ackerman, L K; Noonan, G O; Begley, T H
2009-12-01
The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for screening food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions, DART, and MS conditions (temperature, voltage and helium flow). Under optimal conditions, the protonated molecule ([M+H]+) was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation require further study and improvement.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
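LSENS couples a stiff implicit integrator with sensitivity computation. A minimal sketch of both ideas, using SciPy's BDF integrator on a toy two-step mechanism and a central finite difference for the sensitivity coefficient; LSENS itself integrates the sensitivity equations directly, and the mechanism and rate constants here are invented:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy stiff mechanism A -k1-> B -k2-> C with widely separated rate constants.
def rhs(t, y, k1, k2):
    a, b, c = y
    return [-k1 * a, k1 * a - k2 * b, k2 * b]

def b_end(k1, k2=1.0e4, t_end=0.5):
    """[B] at t_end for initial state [A, B, C] = [1, 0, 0], stiff BDF method."""
    sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], method="BDF",
                    args=(k1, k2), rtol=1e-8, atol=1e-12)
    return float(sol.y[1, -1])

# Sensitivity coefficient d[B]/dk1 by central finite difference.
k1, h = 1.0, 1e-4
dB_dk1 = (b_end(k1 + h) - b_end(k1 - h)) / (2.0 * h)
```

For this linear chain the analytic solution B(t) = k1 (e^(-k1 t) - e^(-k2 t)) / (k2 - k1) is available, so both the integrated concentration and the finite-difference sensitivity can be checked exactly; an explicit integrator would need step sizes of order 1/k2 here, which is why stiff methods are essential in kinetics codes.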
Advances in Sensitivity Analysis Capabilities with SCALE 6.0 and 6.1
Rearden, Bradley T; Petrie Jr, Lester M; Williams, Mark L
2010-01-01
The sensitivity and uncertainty analysis sequences of SCALE compute the sensitivity of k_eff to each constituent multigroup cross section using perturbation theory based on forward and adjoint transport computations with several available codes. Versions 6.0 and 6.1 of SCALE, released in 2009 and 2010, respectively, include important additions to the TSUNAMI-3D sequence, which computes forward and adjoint solutions in multigroup with the KENO Monte Carlo codes. Previously, sensitivity calculations were performed with the simple and efficient geometry capabilities of KENO V.a, but now calculations can also be performed with the generalized geometry code KENO-VI. TSUNAMI-3D requires spatial refinement of the angular flux moment solutions for the forward and adjoint calculations. These refinements are most efficiently achieved with the use of a mesh accumulator. For SCALE 6.0, a more flexible mesh accumulator capability has been added to the KENO codes, enabling varying granularity of the spatial refinement to optimize the calculation for different regions of the system model. The new mesh capabilities allow the efficient calculation of larger models than were previously possible. Additional improvements in the TSUNAMI calculations were realized in the computation of implicit effects of resonance self-shielding on the final sensitivity coefficients. Multigroup resonance self-shielded cross sections are accurately computed with SCALE's robust deterministic continuous-energy treatment for the resolved and thermal energy range and with Bondarenko shielding factors elsewhere, including the unresolved resonance range. However, the sensitivities of the self-shielded cross sections to the parameters input to the calculation are quantified using only full-range Bondarenko factors.
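TSUNAMI's sensitivity coefficients are relative: S = (sigma/k) dk/dsigma. For intuition, a one-group infinite-medium stand-in k_inf = nu*Sigma_f / Sigma_a (not SCALE's transport solution) recovers the analytic values +1 for the fission-production term and -1 for absorption; the function and cross-section numbers below are illustrative only:

```python
def k_inf(nu_sigma_f, sigma_a):
    """One-group infinite-medium multiplication factor (toy stand-in)."""
    return nu_sigma_f / sigma_a

def rel_sensitivity(f, p, which, h=1e-6):
    """Relative sensitivity (sigma/k) * dk/dsigma by central difference."""
    up, dn = list(p), list(p)
    up[which] *= 1.0 + h
    dn[which] *= 1.0 - h
    dk = (f(*up) - f(*dn)) / (2.0 * h * p[which])
    return p[which] / f(*p) * dk

params = (2.45 * 0.05, 0.10)   # (nu*Sigma_f, Sigma_a), made-up values in 1/cm
s_fission = rel_sensitivity(k_inf, params, which=0)     # -> ~ +1.0
s_absorption = rel_sensitivity(k_inf, params, which=1)  # -> ~ -1.0
```

Perturbation theory with forward and adjoint fluxes yields these same relative coefficients without rerunning the transport calculation once per cross section, which is what makes the TSUNAMI approach tractable for thousands of nuclide-reaction pairs.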
A comprehensive sensitivity analysis of central-loop MRS data
NASA Astrophysics Data System (ADS)
Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon
2014-05-01
In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of central-loop with conventional coincident-loop MRS data. MRS, also called surface NMR, has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content and, through empirical relations, is linked to hydraulic properties of the subsurface such as hydraulic conductivity. The method works on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop deployed on the surface, which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the deployed loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record the MRS signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the
Sensitivity analysis on parameters and processes affecting vapor intrusion risk.
Picone, Sara; Valstar, Johan; van Gaans, Pauline; Grotenhuis, Tim; Rijnaarts, Huub
2012-05-01
A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion.
Sensitivity analysis of near-infrared functional lymphatic imaging
Weiler, Michael; Kassis, Timothy
2012-01-01
Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach for quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor, glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time. PMID:22734775
Experimental sensitivity analysis of oxygen transfer in the capillary fringe.
Haberer, Christina M; Cirpka, Olaf A; Rolle, Massimo; Grathwohl, Peter
2014-01-01
Oxygen transfer in the capillary fringe (CF) is of primary importance for a wide variety of biogeochemical processes occurring in shallow groundwater systems. In the case of a fluctuating groundwater table, two distinct mechanisms of oxygen transfer within the capillary zone can be identified: vertical, predominantly diffusive mass flux of oxygen, and mass transfer between entrapped gas and groundwater. In this study, we perform a systematic experimental sensitivity analysis in order to assess the influence of different parameters on oxygen transfer from entrapped air within the CF to underlying anoxic groundwater. We carry out quasi two-dimensional flow-through experiments focusing on the transient phase following imbibition to investigate the influence of the horizontal flow velocity, the average grain diameter of the porous medium, and the magnitude and speed of the water table rise. We present a numerical flow and transport model that quantitatively represents the main mechanisms governing oxygen transfer. Assuming local equilibrium between the aqueous and the gaseous phase, the partitioning process from entrapped air can be satisfactorily simulated. The different experiments are monitored by measuring vertical oxygen concentration profiles at high spatial resolution with a noninvasive optode technique, as well as by determining oxygen fluxes at the outlet of the flow-through chamber. The results show that all parameters investigated have a significant effect on the amount of oxygen transferred to the oxygen-depleted groundwater. Particularly relevant are the magnitude of the water table rise and the grain size of the porous medium.
Plans for a sensitivity analysis of bridge-scour computations
Dunn, David D.; Smith, Peter N.
1993-01-01
Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.
Sensitivity analysis of near-infrared functional lymphatic imaging
NASA Astrophysics Data System (ADS)
Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon
2012-06-01
Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.
Sensitivity analysis and optimization of the nuclear fuel cycle
Passerini, S.; Kazimi, M. S.; Shwageraus, E.
2012-07-01
A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once-Through Cycle (OTC) is considered as the baseline case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide- and metal-fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: capacity factors of the fuel cycle facilities, spent fuel cooling time, thermal reprocessing introduction date, and in-core and out-of-core TRU inventory requirements for recycling technology. An optimization scheme for the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single- and multi-variable and single- and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)
Sensitivity analysis on an AC600 aluminum skin component
NASA Astrophysics Data System (ADS)
Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.
2016-08-01
New materials are being introduced on the car body in order to reduce weight and fulfil the international CO2 emission regulations. Among them, the application of aluminum alloys is increasing for skin panels. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication and elastic behavior. Numerous studies have been conducted in recent years on high-strength steel component stamping and on developing new anisotropic models for aluminum cup drawings. However, the impact of correct modelling of the latest aluminum alloys on the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.
Sensitivity analysis of a wide-field telescope
NASA Astrophysics Data System (ADS)
Lim, Juhee; Lee, Sangon; Moon, Il Kweon; Yang, Ho-Soon; Lee, Jong Ung; Choi, Young-Jun; Park, Jang-Hyun; Jin, Ho
2013-07-01
We are developing three ground-based wide-field telescopes. A wide-field Cassegrain telescope consists of two hyperbolic mirrors, aberration correctors and a field flattener for a 2-degree field of view. The diameters of the primary mirror and the secondary mirror are 500 mm and 200 mm, respectively. Corrective optics combined with four lenses, a filter and a window are also considered. For the imaging detection device, we use a charge coupled device (CCD) which has a 4096 × 4096 array with a 9-µm pixel size. One of the requirements is that the image motion limit of the opto-mechanical structure be less than 1 pixel size of the CCD on the image plane. To meet this requirement, we carried out an optical design evaluation and a misalignment analysis. Line-of-sight sensitivity equations are obtained from the rigid-body rotation in three directions and the rigid-body translation in three directions. These equations express the image motions at the image plane in terms of the independent motions of the optical components. We conducted a response simulation to evaluate the finite element method models under static load conditions, and the result is represented by the static response function. We show that the wide-field telescope system is stiff and stable enough to be supported and operated during its operating time.
Sensitivity analysis of an individual-based model for simulation of influenza epidemics.
Nsoesie, Elaine O; Beckman, Richard J; Marathe, Madhav V
2012-01-01
Individual-based epidemiology models are increasingly used in the study of influenza epidemics. Several studies on influenza dynamics and evaluation of intervention measures have used the same incubation and infectious period distribution parameters based on the natural history of influenza. A sensitivity analysis evaluating the influence of slight changes to these parameters (in addition to the transmissibility) would be useful for future studies and real-time modeling during an influenza pandemic.In this study, we examined individual and joint effects of parameters and ranked parameters based on their influence on the dynamics of simulated epidemics. We also compared the sensitivity of the model across synthetic social networks for Montgomery County in Virginia and New York City (and surrounding metropolitan regions) with demographic and rural-urban differences. In addition, we studied the effects of changing the mean infectious period on age-specific epidemics. The research was performed from a public health standpoint using three relevant measures: time to peak, peak infected proportion and total attack rate. We also used statistical methods in the design and analysis of the experiments. The results showed that: (i) minute changes in the transmissibility and mean infectious period significantly influenced the attack rate; (ii) the mean of the incubation period distribution appeared to be sufficient for determining its effects on the dynamics of epidemics; (iii) the infectious period distribution had the strongest influence on the structure of the epidemic curves; (iv) the sensitivity of the individual-based model was consistent across social networks investigated in this study and (v) age-specific epidemics were sensitive to changes in the mean infectious period irrespective of the susceptibility of the other age groups. These findings suggest that small changes in some of the disease model parameters can significantly influence the uncertainty observed in real
ERIC Educational Resources Information Center
Akturk, Ahmet Oguz
2015-01-01
Purpose: The purpose of this paper is to determine the cyberbullying sensitivity levels of high school students and their perceived social supports levels, and analyze the variables that predict cyberbullying sensitivity. In addition, whether cyberbullying sensitivity levels and social support levels differed according to gender was also…
Jani, Vivek; Ingulli, Elizabeth; Mekeel, Kristen; Morris, Gerald P
2017-02-01
Efficient allocation of deceased donor organs depends upon effective prediction of immunologic compatibility based on donor HLA genotype and recipient alloantibody profile, referred to as virtual crossmatching (VCXM). VCXM has demonstrated utility in predicting compatibility, though there is reduced efficacy for patients highly sensitized against allogeneic HLA antigens. The recently revised deceased donor kidney allocation system (KAS) has increased transplantation for this group, but with an increased burden for histocompatibility testing and organ sharing. Given the limitations of VCXM, we hypothesized that increased organ offers for highly-sensitized patients could result in a concomitant increase in offers rejected due to unexpectedly positive crossmatch. Review of 645 crossmatches performed for deceased donor kidney transplantation at our center did not reveal a significant increase in positive crossmatches following KAS implementation. Positive crossmatches not predicted by VCXM were concentrated among highly-sensitized patients. Root cause analysis of VCXM failures identified technical limitations of anti-HLA antibody testing as the most significant contributor to VCXM error. Contributions of technical limitations including additive/synergistic antibody effects, prozone phenomenon, and antigens not represented in standard testing panels, were evaluated by retrospective testing. These data provide insight into the limitations of VCXM, particularly those affecting allocation of kidneys to highly-sensitized patients.
Robustness and period sensitivity analysis of minimal models for biochemical oscillators
Caicedo-Casso, Angélica; Kang, Hye-Won; Lim, Sookkyung; Hong, Christian I.
2015-01-01
Biological systems exhibit numerous oscillatory behaviors from calcium oscillations to circadian rhythms that recur daily. These autonomous oscillators contain complex feedbacks with nonlinear dynamics that enable spontaneous oscillations. The detailed nonlinear dynamics of such systems remains largely unknown. In this paper, we investigate robustness and dynamical differences of five minimal systems that may underlie fundamental molecular processes in biological oscillatory systems. Bifurcation analyses of these five models demonstrate an increase of oscillatory domains with a positive feedback mechanism that incorporates a reversible reaction, and dramatic changes in dynamics with small modifications in the wiring. Furthermore, our parameter sensitivity analysis and stochastic simulations reveal different rankings of hierarchy of period robustness that are determined by the number of sensitive parameters or network topology. In addition, systems with autocatalytic positive feedback loop are shown to be more robust than those with positive feedback via inhibitory degradation regardless of noise type. We demonstrate that robustness has to be comprehensively assessed with both parameter sensitivity analysis and stochastic simulations. PMID:26267886
A sensitivity analysis of key natural factors in the modeled global acetone budget
NASA Astrophysics Data System (ADS)
Brewer, J. F.; Bishop, M.; Kelp, M.; Keller, C. A.; Ravishankara, A. R.; Fischer, E. V.
2017-02-01
Acetone is one of the most abundant carbonyl compounds in the atmosphere, and it serves as an important source of HOx (OH + HO2) radicals in the upper troposphere and a precursor for peroxyacetyl nitrate. We present a global sensitivity analysis targeted at several major natural source and sink terms in the global acetone budget to find the input factor or factors to which the simulated acetone mixing ratio was most sensitive. The ranges of the input factors were taken from the literature. We calculated the influence of these factors in terms of their elementary effects on model output. Of the six factors tested here, the four factors with the highest contribution to total global annual model sensitivity are direct emissions of acetone from the terrestrial biosphere, acetone loss to photolysis, the concentration of acetone in the ocean mixed layer, and the dry deposition of acetone to ice-free land. The direct emissions of acetone from the terrestrial biosphere are globally important in determining acetone mixing ratios, but their importance varies seasonally outside the tropics. Photolysis is most influential in the upper troposphere. Additionally, the influence of the oceanic mixed layer concentrations is relatively invariant between seasons, compared to the other factors tested. Monoterpene oxidation in the troposphere, despite the significant uncertainties in the acetone yield of this process, is responsible for only a small amount of model uncertainty in the budget analysis.
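A minimal sketch of the elementary-effects (Morris) screening mentioned above, using a hypothetical three-factor toy response in place of the actual chemistry model:

```python
import random

def model(x):
    # hypothetical response: strong quadratic factor, moderate and weak linear ones
    return 4.0 * x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2]

def elementary_effects(f, n_factors, n_base=200, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) of each factor on the unit cube."""
    rng = random.Random(seed)
    mu_star = [0.0] * n_factors
    for _ in range(n_base):
        base = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_factors)]
        f0 = f(base)
        for i in range(n_factors):
            pert = list(base)
            pert[i] += delta          # perturb one factor at a time
            mu_star[i] += abs((f(pert) - f0) / delta) / n_base
    return mu_star

mu = elementary_effects(model, n_factors=3)
ranking = sorted(range(3), key=lambda i: -mu[i])
print(mu, ranking)  # factor 0 screens as most influential
```

Factors are then ranked by the mean absolute elementary effect, the screening statistic used to compare the contribution of each input factor to total model sensitivity.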
Asaduzzaman, Abu Md; Schreckenbach, Georg
2010-11-21
One of the major and unique components of dye-sensitized solar cells (DSSC) is the iodide/triiodide redox couple. Periodic density-functional calculations have been carried out to study the interactions among three different components of the DSSC, i.e. the redox shuttle, the TiO(2) semiconductor surface, and nitrogen containing additives, with a focus on the implications for the performance of the DSSC. Iodide and bromide with alkali metal cations as counter ions are strongly adsorbed on the TiO(2) surface. Small additive molecules also strongly interact with TiO(2). Both interactions induce a negative shift of the Fermi energy of TiO(2). The negative shift of the Fermi energy is related to the performance of the cell by increasing the open voltage of the cell and retarding the injection dynamics (decreasing the short circuit current). Additive molecules, however, have relatively weaker interaction with iodide and triiodide.
Sensitivity analysis of water quality for Delhi stretch of the River Yamuna, India.
Parmar, D L; Keshari, Ashok K
2012-03-01
Simulation models are used to aid decision makers about water pollution control and management in river systems. However, uncertainty in model parameters affects the model predictions and hence the pollution control decisions. Therefore, it often is necessary to identify the model parameters that significantly affect the model output uncertainty prior to, or as a supplement to, model application to water pollution control and planning problems. In this study, sensitivity analysis, as a tool for uncertainty analysis, was carried out to assess the sensitivity of water quality to (a) model parameters and (b) pollution abatement measures such as wastewater treatment, waste discharge and flow augmentation from an upstream reservoir. In addition, sensitivity analysis for the "best practical solution" was carried out to help the decision makers in choosing an appropriate option. The Delhi stretch of the river Yamuna was considered as a case study. The QUAL2E model is used for water quality simulation. The results obtained indicate that the parameters K(1) (deoxygenation constant) and K(3) (settling oxygen demand), i.e. the rate of biochemical decomposition of organic matter and the rate of BOD removal by settling, respectively, are the most sensitive parameters for the considered river stretch. Different combinations of variations in K(1) and K(2) revealed similar results, giving a better understanding of the interdependence of K(1) and K(2). Also, among the pollution abatement methods, a change (perturbation) in the wastewater treatment level (primary, secondary, tertiary, or advanced) has the greatest effect on the uncertainty of the simulated dissolved oxygen and biochemical oxygen demand concentrations.
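As a toy illustration of how the sensitivity of dissolved oxygen to rate constants can be probed, the sketch below perturbs the classic Streeter-Phelps sag equation one parameter at a time; the parameter values are hypothetical and the equation is a simple stand-in for QUAL2E, not the study's calibrated setup.

```python
import math

def do_deficit(t, k1, k2, l0=20.0, d0=1.0):
    """Streeter-Phelps DO deficit (mg/L) at travel time t (days).
    k1: deoxygenation rate, k2: reaeration rate, l0: initial BOD, d0: initial deficit."""
    return (k1 * l0 / (k2 - k1)) * (math.exp(-k1 * t) - math.exp(-k2 * t)) \
        + d0 * math.exp(-k2 * t)

def relative_sensitivity(param, base=dict(k1=0.3, k2=0.6), t=2.0, dp=0.01):
    """(dD/D) / (dp/p): normalized one-at-a-time sensitivity of the deficit."""
    p = dict(base)
    d_ref = do_deficit(t, **p)
    p[param] *= 1.0 + dp           # perturb one rate constant by 1%
    return (do_deficit(t, **p) - d_ref) / d_ref / dp

s_k1 = relative_sensitivity("k1")
s_k2 = relative_sensitivity("k2")
print(s_k1, s_k2)  # faster deoxygenation raises the deficit, faster reaeration lowers it
```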
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
NASA Astrophysics Data System (ADS)
Rakovec, O.; Hill, M. C.; Clark, M. P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based "local" methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative "bucket-style" hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km²) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
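The core DELSA idea, local derivative-based indices evaluated at many points so that their distribution over the parameter space can be examined, can be sketched as follows. The two-parameter toy model and its ranges are illustrative assumptions, not the paper's hydrologic models.

```python
import random

def model(k, s):
    # toy outflow: linear in the rate k, nonlinear in the exponent s
    return k * (2.0 ** s)

def local_indices(k, s, h=1e-6):
    """Squared scaled derivatives, normalized to sum to one (a DELSA-style index)."""
    dk = (model(k + h, s) - model(k, s)) / h * k
    ds = (model(k, s + h) - model(k, s)) / h * s
    tot = dk ** 2 + ds ** 2
    return dk ** 2 / tot, ds ** 2 / tot

# evaluate the local index at many points spread over the parameter space
rng = random.Random(1)
samples = [local_indices(rng.uniform(0.1, 1.0), rng.uniform(0.1, 3.0))
           for _ in range(500)]
frac_k_dominant = sum(1 for ik, _ in samples if ik > 0.5) / len(samples)
print(frac_k_dominant)  # importance of k varies across the parameter space
```

Unlike a single global index, the distribution of these local indices shows in which regions of the parameter space each parameter dominates.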
Petzold, L.R.; Rosen, J.B.
1997-12-30
Differential-algebraic equations arise in a wide variety of engineering and scientific problems. Relatively little work has been done regarding sensitivity analysis and model reduction for this class of problems. Efficient methods for sensitivity analysis are required in model development and as an intermediate step in design optimization of engineering processes. Reduced order models are needed for modelling complex physical phenomena like turbulent reacting flows, where it is not feasible to use a fully-detailed model. The objective of this work has been to develop numerical methods and software for sensitivity analysis and model reduction of nonlinear differential-algebraic systems, including large-scale systems. In collaboration with Peter Brown and Alan Hindmarsh of LLNL, the authors developed an algorithm for finding consistent initial conditions for several widely occurring classes of differential-algebraic equations (DAEs). The new algorithm is much more robust than the previous algorithm. It is also very easy to use, having been designed to require almost no information about the differential equation, Jacobian matrix, etc. in addition to what is already needed to take the subsequent time steps. The new algorithm has been implemented in a version of the software for solution of large-scale DAEs, DASPK, which has been made available on the internet. The new methods and software have been used to solve a Tokamak edge plasma problem at LLNL which could not be solved with the previous methods and software because of difficulties in finding consistent initial conditions. The capability of finding consistent initial values is also needed for the sensitivity and optimization efforts described in this paper.
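The consistent-initialization problem described above can be illustrated on a semi-explicit index-1 DAE, x' = f(x, y), 0 = g(x, y): given x0, the algebraic variable y0 must satisfy g(x0, y0) = 0 before time stepping can begin. The scalar Newton iteration below is a sketch of the idea only, far simpler than the algorithm implemented in DASPK.

```python
def consistent_y0(g, dg_dy, x0, y_guess, tol=1e-12, max_iter=50):
    """Solve g(x0, y) = 0 for the algebraic variable y by Newton iteration."""
    y = y_guess
    for _ in range(max_iter):
        r = g(x0, y)
        if abs(r) < tol:
            return y
        y -= r / dg_dy(x0, y)   # Newton step on the algebraic constraint
    raise RuntimeError("no consistent initial condition found")

# toy constraint: 0 = y**3 + y - x (monotone in y, so the root is unique)
g = lambda x, y: y ** 3 + y - x
dg = lambda x, y: 3 * y ** 2 + 1
y0 = consistent_y0(g, dg, x0=2.0, y_guess=0.0)
print(y0, g(2.0, y0))  # residual is driven to ~0
```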
NASA Astrophysics Data System (ADS)
Liu, I.-Ping; Chen, Liang-Yih; Lee, Yuh-Lang
2016-09-01
Sodium acetate (NaAc) is utilized as an additive in the cationic precursors of the successive ionic layer adsorption and reaction (SILAR) process to fabricate CdS quantum-dot (QD)-sensitized photoelectrodes. The effects of the NaAc concentration on the deposition rate and distribution of QDs in mesoporous TiO2 films, as well as on the performance of CdS-sensitized solar cells, are studied. The experimental results show that the presence of NaAc can significantly accelerate the deposition of CdS, improve the QD distribution across photoelectrodes, and thereby increase the performance of solar cells. These results are mainly attributed to the pH-elevating effect of NaAc on the cationic precursors, which increases the electrostatic interaction between the TiO2 film and cadmium ions. The light-to-energy conversion efficiency of the CdS-sensitized solar cell increases with increasing NaAc concentration and approaches a maximum value (3.11%) at 0.05 M NaAc. Additionally, an ionic exchange is carried out on the photoelectrode to transform the deposited CdS into CdS1-xSex ternary QDs. The light-absorption range of the photoelectrode is extended and an exceptional power conversion efficiency of 4.51% is achieved due to this treatment.
Sorption of redox-sensitive elements: critical analysis
Strickert, R.G.
1980-12-01
The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc(IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.
Lumen, Annie; McNally, Kevin; George, Nysia; Fisher, Jeffrey W.; Loizou, George D.
2015-01-01
A deterministic biologically based dose-response model for the thyroidal system in a near-term pregnant woman and the fetus was recently developed to quantitatively evaluate thyroid hormone perturbations. The current work focuses on conducting a quantitative global sensitivity analysis on this complex model to identify and characterize the sources and contributions of uncertainties in the predicted model output. The workflow and methodologies suitable for computationally expensive models, such as the Morris screening method and Gaussian Emulation processes, were used for the implementation of the global sensitivity analysis. Sensitivity indices, such as main, total and interaction effects, were computed for a screened set of the total thyroidal system descriptive model input parameters. Furthermore, a narrower sub-set of the most influential parameters affecting the model output of maternal thyroid hormone levels was identified, in addition to the characterization of their overall and pair-wise parameter interaction quotients. The characteristic trends of influence in model output for each of these individual model input parameters over their plausible ranges were elucidated using Gaussian Emulation processes. Through global sensitivity analysis we have gained a better understanding of the model behavior and performance beyond the domains of observation by the simultaneous variation in model inputs over their range of plausible uncertainties. The sensitivity analysis helped identify parameters that determine the driving mechanisms of the maternal and fetal iodide kinetics, thyroid function and their interactions, and contributed to an improved understanding of the system modeled. We have thus demonstrated the use and application of global sensitivity analysis for a biologically based dose-response model for sensitive life-stages such as pregnancy, providing richer information on the model and the thyroidal system modeled compared to local sensitivity analysis.
Flora, Joseph R V; Hargis, Richard A; O'Dowd, William J; Pennline, Henry W; Vidic, Radisav D
2003-04-01
A two-stage mathematical model for Hg removal using powdered activated carbon injection upstream of a baghouse filter was developed, with the first stage accounting for removal in the ductwork and the second stage accounting for additional removal caused by the retention of carbon particles on the filter. The model shows that removal in the ductwork is minimal, and the additional carbon detention time from the entrapment of the carbon particles in the fabric filter enhances the Hg removal from the gas phase. A sensitivity analysis on the model shows that Hg removal is dependent on the isotherm parameters, the carbon pore radius and tortuosity, the C/Hg ratio, and the carbon particle radius.
Rehrl, Jakob; Gruber, Arlin; Khinast, Johannes G; Horn, Martin
2017-01-30
This paper presents a sensitivity analysis of a pharmaceutical direct compaction process. Sensitivity analysis is an important tool for gaining valuable process insights and designing a process control concept. Examining its results in a systematic manner makes it possible to assign actuating signals to controlled variables. This paper presents mathematical models for individual unit operations, on which the sensitivity analysis is based. Two sensitivity analysis methods are outlined: (i) based on the so-called Sobol indices and (ii) based on the steady-state gains and the frequency response of the proposed plant model.
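The steady-state-gain part of such an analysis can be sketched for a generic linear state-space model, where the DC gain matrix G = D - C A⁻¹ B indicates how strongly each actuating signal moves each controlled variable at steady state. The 2×2 system below is a hypothetical example, not the direct-compaction process model.

```python
def dc_gain(A, B, C, D):
    """Steady-state gain G = D - C * inv(A) * B for a 2-state, 2x2 system."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    CAinvB = matmul(matmul(C, Ainv), B)
    return [[D[i][j] - CAinvB[i][j] for j in range(2)] for i in range(2)]

# hypothetical stable plant: two states, two actuating signals, two outputs
A = [[-1.0, 0.0], [0.0, -2.0]]
B = [[1.0, 0.2], [0.0, 1.0]]
C = [[1.0, 0.0], [0.0, 1.0]]
D = [[0.0, 0.0], [0.0, 0.0]]
G = dc_gain(A, B, C, D)
print(G)  # [[1.0, 0.2], [0.0, 0.5]]
```

Inspecting the magnitudes of G row by row is one systematic way to pair actuating signals with controlled variables, as the abstract describes.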
Decoupled direct method for sensitivity analysis in combustion kinetics
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
1987-01-01
An efficient, decoupled direct method for calculating the first-order sensitivity coefficients of homogeneous, batch combustion kinetic rate equations is presented. In this method the ordinary differential equations for the sensitivity coefficients are solved separately from, but sequentially with, those describing the combustion chemistry. The ordinary differential equations for the thermochemical variables are solved using an efficient, implicit method (LSODE) that automatically selects the steplength and order for each solution step. The solution procedure for the sensitivity coefficients maintains accuracy and stability by using exactly the same steplengths and numerical approximations. The method computes sensitivity coefficients with respect to any combination of the initial values of the thermochemical variables and the three rate constant parameters for the chemical reactions. The method is illustrated by application to several simple problems and, where possible, comparisons are made with exact solutions and those obtained by other techniques.
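A minimal sketch of the direct-method idea, stepping the sensitivity equation ds/dt = (∂f/∂y)s + ∂f/∂k in lockstep with the state equation, using a single decay reaction (not a combustion mechanism) so the result can be checked against the analytic sensitivity dy/dk = -t·y0·exp(-kt). Forward Euler stands in for LSODE here purely for brevity.

```python
import math

k, y0, dt, t_end = 2.0, 1.0, 1e-4, 1.0

y, s, t = y0, 0.0, 0.0
while t < t_end - 1e-12:
    dy = -k * y          # state equation: dy/dt = f(y; k) = -k*y
    ds = -k * s - y      # sensitivity equation: ds/dt = (df/dy)*s + df/dk
    y += dt * dy         # advance the state first ...
    s += dt * ds         # ... then its sensitivity, with the same steplength
    t += dt

exact = -t_end * y0 * math.exp(-k * t_end)
print(s, exact)  # forward-Euler estimate vs. analytic dy/dk
```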
Sensitivity analysis of static resistance of slender beam under bending
NASA Astrophysics Data System (ADS)
Valeš, Jan
2016-06-01
The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the programme Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections were considered as random quantities, together with the geometrical characteristics of the cross section and the material characteristics of the steel. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of the resistance.
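Latin Hypercube Sampling, used above to generate the random input realizations, can be sketched in a few lines: each of n samples falls in a distinct stratum of every input's range, so the marginals are evenly covered. The sample counts and variables are generic, not the beam model's actual inputs.

```python
import random

def latin_hypercube(n_samples, n_vars, seed=42):
    """One point per stratum [i/n, (i+1)/n) in every dimension, strata shuffled."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        col = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(col)      # random pairing of strata across dimensions
        columns.append(col)
    return [list(point) for point in zip(*columns)]  # transpose to points

pts = latin_hypercube(10, 3)
for j in range(3):
    strata = sorted(int(p[j] * 10) for p in pts)
    print(strata)  # each dimension covers every stratum exactly once
```

The unit-cube samples would then be mapped through the marginal distributions of the actual random quantities (geometry, material properties, imperfection amplitude).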
Sensitivity analysis of the age-structured malaria transmission model
NASA Astrophysics Data System (ADS)
Addawe, Joel M.; Lope, Jose Ernie C.
2012-09-01
We propose an age-structured malaria transmission model and perform sensitivity analyses to determine the relative importance of model parameters to disease transmission. We subdivide the human population into two groups: pre-school humans (below 5 years) and the rest of the human population (above 5 years). We then consider two sets of baseline parameters, one for areas of high transmission and the other for areas of low transmission. We compute the sensitivity indices of the reproductive number and of the endemic equilibrium point with respect to the two sets of baseline parameters. Our simulations reveal that in areas of either high or low transmission, the reproductive number is most sensitive to the number of bites by a female mosquito on the rest of the human population. For areas of low transmission, we find that the equilibrium proportion of infectious pre-school humans is most sensitive to the number of bites by a female mosquito; for the rest of the human population it is most sensitive to the rate of acquiring temporary immunity. In areas of high transmission, the equilibrium proportions of infectious pre-school humans and of the rest of the human population are both most sensitive to the birth rate of humans. This suggests that strategies that target the mosquito biting rate on pre-school humans and those that shorten the time to acquire immunity can be successful in preventing the spread of malaria.
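Sensitivity indices of a reproductive number are commonly computed as normalized forward sensitivity indices, Y_p = (∂R0/∂p)(p/R0). The sketch below applies this to a generic Ross-Macdonald-style R0 with hypothetical parameter values, not the paper's age-structured model.

```python
def r0(a, b, c, m, r, mu):
    # a: mosquito biting rate, b and c: transmission probabilities,
    # m: mosquitoes per human, r: human recovery rate, mu: mosquito death rate
    return (a ** 2 * b * c * m) / (r * mu)

def sensitivity_index(param, base, h=1e-8):
    """Normalized forward sensitivity index Y_p = (dR0/dp) * (p / R0)."""
    p = dict(base)
    ref = r0(**p)
    p[param] += h
    return (r0(**p) - ref) / h * base[param] / ref

base = dict(a=0.3, b=0.5, c=0.5, m=10.0, r=0.05, mu=0.1)
indices = {name: round(sensitivity_index(name, base), 3) for name in base}
print(indices)  # a: 2.0; b, c, m: 1.0; r, mu: -1.0
```

The biting rate a carries an index of 2 because it enters R0 quadratically, which is why biting-rate interventions rank so highly in such analyses.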
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
Urrego-Blanco, Jorge Rolando; Urban, Nathan Mark; Hunke, Elizabeth Clare; ...
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. Lastly, it is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
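The variance-based indices described above can be estimated with a pick-freeze scheme. The sketch below uses the Saltelli (2010) first-order estimator on a toy two-parameter model, with plain Monte Carlo sampling standing in for the Sobol' sequences and CICE emulator of the study:

```python
# First-order Sobol' indices via the pick-freeze (Saltelli 2010) estimator.
# Toy model and random sampling are illustrative assumptions.
import numpy as np

def first_order_indices(model, d, n, rng):
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]          # A with column i taken from B
        S[i] = np.mean(fB * (model(AB) - fA)) / var
    return S

# Toy model with known answer: Var = (1 + 4)/12, so S = [0.2, 0.8].
model = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = first_order_indices(model, d=2, n=20000, rng=np.random.default_rng(1))
print(S.round(2))
```

For an additive model like this the first-order indices sum to one; interactions would show up as a shortfall, which is what the generalized-additive-model regression in the abstract is probing.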
Enhanced orbit determination filter sensitivity analysis: Error budget development
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Burkhart, P. D.
1994-01-01
An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy combining an iteratively cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectra of banned food additives, such as Sudan I dye and Rhodamine B in food, and malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction with principal component analysis (PCA) and with DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot predict the class assignments of unknown samples. The DPLS classification, however, can discriminate the class assignments of unknown banned additives using the differences in relative intensities. The results demonstrate that SERS spectroscopy combined with ICSF baseline correction and exploratory DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety.
7 CFR 91.38 - Additional fees for appeal of analysis.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Additional fees for appeal of analysis. 91.38 Section 91.38 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
7 CFR 91.38 - Additional fees for appeal of analysis.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Additional fees for appeal of analysis. 91.38 Section 91.38 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
An analytical approach to grid sensitivity analysis. [of NACA wing sections
NASA Technical Reports Server (NTRS)
Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.
1992-01-01
Sensitivity analysis in Computational Fluid Dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite-difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
An analytical approach to grid sensitivity analysis for NACA four-digit wing sections
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1992-01-01
Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
Stimulation of terrestrial ecosystem carbon storage by nitrogen addition: a meta-analysis
Yue, Kai; Peng, Yan; Peng, Changhui; Yang, Wanqin; Peng, Xin; Wu, Fuzhong
2016-01-01
Elevated nitrogen (N) deposition alters the terrestrial carbon (C) cycle, which is likely to feed back to further climate change. However, how the overall terrestrial ecosystem C pools and fluxes respond to N addition remains unclear. By synthesizing data from multiple terrestrial ecosystems, we quantified the response of C pools and fluxes to experimental N addition using a comprehensive meta-analysis method. Our results showed that N addition significantly stimulated soil total C storage by 5.82% ([2.47%, 9.27%], 95% CI, the same below) and increased the C contents of the above- and below-ground parts of plants by 25.65% [11.07%, 42.12%] and 15.93% [6.80%, 25.85%], respectively. Furthermore, N addition significantly increased aboveground net primary production by 52.38% [40.58%, 65.19%] and litterfall by 14.67% [9.24%, 20.38%] at a global scale. However, the C influx from the plant litter to the soil through litter decomposition and the efflux from the soil due to microbial respiration and soil respiration showed insignificant responses to N addition. Overall, our meta-analysis suggested that N addition will increase soil C storage and plant C in both above- and below-ground parts, indicating that terrestrial ecosystems might strengthen as a C sink under increasing N deposition. PMID:26813078
How to assess the Efficiency and "Uncertainty" of Global Sensitivity Analysis?
NASA Astrophysics Data System (ADS)
Haghnegahdar, Amin; Razavi, Saman
2016-04-01
Sensitivity analysis (SA) is an important paradigm for understanding model behavior, characterizing uncertainty, improving model calibration, etc. Conventional "global" SA (GSA) approaches are rooted in different philosophies, resulting in different and sometimes conflicting and/or counter-intuitive assessments of sensitivity. Moreover, most global sensitivity techniques demand high computational effort to generate robust and stable sensitivity metrics over the entire model response surface. Accordingly, a novel sensitivity analysis method called Variogram Analysis of Response Surfaces (VARS) is introduced to overcome the aforementioned issues. VARS uses the variogram concept to efficiently provide a comprehensive assessment of global sensitivity across a range of scales within the parameter space. Based on the VARS principles, in this study we present innovative ideas to assess (1) the efficiency of GSA algorithms and (2) the level of confidence we can assign to a sensitivity assessment. We use multiple hydrological models with different levels of complexity to explain the new ideas.
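The variogram idea underlying VARS can be shown in one dimension: the directional variogram gamma(h) of the model response along a parameter axis measures response variability at scale h. This is a heavily simplified sketch of the concept, not the full multi-dimensional VARS algorithm:

```python
# Directional variogram of a model response along one parameter axis:
# gamma(h) = 0.5 * E[(y(x + h) - y(x))^2], evaluated on a uniform grid.
import numpy as np

def variogram(y, x, h):
    step = int(round(h / (x[1] - x[0])))   # lag expressed in grid steps
    d = y[step:] - y[:-step]
    return 0.5 * np.mean(d * d)

x = np.linspace(0.0, 1.0, 1001)
y = 3.0 * x                    # linear response: gamma(h) = 0.5 * (3h)^2
for h in (0.1, 0.2):
    print(h, variogram(y, x, h))
```

For a linear response the variogram grows quadratically with the lag, so comparing gamma(h) across scales distinguishes smooth global trends from small-scale roughness, which is the multi-scale assessment VARS exploits.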
Depletion GPT-free sensitivity analysis for reactor eigenvalue problems
Kennedy, C.; Abdel-Khalik, H.
2013-07-01
This manuscript introduces a novel approach to solving depletion perturbation theory problems without the need to set up or solve the generalized perturbation theory (GPT) equations. The approach, hereinafter denoted generalized perturbation theory free (GPT-Free), constructs a reduced order model (ROM) using methods based on perturbation theory and computes response sensitivity profiles in a manner that is independent of the number or type of responses, allowing for an efficient computation of sensitivities when many responses are required. Moreover, the reduction error from using the ROM is quantified in the GPT-Free approach by means of a Wilks' order statistics error metric denoted the K-metric. Traditional GPT has been recognized as the most computationally efficient approach for performing sensitivity analyses of models with many input parameters, e.g. when forward sensitivity analyses are computationally intractable. However, most neutronics codes that can solve the fundamental (homogeneous) adjoint eigenvalue problem do not have GPT capabilities unless envisioned during code development. The GPT-Free approach addresses this limitation by requiring only the ability to compute the fundamental adjoint. This manuscript demonstrates the GPT-Free approach for depletion reactor calculations performed in SCALE6 using the 7x7 UAM assembly model. A ROM is developed for the assembly over a time horizon of 990 days. The approach both calculates the reduction error over the lifetime of the simulation using the K-metric and benchmarks the obtained sensitivities using sample calculations. (authors)
Local sensitivity analysis for inverse problems solved by singular value decomposition
Hill, M.C.; Nolan, B.T.
2010-01-01
regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and/or sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) interpretation with CSS/PCC can be more awkward, as sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
NASA Astrophysics Data System (ADS)
Pagliano, Enea; Meija, Juris
2016-04-01
The combination of isotope dilution and mass spectrometry has become a ubiquitous tool of chemical analysis. Often perceived as one of the most accurate methods of chemical analysis, it is not without shortcomings. Current isotope dilution equations are not capable of fully addressing one of the key problems encountered in chemical analysis: the possible effect of sample matrix on measured isotope ratios. The method of standard addition does compensate for the effect of sample matrix by making sure that all measured solutions have identical composition. While it is impossible to attain such a condition in traditional isotope dilution, we present equations which allow for matrix-matching between all measured solutions by fusion of isotope dilution and standard addition methods.
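For reference, the classical method of standard addition that the abstract builds on works by spiking the sample and extrapolating. This sketch shows the textbook version with synthetic numbers, not the fused isotope-dilution/standard-addition equations of the paper:

```python
# Classical standard-addition calculation: fit instrument signal vs. added
# analyte, then extrapolate; c_unknown = intercept / slope.
# The data points are synthetic (illustrative assumption).
import numpy as np

added = np.array([0.0, 1.0, 2.0, 3.0])    # added standard, e.g. mg/L
signal = np.array([2.0, 3.0, 4.0, 5.0])   # instrument response

slope, intercept = np.polyfit(added, signal, 1)
c_unknown = intercept / slope
print(c_unknown)                          # -> 2.0 for this synthetic data
```

Because every measured solution contains the same sample matrix, the matrix affects the slope and intercept equally and cancels in the ratio, which is the matrix-matching property the abstract seeks to carry over to isotope dilution.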
Dźwiarek, Marek; Latała, Agata
2016-01-01
This article presents an analysis of results of 1035 serious and 341 minor accidents recorded by Poland's National Labour Inspectorate (PIP) in 2005–2011, in view of their prevention by means of additional safety measures applied by machinery users. Since the analysis aimed at formulating principles for the application of technical safety measures, the analysed accidents should bear additional attributes: the type of machine operation, technical safety measures and the type of events causing injuries. The analysis proved that the executed tasks and injury-causing events were closely connected and there was a relation between casualty events and technical safety measures. In the case of tasks consisting of manual feeding and collecting materials, the injuries usually occur because of the rotating motion of tools or crushing due to a closing motion. Numerous accidents also happened in the course of supporting actions, like removing pollutants, correcting material position, cleaning, etc. PMID:26652689
Coffin, Allison B.; Mohr, Robert A.; Sisneros, Joseph A.
2012-01-01
The plainfin midshipman fish, Porichthys notatus, is a seasonal breeding teleost fish for which vocal-acoustic communication is essential for its reproductive success. Female midshipman use the saccule as the primary end organ for hearing to detect and locate “singing” males that produce multiharmonic advertisement calls during the summer breeding season. Previous work showed that female auditory sensitivity changes seasonally with reproductive state; summer reproductive females become better suited than winter nonreproductive females to detect and encode the dominant higher harmonic components in the male’s advertisement call, which are potentially critical for mate selection and localization. Here, we test the hypothesis that these seasonal changes in female auditory sensitivity are concurrent with seasonal increases in saccular hair cell receptors. We show that there is increased hair cell density in reproductive females and that this increase is not dependent on body size since similar changes in hair cell density were not found in the other inner ear end organs. We also observed an increase in the number of small, potentially immature saccular hair bundles in reproductive females. The seasonal increase in saccular hair cell density and smaller hair bundles in reproductive females was paralleled by a dramatic increase in the magnitude of the evoked saccular potentials and a corresponding decrease in the auditory thresholds recorded from the saccule. This demonstration of correlated seasonal plasticity of hair cell addition and auditory sensitivity may in part facilitate the adaptive auditory plasticity of this species to enhance mate detection and localization during breeding. PMID:22279221
Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis
NASA Technical Reports Server (NTRS)
Kallman, Tim
2006-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity on the various rates and cross sections may be non-linear and if so cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
Schijven, J F; Mülschlegel, J H C; Hassanizadeh, S M; Teunis, P F M; de Roda Husman, A M
2006-09-01
Protection zones of shallow unconfined aquifers in The Netherlands were calculated that allow protection against virus contamination to the level that the infection risk of 10^-4 per person per year is not exceeded with a 95% certainty. An uncertainty and a sensitivity analysis of the calculated protection zones were included. It was concluded that protection zones of 1 to 2 years travel time (206-418 m) are needed (6 to 12 times the currently applied travel time of 60 days). This will lead to enlargement of protection zones, encompassing 110 unconfined groundwater well systems that produce 3 x 10^8 m^3 per year of drinking water (38% of total Dutch production from groundwater). A smaller protection zone is possible if it can be shown that an aquifer has properties that lead to greater reduction of virus contamination, like more attachment. Deeper aquifers beneath aquitards of at least 2 years of vertical travel time are adequately protected because vertical flow in the aquitards is only 0.7 m per year. The most sensitive parameters are virus attachment and inactivation. The next most sensitive parameters are grain size of the sand, abstraction rate of groundwater, virus concentrations in raw sewage and consumption of unboiled drinking water. Research is recommended on additional protection by attachment and under unsaturated conditions.
Analysis of JPSS J1 VIIRS Polarization Sensitivity Using the NIST T-SIRCUS
NASA Technical Reports Server (NTRS)
McIntire, Jeffrey W.; Young, James B.; Moyer, David; Waluschka, Eugene; Oudrari, Hassan; Xiong, Xiaoxiong
2015-01-01
The polarization sensitivity of the Joint Polar Satellite System (JPSS) J1 Visible Infrared Imaging Radiometer Suite (VIIRS) measured pre-launch using a broadband source was observed to be larger than expected for many reflective bands. Ray trace modeling predicted that the observed polarization sensitivity was the result of larger diattenuation at the edges of the focal plane filter spectral bandpass. Additional ground measurements were performed using a monochromatic source (the NIST T-SIRCUS) to input linearly polarized light at a number of wavelengths across the bandpass of two VIIRS spectral bands and two scan angles. This work describes the data processing, analysis, and results derived from the T-SIRCUS measurements, comparing them with broadband measurements. Results have shown that the observed degree of linear polarization, when weighted by the sensor's spectral response function, is generally larger on the edges and smaller in the center of the spectral bandpass, as predicted. However, phase angle changes in the center of the bandpass differ between model and measurement. Integration of the monochromatic polarization sensitivity over wavelength produced results consistent with the broadband source measurements, for all cases considered.
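The band integration step described above, weighting the monochromatic polarization sensitivity by the sensor's spectral response function, can be sketched as follows. The response shape and polarization profile are synthetic stand-ins, not VIIRS data:

```python
# Band-averaging a wavelength-dependent degree of linear polarization (DoLP)
# with a spectral response function (RSF). All numbers are illustrative.
import numpy as np

wl = np.linspace(600.0, 680.0, 81)              # wavelength grid [nm]
rsf = np.exp(-0.5 * ((wl - 640.0) / 15.0) ** 2)  # assumed Gaussian response
dolp = 0.01 + 0.0005 * np.abs(wl - 640.0)        # larger at the band edges

# RSF-weighted mean on the uniform grid (rectangle rule).
band_avg = float(np.sum(dolp * rsf) / np.sum(rsf))
print(band_avg)
```

Because the RSF down-weights the band edges where the diattenuation is largest, the band average sits between the center and edge values, consistent with the monochromatic results integrating to the broadband measurement.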
Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo
2016-11-01
This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters including temperature, tension, pressure and velocity is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability range of each parameter are recognized. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
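The single-parameter sensitivity-curve idea can be illustrated in miniature: fit a polynomial response surface to (parameter, strength) data and differentiate it. The data here are a synthetic quadratic, not the winding experiments:

```python
# Fit a quadratic response surface and take its derivative as the
# single-parameter sensitivity curve. Synthetic data (illustrative only).
import numpy as np

temp = np.linspace(80.0, 160.0, 9)              # temperature [deg C]
strength = -0.01 * (temp - 130.0) ** 2 + 50.0   # synthetic quadratic response

coeffs = np.polyfit(temp, strength, 2)          # polynomial response surface
sens = np.polyder(np.poly1d(coeffs))            # d(strength)/d(temperature)

print(sens(100.0))   # positive: strength still rising at 100 deg C
print(sens(150.0))   # negative: past the optimum near 130 deg C
```

Where the sensitivity curve is near zero the response is flat, which is how a stable operating range like [100 °C, 150 °C] can be read off around the optimum.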
Large-scale transient sensitivity analysis of a radiation damaged bipolar junction transistor.
Hoekstra, Robert John; Gay, David M.; Bartlett, Roscoe Ainsworth; Phipps, Eric Todd
2007-11-01
Automatic differentiation (AD) is useful in transient sensitivity analysis of a computational simulation of a bipolar junction transistor subject to radiation damage. We used forward-mode AD, implemented in a new Trilinos package called Sacado, to compute analytic derivatives for implicit time integration and forward sensitivity analysis. Sacado addresses element-based simulation codes written in C++ and works well with forward sensitivity analysis as implemented in the Trilinos time-integration package Rythmos. The forward sensitivity calculation is significantly more efficient and robust than finite differencing.
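The forward-mode AD mechanism behind a tool like Sacado can be demonstrated with dual numbers that carry a value and a derivative through each operation. This is a from-scratch Python sketch of the idea, not the Sacado/C++ API:

```python
# Minimal forward-mode automatic differentiation with dual numbers.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot          # value and carried derivative

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def derivative(f, x):
    return f(Dual(x, 1.0)).dot                 # seed with dx/dx = 1

f = lambda x: x * x + 3 * x                    # f'(x) = 2x + 3
print(derivative(f, 2.0))                      # -> 7.0
```

Unlike finite differencing, the derivative is exact to machine precision and costs one extra pass through the computation, which is why the abstract reports it as more efficient and robust.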
NASA Astrophysics Data System (ADS)
Sandu, Adrian; Daescu, Dacian N.; Carmichael, Gregory R.
The analysis of comprehensive chemical reaction mechanisms, parameter estimation techniques, and variational chemical data assimilation applications requires the development of efficient sensitivity methods for chemical kinetics systems. The new release (KPP-1.2) of the kinetic preprocessor (KPP) contains software tools that facilitate direct and adjoint sensitivity analysis. The direct-decoupled method, built using BDF formulas, has been the method of choice for direct sensitivity studies. In this work, we extend the direct-decoupled approach to Rosenbrock stiff integration methods. The need for Jacobian derivatives prevented Rosenbrock methods from being used extensively in direct sensitivity calculations; however, the new automatic and symbolic differentiation technologies make the computation of these derivatives feasible. The direct-decoupled method is known to be efficient for computing the sensitivities of a large number of output parameters with respect to a small number of input parameters. The adjoint modeling is presented as an efficient tool to evaluate the sensitivity of a scalar response function with respect to the initial conditions and model parameters. In addition, sensitivity with respect to time-dependent model parameters may be obtained through a single backward integration of the adjoint model. KPP software may be used to completely generate the continuous and discrete adjoint models taking full advantage of the sparsity of the chemical mechanism. Flexible direct-decoupled and adjoint sensitivity code implementations are achieved with minimal user intervention. In a companion paper, we present an extensive set of numerical experiments that validate the KPP software tools for several direct/adjoint sensitivity applications, and demonstrate the efficiency of KPP-generated sensitivity code implementations.
Survey of sampling-based methods for uncertainty and sensitivity analysis.
Johnson, Jay Dean; Helton, Jon Craig; Sallaberry, Cedric J. PhD.; Storlie, Curt B. (Colorado State University, Fort Collins, CO)
2006-06-01
Sampling-based methods for uncertainty and sensitivity analysis are reviewed. The following topics are considered: (1) Definition of probability distributions to characterize epistemic uncertainty in analysis inputs, (2) Generation of samples from uncertain analysis inputs, (3) Propagation of sampled inputs through an analysis, (4) Presentation of uncertainty analysis results, and (5) Determination of sensitivity analysis results. Special attention is given to the determination of sensitivity analysis results, with brief descriptions and illustrations given for the following procedures/techniques: examination of scatterplots, correlation analysis, regression analysis, partial correlation analysis, rank transformations, statistical tests for patterns based on gridding, entropy tests for patterns based on gridding, nonparametric regression analysis, squared rank differences/rank correlation coefficient test, two dimensional Kolmogorov-Smirnov test, tests for patterns based on distance measures, top down coefficient of concordance, and variance decomposition.
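Among the techniques listed above, the rank transformation is easy to demonstrate: for a monotone but strongly nonlinear input-output relation, rank (Spearman) correlation recovers the association that raw (Pearson) correlation understates. A small sketch with synthetic data:

```python
# Rank transformation in sampling-based sensitivity analysis: compare raw
# and rank correlation for a monotone, nonlinear model (synthetic data).
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
x = rng.random(500)            # sampled input
y = np.exp(6.0 * x)            # strongly nonlinear, monotone response

r_pearson = pearsonr(x, y)[0]
r_spearman = spearmanr(x, y)[0]
print(round(r_pearson, 3), round(r_spearman, 3))
```

The Spearman coefficient is essentially 1 because the relation is perfectly monotone, while the Pearson coefficient is noticeably smaller; the gap between the two is itself a diagnostic for nonlinearity.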
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
NASA Technical Reports Server (NTRS)
Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.
2004-01-01
A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
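The MPP search described in both abstracts above is a constrained optimization in standard normal space: minimize the distance to the origin subject to lying on the limit-state surface, with the reliability index beta equal to that minimum distance. A sketch with a linear toy limit state (not the RIA/PMA formulations of the paper):

```python
# First-order reliability method (FORM) MPP search: minimize ||u|| subject to
# g(u) = 0 in standard normal space. The limit state here is an assumed toy.
import numpy as np
from scipy.optimize import minimize

def g(u):                        # failure when g(u) <= 0
    return 3.0 - u[0] - u[1]     # linear: analytic beta = 3 / sqrt(2)

res = minimize(lambda u: np.linalg.norm(u), x0=[1.0, 1.0],
               constraints={"type": "eq", "fun": g}, method="SLSQP")
beta = np.linalg.norm(res.x)     # distance from origin to the MPP
print(beta)                      # close to 3/sqrt(2) ~ 2.121
```

The derivatives of this optimal solution with respect to distribution parameters are exactly the reliability sensitivities the paper derives for RBDO.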
Diagnosis of Middle Atmosphere Climate Sensitivity by the Climate Feedback Response Analysis Method
NASA Technical Reports Server (NTRS)
Zhu, Xun; Yee, Jeng-Hwa; Cai, Ming; Swartz, William H.; Coy, Lawrence; Aquila, Valentina; Talaat, Elsayed R.
2014-01-01
We present a new method to diagnose the middle atmosphere climate sensitivity by extending the Climate Feedback-Response Analysis Method (CFRAM) for the coupled atmosphere-surface system to the middle atmosphere. The Middle atmosphere CFRAM (MCFRAM) is built on the atmospheric energy equation per unit mass with radiative heating and cooling rates as its major thermal energy sources. MCFRAM preserves CFRAM's unique additive property, by which the sum of all partial temperature changes due to variations in external forcing and feedback processes equals the observed temperature change. In addition, MCFRAM establishes a physical relationship of radiative damping between the energy perturbations associated with various feedback processes and temperature perturbations associated with thermal responses. MCFRAM is applied to both measurements and model output fields to diagnose the middle atmosphere climate sensitivity. It is found that the largest component of the middle atmosphere temperature response to the 11-year solar cycle (solar maximum vs. solar minimum) comes directly from the partial temperature change due to the variation of the input solar flux. Increasing CO2 always cools the middle atmosphere with time, whereas the partial temperature change due to O3 variation can be either positive or negative. The partial temperature changes due to different feedbacks show distinctly different spatial patterns. The thermally driven, globally averaged partial temperature change due to all radiative processes is approximately equal to the observed temperature change, about 0.5 K near 70 km between solar maximum and solar minimum.
Continuous adjoint sensitivity analysis for aerodynamic and acoustic optimization
NASA Astrophysics Data System (ADS)
Ghayour, Kaveh
1999-11-01
A gradient-based shape optimization methodology based on continuous adjoint sensitivities has been developed for two-dimensional steady Euler equations on unstructured meshes and the unsteady transonic small disturbance equation. The continuous adjoint sensitivities of the Helmholtz equation for acoustic applications have also been derived and discussed. The highlights of the developments for the steady two-dimensional Euler equations are the generalization of the airfoil surface boundary condition of the adjoint system to allow a proper closure of the Lagrangian functional associated with a general cost functional and the results for an inverse problem with density as the prescribed target. Furthermore, it has been demonstrated that a transformation to the natural coordinate system, in conjunction with the reduction of the governing state equations to the control surface, results in sensitivity integrals that are only a function of the tangential derivatives of the state variables. This approach alleviates the need for directional derivative computations with components along the normal to the control surface, which can render erroneous results. With regard to the unsteady transonic small disturbance equation (UTSD), the continuous adjoint methodology has been successfully extended to unsteady flows. It has been demonstrated that for periodic airfoil oscillations leading to limit-cycle behavior, the Lagrangian functional can only be closed if the time interval of interest spans one or more periods of the flow oscillations after the limit cycle has been attained. The steady-state and limit-cycle sensitivities are then validated by comparison with the brute-force derivatives. The importance of accounting for the flow circulation sensitivity, appearing in the form of a Dirac delta in the wall boundary condition at the trailing edge, has been stressed and demonstrated. Remarkably, the cost of an unsteady adjoint solution is about 0.2 times that of a UTSD solution.
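The economy of the adjoint approach noted above (one extra adjoint solve yields the full gradient, regardless of the number of design parameters) can be sketched on a toy discrete problem. The 2x2 system, cost functional, and parameter below are invented for illustration and are not from the thesis.

```python
def solve2(A, b):
    # Direct solution of a 2x2 linear system via Cramer's rule, to keep
    # the sketch dependency-free.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def A_of(p):
    # State operator depending on a single design parameter p.
    return [[2.0 + p, 1.0], [1.0, 3.0]]

dA_dp = [[1.0, 0.0], [0.0, 0.0]]  # exact derivative of A with respect to p
b = [1.0, 2.0]                    # right-hand side of the state equation
c = [1.0, 1.0]                    # cost functional J = c . x

p = 0.5
x = solve2(A_of(p), b)            # one state solve: A x = b
# One adjoint solve A^T lam = c, then dJ/dp = -lam . (dA/dp x).
AT = [[A_of(p)[j][i] for j in range(2)] for i in range(2)]
lam = solve2(AT, c)
dAx = [sum(dA_dp[i][j] * x[j] for j in range(2)) for i in range(2)]
dJ_dp_adjoint = -sum(lam[i] * dAx[i] for i in range(2))

# Brute-force check with central differences (two extra state solves).
h = 1e-6
Jp = sum(ci * xi for ci, xi in zip(c, solve2(A_of(p + h), b)))
Jm = sum(ci * xi for ci, xi in zip(c, solve2(A_of(p - h), b)))
dJ_dp_fd = (Jp - Jm) / (2 * h)
print(dJ_dp_adjoint, dJ_dp_fd)  # the two estimates agree
```

With many parameters the finite-difference route needs two state solves per parameter, while the adjoint route still needs only one adjoint solve, which is the cost advantage the thesis quantifies for the UTSD case.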
Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport
NASA Technical Reports Server (NTRS)
Mason, B. H.; Walsh, J. L.
2001-01-01
An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.
Rea, Jennifer C; Freistadt, Benny S; McDonald, Daniel; Farnan, Dell; Wang, Yajun Jennifer
2015-12-11
Ion-exchange chromatography (IEC) is widely used for profiling the charge heterogeneity of proteins, including monoclonal antibodies (mAbs). Despite good resolving power and robustness, ionic strength-based ion-exchange separations are generally product specific and can be time consuming to develop. In addition, conventional analytical scale ion-exchange separations require tens of micrograms of mAbs for each injection, amounts that are often unavailable in sample-limited applications. We report the development of a capillary IEC (c-IEC) methodology for the analysis of nanogram amounts of mAb charge variants. Several key modifications were made to a commercially available liquid chromatography system to perform c-IEC for charge variant analysis of mAbs with nanogram sensitivity. We demonstrate the method for multiple monoclonal antibodies, including antibody fragments, on different columns from different manufacturers. Relative standard deviations of <10% were achieved for relative peak areas of main peak, acidic and basic regions, which are common regions of interest for quantifying monoclonal antibody charge variants using IEC. The results herein demonstrate the excellent sensitivity of this c-IEC characterization method, which can be used for analyzing charge variants in sample-limited applications, such as early-stage candidate screening and in vivo studies.
Sensitivity analysis of Repast computational ecology models with R/Repast.
Prestes García, Antonio; Rodríguez-Patón, Alfonso
2016-12-01
Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating observed global dynamics. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution or are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on simulation output, and it should be incorporated into every work based on an in-silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples showing how to perform global sensitivity analysis and how to interpret the results.
McNamara, C; Mehegan, J; O'Mahony, C; Safford, B; Smith, B; Tennant, D; Buck, N; Ehrlich, V; Sardi, M; Haldemann, Y; Nordmann, H; Jasti, P R
2011-12-01
The feasibility of using a retailer fidelity card scheme to estimate food additive intake was investigated in an earlier study. Fidelity card survey information was combined with information provided by the retailer on levels of the food colour Sunset Yellow (E110) in the foods to estimate a daily exposure to the additive in the Swiss population. As with any dietary exposure method, the fidelity card scheme is subject to uncertainties. In this paper, the impact of uncertainties associated with the input variables (the amounts of food purchased, the levels of E110 in food, the proportion of food purchased at the retailer, the rate of fidelity card usage, the proportion of foods consumed outside the home, and bodyweights), as well as systematic uncertainties, was assessed using qualitative, deterministic and probabilistic approaches. An analysis of the sensitivity of the results to each of the probabilistic inputs was also undertaken. The analysis identified the key factors responsible for uncertainty within the model and demonstrated how some simple probabilistic approaches can be applied to assess uncertainty quantitatively.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.
1999-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid CFD in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
NASA Astrophysics Data System (ADS)
Guse, Björn; Pfannerstill, Matthias; Gafurov, Abror; Fohrer, Nicola; Gupta, Hoshin
2016-04-01
The hydrologic response variable most often used in sensitivity analysis is discharge, which provides an integrated value of all catchment processes. The typical sensitivity analysis evaluates how changes in the model parameters affect the model output. However, because discharge is the aggregated effect of all hydrological processes, the sensitivity signal of a certain model parameter can be strongly masked. A more advanced form of sensitivity analysis would be achieved if we could investigate how the sensitivity of a certain modelled process variable relates to the changes in a parameter. Based on this, the controlling parameters for different hydrological components could be detected. Towards this end, we apply the approach of temporal dynamics of parameter sensitivity (TEDPAS) to calculate the daily sensitivities for different model outputs with the FAST method. The temporal variations in parameter dominance are then analysed both for the modelled hydrological components themselves and for the rates of change (derivatives) in the modelled hydrological components. The daily parameter sensitivities are then compared with the modelled hydrological components using regime curves. Application of this approach shows that when the corresponding modelled process is investigated instead of discharge, we obtain both an increased indication of parameter sensitivity and a clear pattern showing how the seasonal patterns of parameter dominance change over time for each hydrological process. By relating these results to the model structure, we can see that the sensitivity of model parameters is influenced by the function of the parameter. While capacity parameters show more sensitivity to the modelled hydrological component, flux parameters tend to have a higher sensitivity to rates of change in the modelled hydrological component. By better disentangling the information hidden in the discharge values, we can use sensitivity analyses to obtain a clearer signal.
Long vs. short-term energy storage: sensitivity analysis.
Schoenung, Susan M. (Longitude 122 West, Inc., Menlo Park, CA); Hassenzahl, William V. (Advanced Energy Analysis, Piedmont, CA)
2007-07-01
This report extends earlier work to characterize long-duration and short-duration energy storage technologies, primarily on the basis of life-cycle cost, and to investigate sensitivities to various input assumptions. Another technology--asymmetric lead-carbon capacitors--has also been added. Energy storage technologies are examined for three application categories--bulk energy storage, distributed generation, and power quality--with significant variations in discharge time and storage capacity. Sensitivity analyses include cost of electricity and natural gas, and system life, which impacts replacement costs and capital carrying charges. Results are presented in terms of annual cost, $/kW-yr. A major variable affecting system cost is hours of storage available for discharge.
Sensitive Detection of Deliquescent Bacterial Capsules through Nanomechanical Analysis.
Nguyen, Song Ha; Webb, Hayden K
2015-10-20
Encapsulated bacteria usually exhibit strong resistance to a wide range of sterilization methods, and are often virulent. Early detection of encapsulation can be crucial in microbial pathology. This work demonstrates a fast and sensitive method for the detection of encapsulated bacterial cells. Nanoindentation force measurements were used to confirm the presence of deliquescent bacterial capsules surrounding bacterial cells. Force/distance approach curves contained characteristic linear-nonlinear-linear domains, indicating cocompression of the capsular layer and cell, indentation of the capsule, and compression of the cell alone. This is a sensitive method for the detection and verification of the encapsulation status of bacterial cells. Given that this method was successful in detecting the nanomechanical properties of two different layers of cell material, i.e. distinguishing between the capsule and the remainder of the cell, further development may potentially lead to the ability to analyze even thinner cellular layers, e.g. lipid bilayers.
Stochastic averaging and sensitivity analysis for two scale reaction networks
NASA Astrophysics Data System (ADS)
Hashemi, Araz; Núñez, Marcel; Plecháč, Petr; Vlachos, Dionisios G.
2016-02-01
In the presence of multiscale dynamics in a reaction network, direct simulation methods become inefficient as they can only advance the system on the smallest scale. This work presents stochastic averaging techniques to accelerate computations for obtaining estimates of expected values and sensitivities with respect to the steady state distribution. A two-time-scale formulation is used to establish bounds on the bias induced by the averaging method. Further, this formulation provides a framework to create an accelerated "averaged" version of most single-scale sensitivity estimation methods. In particular, we propose the use of a centered ergodic likelihood ratio method for steady state estimation and show how one can adapt it to accelerated simulations of multiscale systems. Finally, we develop an adaptive "batch-means" stopping rule for determining when to terminate the micro-equilibration process.
Sensitivity analysis of random shell-model interactions
NASA Astrophysics Data System (ADS)
Krastev, Plamen; Johnson, Calvin
2010-02-01
The input to the configuration-interaction shell model includes many dozens or even hundreds of independent two-body matrix elements. Previous studies have shown that when fitting to experimental low-lying spectra, the greatest sensitivity is to only a few linear combinations of matrix elements. Following Brown and Richter [1], here we consider general two-body interactions in the 1s-0d shell and find that the low-lying spectra are also only sensitive to a few linear combinations of two-body matrix elements. We find, in particular, that the ground-state energies for both the random and non-random (here given by the USDB) interactions are dominated by similar matrix elements, which we try to interpret in terms of monopole and contact interactions, while the excitation energies have a completely different character. [1] B. Alex Brown and W. A. Richter, Phys. Rev. C 74, 034315 (2006)
NASA Astrophysics Data System (ADS)
Jiang, Shan; Wang, Fang; Shen, Luming; Liao, Guiping; Wang, Lin
2017-03-01
Spectrum technology has been widely used in non-destructive crop testing for crop information acquisition. Since a spectrum covers a wide range of bands, extracting the sensitive bands is of critical importance. In this paper, we propose a methodology to extract the sensitive spectrum bands of rapeseed using multiscale multifractal detrended fluctuation analysis. The obtained sensitive bands are relatively robust in the range of 534 nm-574 nm. Further, using the multifractal parameter (Hurst exponent) of the extracted sensitive bands, we propose a prediction model to forecast Soil and Plant Analyzer Development (SPAD) values, often used as an indicator of chlorophyll content, and an identification model to distinguish different planting patterns. Three vegetation indices (VIs) based on previous work are used for comparison. Three evaluation indicators employed in the SPAD value prediction model, namely the root mean square error, the correlation coefficient, and the relative error, all demonstrate that the Hurst exponent has the best performance. Four rapeseed compound planting factors, namely seeding method, planting density, fertilizer type, and weed control method, are considered in the identification model. The Youden indices calculated by the random decision forest method and the K-nearest neighbor method show that the Hurst exponent is superior to the other three VIs, and to their combination, for the seeding method factor. In addition, there is no significant difference among the five features for the other three planting factors. This interesting finding suggests that transplanting and direct seeding make a big difference in the growth of rapeseed.
NASA Astrophysics Data System (ADS)
Mockler, Eva M.; O'Loughlin, Fiachra E.; Bruen, Michael
2016-05-01
Increasing pressures on water quality due to intensification of agriculture have raised demands for environmental modeling to accurately simulate the movement of diffuse (nonpoint) nutrients in catchments. As hydrological flows drive the movement and attenuation of nutrients, individual hydrological processes in models should be adequately represented for water quality simulations to be meaningful. In particular, the relative contribution of groundwater and surface runoff to rivers is of interest, as increasing nitrate concentrations are linked to higher groundwater discharges. These requirements for hydrological modeling of groundwater contribution to rivers initiated this assessment of internal flow path partitioning in conceptual hydrological models. In this study, a variance-based sensitivity analysis method was used to investigate parameter sensitivities and flow partitioning of three conceptual hydrological models simulating 31 Irish catchments. We compared two established conceptual hydrological models (NAM and SMARG) and a new model (SMART), produced especially for water quality modeling. In addition to the criteria that assess streamflow simulations, a ratio of average groundwater contribution to total streamflow was calculated for all simulations over the 16-year study period. As observed time series of groundwater contributions to streamflow are not available at the catchment scale, the groundwater ratios were evaluated against average annual indices of base flow and deep groundwater flow for each catchment. The exploration of sensitivities of internal flow path partitioning was a specific focus to assist in evaluating model performances. Results highlight that model structure has a strong impact on simulated groundwater flow paths. Sensitivity to the internal pathways in the models is not reflected in the performance criteria results. This demonstrates that simulated groundwater contribution should be constrained by independent data to ensure results
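A variance-based sensitivity analysis of the kind used in this study apportions output variance among inputs via first-order Sobol indices. The following sketch (plain Python, Monte Carlo pick-freeze estimator in the style of Saltelli) uses an invented two-input toy model, not the hydrological models from the abstract; for the linear model below the exact indices are 16/17 and 1/17.

```python
import random

def sobol_first_order(model, k, n=20000, seed=1):
    """Monte Carlo pick-freeze estimate of first-order Sobol indices for a
    model with k independent Uniform(0,1) inputs."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [model(x) for x in A]
    yB = [model(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    S = []
    for i in range(k):
        # AB_i: matrix A with column i replaced by column i of B.
        yABi = [model([b[j] if j == i else a[j] for j in range(k)])
                for a, b in zip(A, B)]
        Vi = sum(yb * (yab - ya) for yb, yab, ya in zip(yB, yABi, yA)) / n
        S.append(Vi / var)
    return S

# Toy "model": output variance dominated by the first input.
model = lambda x: 4.0 * x[0] + x[1]
S = sobol_first_order(model, 2)
print([round(s, 2) for s in S])  # roughly [0.94, 0.06]
```

In the study's setting the "model" would be a full catchment simulation and the output could be the groundwater contribution ratio rather than streamflow error, which is how flow-partitioning sensitivities are separated from performance-criteria sensitivities.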
Sensitivity analysis of eigenvalues for an electro-hydraulic servomechanism
NASA Astrophysics Data System (ADS)
Stoia-Djeska, M.; Safta, C. A.; Halanay, A.; Petrescu, C.
2012-11-01
Electro-hydraulic servomechanisms (EHSM) are important components of flight control systems, and their role is to control the movement of the flying control surfaces in response to the movement of the cockpit controls. As flight-control systems, EHSMs have a fast dynamic response, a high power-to-inertia ratio and high control accuracy. The paper is devoted to the study of the sensitivity of an electro-hydraulic servomechanism used for aircraft aileron actuation. The mathematical model of the EHSM used in this paper includes a large number of parameters whose actual values may vary within some ranges of uncertainty. It consists of a nonlinear ordinary differential equation system composed of the mass and energy conservation equations, the actuator movement equations and the controller equation. In this work the focus is on the sensitivities of the eigenvalues of the linearized homogeneous system, which are the partial derivatives of the eigenvalues of the state-space system with respect to the parameters. These are obtained using a modal approach based on the eigenvectors of the direct and adjoint state-space systems. To calculate the eigenvalues and their sensitivities, the system's Jacobian and its partial derivatives with respect to the parameters are determined. The calculation of the derivative of the Jacobian matrix with respect to the parameters is not a simple task, and in many situations it must be done numerically. The system stability is studied in relation to three parameters: m, the equivalent inertial load of the primary control surface reduced to the actuator rod; B, the bulk modulus of the oil; and p, a pressure supply proportionality coefficient. All the sensitivities calculated in this work are in good agreement with those obtained through recalculations.
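The modal formula behind this approach is d(lambda)/dp = y^T (dA/dp) x / (y^T x), where x and y are the right and left eigenvectors of the system matrix A. The sketch below (plain Python) uses an invented symmetric 2x2 matrix, so left and right eigenvectors coincide; it is not the EHSM Jacobian, just a check of the formula against a brute-force derivative.

```python
def eig2(A):
    # Eigenvalues of a 2x2 matrix from its characteristic polynomial.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = (tr * tr - 4.0 * det) ** 0.5
    return (tr - disc) / 2.0, (tr + disc) / 2.0

def eigvec2(A, lam):
    # Right eigenvector for eigenvalue lam (assumes A[0][1] != 0).
    return [1.0, (lam - A[0][0]) / A[0][1]]

def A_of(p):
    # Symmetric toy system matrix: left and right eigenvectors coincide.
    return [[2.0 + p, 1.0], [1.0, 3.0]]

dA_dp = [[1.0, 0.0], [0.0, 0.0]]  # exact derivative of A w.r.t. p

p = 0.5
lam = eig2(A_of(p))[1]                  # track the largest eigenvalue
x = eigvec2(A_of(p), lam)
num = sum(x[i] * dA_dp[i][j] * x[j] for i in range(2) for j in range(2))
dlam_dp = num / sum(xi * xi for xi in x)  # y^T (dA/dp) x / (y^T x), y = x

# Brute-force check by central differences on the eigenvalue itself.
h = 1e-6
fd = (eig2(A_of(p + h))[1] - eig2(A_of(p - h))[1]) / (2 * h)
print(dlam_dp, fd)  # modal formula matches the recalculated derivative
```

For a non-symmetric Jacobian, as in the EHSM model, the left eigenvector comes from the adjoint system, which is why the paper needs both direct and adjoint eigenvectors.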
Thermal analysis of microlens formation on a sensitized gelatin layer
Muric, Branka; Pantelic, Dejan; Vasiljevic, Darko; Panic, Bratimir; Jelenkovic, Branislav
2009-07-01
We analyze a mechanism of direct laser writing of microlenses. We find that thermal effects and photochemical reactions are responsible for microlens formation on a sensitized gelatin layer. An infrared camera was used to assess the temperature distribution during the microlens formation, while the diffraction pattern produced by the microlens itself was used to estimate optical properties. The study of thermal processes enabled us to establish the correlation between thermal and optical parameters.
Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces
NASA Technical Reports Server (NTRS)
Thomas, A. M.; Tiwari, S. N.
1997-01-01
A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. A graphical interface is developed that dynamically changes the surface of the airplane configuration as the input design variables change. The software is user friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.
Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening
Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.
2014-12-01
The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV 5 mA light ions superconducting RF linac. Phase-I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5 - 4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor to reach higher ion energy and beam power is related to the HWR sensitivity to the liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.
Adjoint based sensitivity analysis of a reacting jet in crossflow
NASA Astrophysics Data System (ADS)
Sashittal, Palash; Sayadi, Taraneh; Schmid, Peter
2016-11-01
With current advances in computational resources, high fidelity simulations of reactive flows are increasingly being used as predictive tools in various industrial applications. In order to capture the combustion process accurately, detailed/reduced chemical mechanisms are employed, which in turn rely on various model parameters. Therefore, it would be of great interest to quantify the sensitivities of the predictions with respect to the introduced models. Due to the high dimensionality of the parameter space, methods such as finite differences which rely on multiple forward simulations prove to be very costly and adjoint based techniques are a suitable alternative. The complex nature of the governing equations, however, renders an efficient strategy in finding the adjoint equations a challenging task. In this study, we employ the modular approach of Fosas de Pando et al. (2012), to build a discrete adjoint framework applied to a reacting jet in crossflow. The developed framework is then used to extract the sensitivity of the integrated heat release with respect to the existing combustion parameters. Analyzing the sensitivities in the three-dimensional domain provides insight towards the specific regions of the flow that are more susceptible to the choice of the model.
Computational aspects of sensitivity calculations in transient structural analysis
NASA Technical Reports Server (NTRS)
Greene, William H.; Haftka, Raphael T.
1988-01-01
A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
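The accuracy trade-off among the first two techniques (forward versus central differencing) can be illustrated on a response quantity with a known derivative. The sketch below (plain Python) uses an undamped oscillator with an analytic solution as a stand-in for the transient response; the structural problem and parameters from the paper are not reproduced, only the truncation-error behavior.

```python
import math

def response(k, T=2.0):
    # Undamped oscillator u'' + k u = 0, u(0) = 1, u'(0) = 0, evaluated
    # at time T: u(T) = cos(sqrt(k) T). This is the "response quantity".
    return math.cos(math.sqrt(k) * T)

def d_response_exact(k, T=2.0):
    # Analytic sensitivity of the response with respect to stiffness k.
    return -math.sin(math.sqrt(k) * T) * T / (2.0 * math.sqrt(k))

k, h = 4.0, 1e-3
forward = (response(k + h) - response(k)) / h           # O(h) error
central = (response(k + h) - response(k - h)) / (2 * h)  # O(h^2) error
exact = d_response_exact(k)

print(abs(forward - exact))  # first-order truncation error
print(abs(central - exact))  # markedly smaller for the same step size
```

The paper's third technique, explicit differentiation of the equations of motion, avoids this truncation error entirely at the cost of deriving and integrating the sensitivity equations, which is why the comparison also tracks condition and modal convergence errors.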
Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...
2017-02-16
Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
Tinnemann, Peter; Stöber, Yvonne; Roll, Stephanie; Vauth, Christoph; Willich, Stefan N.; Greiner, Wolfgang
2010-01-01
Background Besides clinical and radiological examination, instrumental functional analyses are performed as diagnostic procedures for craniomandibular dysfunctions. Instrumental functional analyses incur substantial costs and show considerable variability between individual dental practices. Objectives On the basis of published scientific evidence, this report determines the validity of the instrumental functional analysis for the diagnosis of craniomandibular dysfunctions compared to clinical diagnostic procedures; the differences among the various forms of the instrumental functional analysis; the existence of a dependency on additional factors; and the need for further research. In addition, the cost-effectiveness of the instrumental functional analysis is analysed in a health-policy context, and social, legal and ethical aspects are considered. Methods A literature search is performed in over 27 databases and by hand. Relevant companies and institutions are contacted concerning unpublished studies. The inclusion criteria for publications are (i) diagnostic studies with the indication "craniomandibular malfunction", (ii) a comparison between clinical and instrumental functional analysis, (iii) publication since 1990, (iv) publication in English or German. The identified literature is evaluated by two scientists regarding the relevance of its content and its methodical quality. Results The systematic database search resulted in 962 hits. 187 medical and economic complete publications are evaluated. Since the evaluated studies are not relevant enough to answer the medical or health economic questions, no study is included. Discussion The inconsistent terminology concerning craniomandibular dysfunctions and instrumental functional analyses results in a broad literature search in databases and an extensive search by hand. Since no relevant results concerning the validity of the instrumental functional analysis in comparison to the clinical functional analysis
NASA Astrophysics Data System (ADS)
Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.
2015-07-01
Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
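The paper's central finding, that model outputs are more sensitive to forcing biases than to zero-mean random errors, can be sketched with a hypothetical degree-day ablation surrogate in place of the Utah Energy Balance model. The model, forcing statistics, and error magnitudes below are illustrative assumptions only:

```python
import random

def ablation_model(temperature_series):
    # toy degree-day surrogate: melt accumulates when forcing temperature > 0 C
    return sum(max(t, 0.0) for t in temperature_series)

random.seed(42)
truth = [random.gauss(1.0, 3.0) for _ in range(1000)]   # synthetic "true" forcing
base = ablation_model(truth)

# scenario 1: constant +0.5 C bias in the temperature forcing
biased = ablation_model([t + 0.5 for t in truth])

# scenario 2: zero-mean random error with 0.5 C standard deviation
noisy = ablation_model([t + random.gauss(0.0, 0.5) for t in truth])

bias_effect = abs(biased - base)
noise_effect = abs(noisy - base)
assert bias_effect > noise_effect   # biases accumulate; random errors largely cancel
```

The intuition is the same one the Sobol' analysis quantifies rigorously: a bias shifts every forcing sample in the same direction, while independent random errors mostly cancel in time-integrated outputs.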
Sensitivity analysis and dynamic modification of modal parameter in mechanical transmission system
NASA Astrophysics Data System (ADS)
Xie, Shao-Wang; Chen, Qi-Lian; Chen, Chang-Zheng; Li, Qing-Fen
2005-12-01
Sensitivity analysis is one of the effective methods in dynamic modification. The sensitivity of modal parameters such as the natural frequencies and mode shapes in undamped free vibration of a mechanical transmission system is analyzed in this paper. In particular, the sensitivities of the modal parameters to physical parameters of the shaft system, such as the inertia and stiffness, are given. A calculation formula for dynamic modification is presented based on the analysis of modal parameters. With a mechanical transmission system as an example, the sensitivities of natural frequencies and mode shapes are calculated and analyzed. Furthermore, the dynamic modification is also carried out and a good result is obtained.
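For an undamped system K φ = λ M φ with mass-normalized modes, the eigenvalue sensitivity to a stiffness parameter reduces to φᵀ(∂K/∂k)φ. A minimal sketch on an illustrative two-degree-of-freedom spring-mass chain (not the paper's transmission system), verified against a finite difference:

```python
import math

def eigen_2dof(k1, k2):
    # undamped 2-DOF chain: K = [[k1+k2, -k2], [-k2, k2]], M = identity
    a, b, c = k1 + k2, -k2, k2
    tr, det = a + c, a * c - b * b
    disc = math.sqrt(tr * tr - 4.0 * det)
    lam = (tr - disc) / 2.0                 # lowest eigenvalue (rad/s)^2
    phi = [1.0, (lam - a) / b]              # unnormalized mode shape
    norm = math.sqrt(phi[0] ** 2 + phi[1] ** 2)   # mass-normalize (M = I)
    return lam, [phi[0] / norm, phi[1] / norm]

k1, k2 = 2.0, 1.0
lam, phi = eigen_2dof(k1, k2)

# analytical sensitivity: dK/dk1 has one nonzero entry, so
# d(lambda)/d(k1) = phi^T (dK/dk1) phi = phi[0]^2
s_analytic = phi[0] ** 2

# central finite-difference check
h = 1e-6
s_fd = (eigen_2dof(k1 + h, k2)[0] - eigen_2dof(k1 - h, k2)[0]) / (2 * h)
assert abs(s_analytic - s_fd) < 1e-5
```

The same formula underpins dynamic modification: it predicts how much a proposed stiffness change will shift each natural frequency without re-solving the full eigenproblem.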
Soriano-Maldonado, Alberto; Klokker, Louise; Bartholdy, Cecilie; Bandak, Elisabeth; Ellegaard, Karen; Bliddal, Henning; Henriksen, Marius
2016-01-01
Objective To assess the effects of one intra-articular corticosteroid injection two weeks prior to an exercise-based intervention program for reducing pain sensitivity in patients with knee osteoarthritis (OA). Design Randomized, masked, parallel, placebo-controlled trial involving 100 participants with clinical and radiographic knee OA that were randomized to one intra-articular injection on the knee with either 1 ml of 40 mg/ml methylprednisolone (corticosteroid) dissolved in 4 ml lidocaine (10 mg/ml) or 1 ml isotonic saline (placebo) mixed with 4 ml lidocaine (10 mg/ml). Two weeks after the injections all participants undertook a 12-week supervised exercise program. Main outcomes were changes from baseline in pressure-pain sensitivity (pressure-pain threshold [PPT] and temporal summation [TS]) assessed using cuff pressure algometry on the calf. These were exploratory outcomes from a randomized controlled trial. Results A total of 100 patients were randomized to receive either corticosteroid (n = 50) or placebo (n = 50); 45 and 44, respectively, completed the trial. Four participants had missing values for PPT and one for TS at baseline; thus modified intention-to-treat populations were analyzed. The mean group difference in changes from baseline at week 14 was 0.6 kPa (95% CI: -1.7 to 2.8; P = 0.626) for PPT and 384 mm×sec (95% CI: -2980 to 3750; P = 0.821) for TS. Conclusions These results suggest that adding intra-articular corticosteroid injection 2 weeks prior to an exercise program does not provide additional benefits compared to placebo in reducing pain sensitivity in patients with knee OA. Trial Registration EU clinical trials (EudraCT): 2012-002607-18 PMID:26871954
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
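The adjoint variable method can be illustrated on a toy two-spring chain (the matrices, load, and design variable below are hypothetical, not from the IFAD implementation): solve Ku = f once for the state, solve one adjoint system Kᵀλ = c for the functional ψ = cᵀu, then evaluate dψ/db = -λᵀ(∂K/∂b)u using postprocessing data only:

```python
def solve2(A, b):
    # Cramer's rule for a 2x2 linear system
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def K_of(k1, k2):
    # stiffness matrix of a two-spring chain, fixed at one end
    return [[k1 + k2, -k2], [-k2, k2]]

k1, k2 = 2.0, 1.0
f = [0.0, 1.0]                     # unit load at the tip
u = solve2(K_of(k1, k2), f)        # state: displacements

# functional: tip displacement psi = u[1], i.e. adjoint load c = [0, 1]
lam = solve2(K_of(k1, k2), [0.0, 1.0])   # K is symmetric, so K^T lam = c

# dK/dk1 has a single nonzero entry: d(K[0][0])/dk1 = 1
dpsi_adjoint = -lam[0] * 1.0 * u[0]

# finite-difference check of the adjoint sensitivity
h = 1e-6
dpsi_fd = (solve2(K_of(k1 + h, k2), f)[1] -
           solve2(K_of(k1 - h, k2), f)[1]) / (2 * h)
assert abs(dpsi_adjoint - dpsi_fd) < 1e-6
```

The appeal the paper highlights is visible even at this scale: one extra adjoint solve gives the exact sensitivity with no perturbation-size tuning, and nothing here required modifying the "analysis code" that produced u.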
NASA Astrophysics Data System (ADS)
Muhlen, Luis S. W.; Najafi, Behzad; Rinaldi, Fabio; Marchesi, Renzo
2014-04-01
Solar troughs are amongst the most commonly used technologies for collecting solar thermal energy, and any attempt to increase the performance of these systems is welcome. In the present study a parabolic solar trough is simulated using a one-dimensional finite element model in which the energy balances for the fluid, the absorber and the envelope in each element are performed. The developed model is then validated using the available experimental data. A sensitivity analysis is performed in the next step in order to study the effect of changing the type of the working fluid and the corresponding Reynolds number on the overall performance of the system. The potential improvement due to the addition of a shield on the upper half of the annulus and to enhancing the convection coefficient of the heat transfer fluid is also studied.
Bolivar, Paula-Andrea; Tracey, Martin; McCord, Bruce
2016-01-01
Experiments were performed to determine the extent of cross-contamination of DNA resulting from secondary transfer due to fingerprint brushes used on multiple items of evidence. Analysis of both standard and low copy number (LCN) STR was performed. Two different procedures were used to enhance sensitivity, post-PCR cleanup and increased cycle number. Under standard STR typing procedures, some additional alleles were produced that were not present in the controls or blanks; however, there was insufficient data to include the contaminant donor as a contributor. Inclusion of the contaminant donor did occur for one sample using post-PCR cleanup. Detection of the contaminant donor occurred for every replicate of the 31 cycle amplifications; however, using LCN interpretation recommendations for consensus profiles, only one sample would include the contaminant donor. Our results indicate that detection of secondary transfer of DNA can occur through fingerprint brush contamination and is enhanced using LCN-DNA methods.
Llorent-Martínez, E J; Ortega-Barrales, P; Molina-Díaz, A; Ruiz-Medina, A
2008-12-01
Orbifloxacin (ORBI) is a third-generation fluoroquinolone developed exclusively for use in veterinary medicine, mainly in companion animals. This antimicrobial agent has bactericidal activity against numerous gram-negative and gram-positive bacteria. A few chromatographic methods for its analysis have been described in the scientific literature. Here, coupling of sequential-injection analysis and solid-phase spectroscopy is described in order to develop, for the first time, a terbium-sensitized luminescent optosensor for analysis of ORBI. The cationic resin Sephadex-CM C-25 was used as solid support and measurements were made at 275/545 nm. The system had a linear dynamic range of 10-150 ng mL(-1), with a detection limit of 3.3 ng mL(-1) and an R.S.D. below 3% (n = 10). The analyte was satisfactorily determined in veterinary drugs and dog and horse urine.
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi
1994-01-01
An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
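ADIFOR works by source transformation of Fortran; purely as an illustration of the underlying forward-mode idea (not of ADIFOR itself), a minimal dual-number sketch propagates exact derivatives alongside values in a single pass:

```python
import math

class Dual:
    """Minimal forward-mode AD value: val + eps * der."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule propagates the derivative component
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    # chain rule for an elementary function
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# differentiate f(x) = x*sin(x) + 3x at x = 2 in one forward pass
x = Dual(2.0, 1.0)                 # seed the input derivative
y = x * sin(x) + 3.0 * x
exact = math.sin(2.0) + 2.0 * math.cos(2.0) + 3.0
assert abs(y.der - exact) < 1e-12
```

Unlike finite differencing, the result is exact to machine precision, which is the property that makes AD-based sensitivities attractive in the structural analysis setting described above.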
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2001-01-01
An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle-of-attack, and freestream Mach number.
Developing optical traps for ultra-sensitive analysis
Zhao, X.; Vieira, D.J.; Guckert, R. |; Crane, S.
1998-09-01
The authors describe the coupling of a magneto-optical trap to a mass separator for the ultra-sensitive detection of selected radioactive species. As a proof-of-principle test, they have demonstrated the trapping of approximately 6 million ⁸²Rb (t1/2 = 75 s) atoms using an ion implantation and heated foil release method for introducing the sample into a trapping cell with minimal gas loading. Gamma-ray counting techniques were used to determine the efficiencies of each step in the process. By far the weakest step in the process is the efficiency of the optical trap itself (0.3%). Further improvements in the quality of the nonstick dryfilm coating on the inside of the trapping cell and the possible use of larger diameter laser beams are indicated. In the presence of a large background of scattered light, this initial work achieved a detection sensitivity of approximately 4,000 trapped atoms. Improved detection schemes using a pulsed trap and gated photon detection method are outlined. Applications of this technology to the areas of environmental monitoring and nuclear proliferation are foreseen.
Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.
2003-01-01
An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
Analysis of the stability and sensitivity of jets in crossflow
NASA Astrophysics Data System (ADS)
Regan, Marc; Mahesh, Krishnan
2016-11-01
Jets in crossflow (transverse jets) are a canonical fluid flow in which a jet of fluid is injected normal to a crossflow. A high-fidelity, unstructured, incompressible, DNS solver is shown (Iyer & Mahesh 2016) to reproduce the complex shear layer instability seen in low-speed jets in crossflow experiments. Vertical velocity spectra taken along the shear layer show good agreement between simulation and experiment. An analogy to countercurrent mixing layers has been proposed to explain the transition from absolute to convective stability with increasing jet to crossflow ratios. Global linear stability and adjoint sensitivity techniques are developed within the unstructured DNS solver in an effort to further understand the stability and sensitivity of jets in crossflow. An Arnoldi iterative approach is used to solve for the most unstable eigenvalues and their associated eigenmodes for the direct and adjoint formulations. Frequencies from the direct and adjoint modal analyses show good agreement with simulation and experiment. Development, validation, and results for the transverse jet will be presented. Supported by AFOSR.
Quantitative and sensitive analysis of CN molecules using laser induced low pressure He plasma
Pardede, Marincan; Hedwig, Rinda; Abdulmadjid, Syahrun Nur; Lahna, Kurnia; Idris, Nasrullah; Ramli, Muliadi; Jobiliong, Eric; Suyanto, Hery; Marpaung, Alion Mangasi; Suliyanti, Maria Margaretha; Tjia, May On
2015-03-21
We report the results of an experimental study on CN 388.3 nm and C I 247.8 nm emission characteristics using 40 mJ laser irradiation with He and N2 ambient gases. The results obtained with N2 ambient gas show an undesirable interference effect between the native CN emission and the emission of CN molecules arising from the recombination of native C ablated from the sample with the N dissociated from the ambient gas. This problem is overcome by the use of He ambient gas at a low pressure of 2 kPa, which also offers the additional advantages of cleaner and stronger emission lines. The result of applying this favorable experimental condition to emission spectrochemical measurement of milk samples having various protein concentrations is shown to yield a close to linear calibration curve with a near zero extrapolated intercept. Additionally, a low detection limit of 5 μg/g is found in this experiment, making it potentially applicable for quantitative and sensitive CN analysis. The viability of laser induced breakdown spectroscopy with low pressure He gas is also demonstrated by the result of its application to spectrochemical analysis of fossil samples. Furthermore, with the use of CO2 ambient gas at 600 Pa mimicking the Mars atmosphere, this technique also shows promising applications to exploration on Mars.
Quantitative and sensitive analysis of CN molecules using laser induced low pressure He plasma
NASA Astrophysics Data System (ADS)
Pardede, Marincan; Hedwig, Rinda; Abdulmadjid, Syahrun Nur; Lahna, Kurnia; Idris, Nasrullah; Jobiliong, Eric; Suyanto, Hery; Marpaung, Alion Mangasi; Suliyanti, Maria Margaretha; Ramli, Muliadi; Tjia, May On; Lie, Tjung Jie; Lie, Zener Sukra; Kurniawan, Davy Putra; Kurniawan, Koo Hendrik; Kagawa, Kiichiro
2015-03-01
We report the results of an experimental study on CN 388.3 nm and C I 247.8 nm emission characteristics using 40 mJ laser irradiation with He and N2 ambient gases. The results obtained with N2 ambient gas show an undesirable interference effect between the native CN emission and the emission of CN molecules arising from the recombination of native C ablated from the sample with the N dissociated from the ambient gas. This problem is overcome by the use of He ambient gas at a low pressure of 2 kPa, which also offers the additional advantages of cleaner and stronger emission lines. The result of applying this favorable experimental condition to emission spectrochemical measurement of milk samples having various protein concentrations is shown to yield a close to linear calibration curve with a near zero extrapolated intercept. Additionally, a low detection limit of 5 μg/g is found in this experiment, making it potentially applicable for quantitative and sensitive CN analysis. The viability of laser induced breakdown spectroscopy with low pressure He gas is also demonstrated by the result of its application to spectrochemical analysis of fossil samples. Furthermore, with the use of CO2 ambient gas at 600 Pa mimicking the Mars atmosphere, this technique also shows promising applications to exploration on Mars.
ANALYSIS OF DISTRIBUTION FEEDER LOSSES DUE TO ADDITION OF DISTRIBUTED PHOTOVOLTAIC GENERATORS
Tuffner, Francis K.; Singh, Ruchi
2011-08-09
Distributed generators (DG) are small scale power supplying sources owned by customers or utilities and scattered throughout the power system distribution network. Distributed generation can be both renewable and non-renewable. Addition of distributed generation is primarily to increase feeder capacity and to provide peak load reduction. However, this addition comes with several impacts on the distribution feeder. Several studies have shown that addition of DG leads to reduction of feeder loss. However, most of these studies have considered lumped load and distributed load models to analyze the effects on system losses, where the dynamic variation of load due to seasonal changes is ignored. It is very important for utilities to minimize the losses under all scenarios to decrease revenue losses, promote efficient asset utilization, and therefore, increase feeder capacity. This paper will investigate an IEEE 13-node feeder populated with photovoltaic generators on detailed residential houses with water heater, Heating Ventilation and Air conditioning (HVAC) units, lights, and other plug and convenience loads. An analysis of losses for different power system components, such as transformers, underground and overhead lines, and triplex lines, will be performed. The analysis will utilize different seasons and different solar penetration levels (15%, 30%).
Analysis of redox additive-based overcharge protection for rechargeable lithium batteries
NASA Technical Reports Server (NTRS)
Narayanan, S. R.; Surampudi, S.; Attia, A. I.; Bankston, C. P.
1991-01-01
The overcharge condition in secondary lithium batteries employing redox additives for overcharge protection has been theoretically analyzed in terms of a finite linear diffusion model. The analysis leads to expressions relating the steady-state overcharge current density and cell voltage to the concentration, diffusion coefficient, standard reduction potential of the redox couple, and interelectrode distance. The model permits the estimation of the maximum permissible overcharge rate for any chosen set of system conditions. Digital simulation of the overcharge experiment leads to numerical representation of the potential transients, and an estimate of the influence of diffusion coefficient and interelectrode distance on the transient attainment of the steady state during overcharge. The model has been experimentally verified using 1,1'-dimethylferrocene as a redox additive. The analysis of the experimental results in terms of the theory allows the calculation of the diffusion coefficient and the formal potential of the redox couple. The model and the theoretical results may be exploited in the design and optimization of overcharge protection by the redox additive approach.
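For finite linear diffusion, the steady-state shuttle current density takes the familiar limiting form i_lim = nFDC/d, which bounds the overcharge rate the additive can carry. A sketch with illustrative parameter values (the numbers below are assumptions, not those of the paper):

```python
# steady-state diffusion-limited shuttle current density (finite linear diffusion):
#   i_lim = n * F * D * C / d
F = 96485.0    # Faraday constant, C/mol
n = 1          # electrons transferred per shuttle molecule
D = 2.0e-6     # cm^2/s, assumed diffusion coefficient of the redox additive
C = 1.0e-4     # mol/cm^3 (0.1 M), assumed additive concentration
d = 0.025      # cm, assumed interelectrode distance

i_lim = n * F * D * C / d      # A/cm^2: maximum sustainable overcharge current
assert 7e-4 < i_lim < 8e-4     # ~0.77 mA/cm^2 for these assumed values
```

The proportionalities are the design levers the paper identifies: a higher concentration or shorter interelectrode gap raises the permissible overcharge rate linearly.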
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.
Analysis on sensitivity and landscape ecological spatial structure of site resources.
Li, Zhen; He, Fang; Wu, Qiao-jun; Tao, Wei
2003-03-01
This article establishes a set of indicators and standards for landscape ecological sensitivity analysis of site resources by using the theories and approaches of landscape ecology. It uses the landscape diversity index (H), evenness (E), natural degree (N), and contrast degree (C) to study the spatial structure and landscape heterogeneity of site resources, and thus provides a qualitative-quantitative evaluation method for land planning and management of small and medium scale areas. The analysis of Yantian District, Shenzhen, China showed that Wutong Mountain belonged to a high landscape ecological sensitivity area, while Sanzhoutian Reservoir and Shangping Reservoir were areas of medium landscape sensitivity and high ecological sensitivity; Dameisha and Xiaomeisha belonged to a medium sensitivity area owing to the decline of natural ecological areas. Shatoujiao and Yantian Pier belonged to low sensitivity areas, but urban landscape ecological development had reshaped and influenced their landscape ecological roles to a great extent. Suggestions on planning, protection goals and development intensity for each site or district are raised.
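The diversity index H and evenness E used above are commonly computed from patch-area proportions (Shannon form). A minimal sketch with hypothetical patch areas; the area values and number of landscape types are illustrative assumptions:

```python
import math

def shannon_diversity(areas):
    # H = -sum(p_i * ln p_i) over landscape-type area proportions p_i
    total = sum(areas)
    p = [a / total for a in areas if a > 0]
    return -sum(pi * math.log(pi) for pi in p)

def evenness(areas):
    # E = H / ln(m), where m is the number of landscape types present
    m = len([a for a in areas if a > 0])
    return shannon_diversity(areas) / math.log(m) if m > 1 else 0.0

# patch areas (ha) of four hypothetical landscape types in a site
areas = [40.0, 30.0, 20.0, 10.0]
H = shannon_diversity(areas)
E = evenness(areas)
assert 0.0 < E <= 1.0    # E = 1 only when all types occupy equal area
```

E near 1 indicates an even mix of landscape types; values well below 1 flag dominance by a single type, which feeds into the sensitivity classification described above.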
Carmichael, Marc G; Liu, Dikai
2015-01-01
Sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with parameters previously insensitive becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.
Preconditioned domain decomposition scheme for three-dimensional aerodynamic sensitivity analysis
NASA Technical Reports Server (NTRS)
Eleshaky, Mohammed E.; Baysal, Oktay
1993-01-01
A preconditioned domain decomposition scheme is introduced for the solution of the 3D aerodynamic sensitivity equation. This scheme uses the iterative GMRES procedure to solve the effective sensitivity equation of the boundary-interface cells in the sensitivity analysis domain-decomposition scheme. Excluding the dense matrices and the effect of cross terms between boundary-interfaces is found to produce an efficient preconditioning matrix.
Chang, Yu-Cheng; Wu, Hui-Ping; Reddy, Nagannagari Masi; Lee, Hsuan-Wei; Lu, Hsueh-Pei; Yeh, Chen-Yu; Diau, Eric Wei-Guang
2013-04-07
The effects of the 4-tert-butylpyridine (TBP) additive in the electrolyte on photovoltaic performance of two push-pull porphyrin sensitizers (YD12 and YD12CN) were examined. Addition of TBP significantly increased the open-circuit voltage (VOC) for YD12 (from 550 to 729 mV) but it was to a lesser extent for YD12CN (from 544 to 636 mV); adding TBP also had the effect of reducing the short-circuit current density (JSC) slightly for YD12 (from 17.65 to 17.19 mA cm(-2)) but it led to a significant reduction for YD12CN (from 16.45 to 9.78 mA cm(-2)). The resulting power conversion efficiencies of the YD12 devices increase from 6.2% to 8.5% whereas those of the YD12CN devices decrease from 5.8% to 4.5%. Based on measurements of temporally resolved photoelectric transients of the devices and femtosecond fluorescence decays of thin-film samples, the poor performance of the YD12CN device in the presence of TBP can be understood as being due to the enhanced charge recombination, decreased electron injection, and a lesser extent of inhibition of the intermolecular energy transfer.
Sobuś, Jan; Kubicki, Jacek; Burdziński, Gotard; Ziółek, Marcin
2015-09-21
Comprehensive studies of all charge-separation processes in efficient carbazole dye-sensitized solar cells are correlated with their photovoltaic parameters. An important role of partial, fast electron recombination from the semiconductor nanoparticles to the oxidized dye is revealed; this takes place on the picosecond and sub-nanosecond timescales. The charge-transfer dynamics in cobalt tris(bipyridyl) based electrolytes and iodide-based electrolyte is observed to depend on potential-determining additives in a similar way. Upon addition of 0.5 M 4-tert-butylpyridine to both types of electrolytes, the stability of the cells is greatly improved; the cell photovoltage increases by 150-200 mV, the electron injection rate decreases about five times (from 5 to 1 ps(-1)), and fast recombination slows down about two to three times. Dye regeneration proceeds at a rate of about 1 μs(-1) in all electrolytes. Electron recombination from titania to cobalt electrolytes is much faster than that to iodide ones.
Yan, Ying; Yi, Grace Y
2016-07-01
Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.
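The regression calibration idea the paper exploits, replacing the error-prone covariate W with E[X | W] before fitting, can be sketched on a simple linear surrogate rather than the additive hazards estimator itself. The simulation parameters and the classical error model W = X + U below are illustrative assumptions:

```python
import random

random.seed(7)
n = 20000
sx2, su2 = 1.0, 0.5            # variances of true covariate X and error U
x = [random.gauss(0.0, sx2 ** 0.5) for _ in range(n)]
w = [xi + random.gauss(0.0, su2 ** 0.5) for xi in x]     # error-prone measurement
y = [2.0 * xi + random.gauss(0.0, 0.3) for xi in x]      # outcome, true slope = 2

def slope(u, v):
    # least-squares slope of v on u
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)
    var = sum((a - mu) ** 2 for a in u) / len(u)
    return cov / var

naive = slope(w, y)                         # attenuated toward zero
reliability = sx2 / (sx2 + su2)             # lambda = var(X) / var(W)
calibrated = [reliability * wi for wi in w] # E[X | W] under normality, zero means
corrected = slope(calibrated, y)

assert abs(naive - 2.0) > abs(corrected - 2.0)   # calibration reduces the bias
```

The attenuation factor here is the classical reliability ratio; the paper's contribution is showing how the analogous correction behaves under the additive hazards model, where error effects differ from the well-documented Cox case.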
Application of liquid chromatography in polymer non-ionic antistatic additives analysis.
González-Rodríguez, M Victoria; Dopico-García, M Sonia; Noguerol-Cal, Rosalía; Carballeira-Amarelo, Tania; López-Vilariño, José M; Fernández-Martínez, Gerado
2010-11-01
This article investigates the applicability of HPLC-UV, ultra performance LC-evaporative light-scattering detection (UPLC-ELSD), HPLC-ESI(+)-MS and HPLC-hybrid linear ion trap (LTQ) Orbitrap MS for the analysis of different non-ionic antistatic additives, Span 20, Span 60, Span 65, Span 80, Span 85 (sorbitan fatty acid esters), Atmer 129 (glycerol fatty acid ester) and Atmer 163 (ethoxylated alkylamine). Several alkyl chain length or different degrees of esterification of polyol derivatives can be present in commercial mixtures of these polymer additives. Therefore, their identification and quantification is complicated. Qualitative composition of the studied compounds was analysed by MS. HPLC-UV, UPLC-ELSD and HPLC-LTQ Orbitrap MS methods were applied to the quantitative determination of the different Spans, Atmer 129 and Atmer 163, respectively. Quality parameters of these methods were established and no derivatization was necessary.
Inference of Climate Sensitivity from Analysis of Earth's Energy Budget
NASA Astrophysics Data System (ADS)
Forster, Piers M.
2016-06-01
Recent attempts to diagnose equilibrium climate sensitivity (ECS) from changes in Earth's energy budget point toward values at the low end of the Intergovernmental Panel on Climate Change Fifth Assessment Report (AR5)'s likely range (1.5-4.5 K). These studies employ observations but still require an element of modeling to infer ECS. Their diagnosed effective ECS over the historical period of around 2 K holds up to scrutiny, but there is tentative evidence that this underestimates the true ECS from a doubling of carbon dioxide. Different choices of energy imbalance data explain most of the difference between published best estimates, and effective radiative forcing dominates the overall uncertainty. For decadal analyses the largest source of uncertainty comes from a poor understanding of the relationship between ECS and decadal feedback. Considerable progress could be made by diagnosing effective radiative forcing in models.
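The energy-budget inference behind these estimates reduces to a one-line calculation. The numbers below are illustrative values of roughly the magnitudes discussed, not data from any specific study:

```python
# Energy-budget estimate of effective climate sensitivity:
# ECS_eff = F_2x * dT / (dF - dN). All input values are illustrative.
F_2x = 3.7   # W m^-2, forcing from a doubling of CO2
dT = 0.9     # K, observed warming over the historical period
dF = 2.3     # W m^-2, change in effective radiative forcing
dN = 0.65    # W m^-2, change in top-of-atmosphere energy imbalance

ecs_eff = F_2x * dT / (dF - dN)
print(round(ecs_eff, 2))   # about 2 K, the low-end value discussed above
```

The sensitivity of this ratio to the choice of dN data and to the forcing estimate dF is exactly why, as the abstract notes, energy imbalance choices and effective radiative forcing dominate the spread among published best estimates.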
Comprehensive mechanisms for combustion chemistry: Experiment, modeling, and sensitivity analysis
Dryer, F.L.; Yetter, R.A.
1993-12-01
This research program is an integrated experimental/numerical effort to study pyrolysis and oxidation reactions and mechanisms for small-molecule hydrocarbon structures under conditions representative of combustion environments. The experimental aspects of the work are conducted in large-diameter flow reactors, at pressures from one to twenty atmospheres, temperatures from 550 K to 1200 K, and with observed reaction times from 10^-2 to 5 seconds. Gas sampling of stable reactant, intermediate, and product species concentrations provides not only substantial definition of the phenomenology of reaction mechanisms, but a significantly constrained set of kinetic information with negligible diffusive coupling. Analytical techniques used for detecting hydrocarbons and carbon oxides include gas chromatography (GC), and non-dispersive infrared (NDIR) and FTIR methods are utilized for continuous on-line sample detection. Light absorption measurements of OH have also been performed in an atmospheric pressure flow reactor (APFR), and a variable pressure flow reactor (VPFR) is presently being instrumented to perform optical measurements of radicals and highly reactive molecular intermediates. The numerical aspects of the work utilize zero- and one-dimensional premixed, detailed kinetic studies, including path, elemental gradient sensitivity, and feature sensitivity analyses. The program emphasizes the use of hierarchical mechanistic construction to understand and develop detailed kinetic mechanisms. Numerical studies are utilized for guiding experimental parameter selections, for interpreting observations, for extending the predictive range of mechanism constructs, and to study the effects of diffusive transport coupling on reaction behavior in flames. Modeling relies on well-defined and validated mechanisms for the CO/H2/oxidant systems.
Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities
NASA Astrophysics Data System (ADS)
Esposito, Gaetano
Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to the limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach to reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species and 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model. The problems of both understanding non-linear interactions between kinetic parameters and ...
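The core of the PCA-based ranking step can be sketched in a few lines: assemble a local sensitivity matrix over many conditions, extract the dominant principal component, and rank parameters by their loadings. The matrix entries below are invented for illustration; the real analysis builds S over many canonical combustion configurations:

```python
# Sketch of ranking kinetic parameters by Principal Component Analysis of a
# local sensitivity matrix S (rows: target/condition combinations, columns:
# parameters). Values are invented; parameter 0 dominates by construction.
S = [
    [0.9, 0.1, 0.02],
    [0.8, 0.2, 0.01],
    [0.7, 0.3, 0.05],
    [0.95, 0.15, 0.03],
]

n_par = len(S[0])
# Form the overall sensitivity matrix S^T S.
STS = [[sum(row[i] * row[j] for row in S) for j in range(n_par)]
       for i in range(n_par)]

# Power iteration for the dominant eigenvector (the principal component).
v = [1.0] * n_par
for _ in range(200):
    w = [sum(STS[i][j] * v[j] for j in range(n_par)) for i in range(n_par)]
    norm = sum(wi * wi for wi in w) ** 0.5
    v = [wi / norm for wi in w]

# Parameters with the largest |loading| dominate the response and would be
# retained in a skeletal mechanism.
ranking = sorted(range(n_par), key=lambda i: -abs(v[i]))
print(ranking)
```

Because every row (condition) enters the same matrix, the ranking reflects all configurations simultaneously, which is the advantage the abstract claims over case-by-case reduction.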
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
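The analytic-versus-finite-difference validation theme can be illustrated on the smallest possible optimal control problem. This is a scalar LQR sketch with invented values, not the paper's full LQG aeroservoelastic formulation: for xdot = a x + b u with cost integral of (q x^2 + r u^2), the scalar Riccati equation gives the optimal gain k = (a + sqrt(a^2 + b^2 q / r)) / b, which can be differentiated analytically with respect to the plant parameter a and checked by recomputing the gain for perturbed parameters.

```python
import math

# Scalar LQR illustration: analytic sensitivity of the optimal gain to a
# plant parameter, validated by finite-difference recomputation of the gain.
def gain(a, b=1.0, q=1.0, r=1.0):
    # Positive root of the scalar Riccati equation 2*a*p - (b**2/r)*p**2 + q = 0,
    # with k = b*p/r.
    return (a + math.sqrt(a * a + b * b * q / r)) / b

def dgain_da(a, b=1.0, q=1.0, r=1.0):
    s = math.sqrt(a * a + b * b * q / r)
    return (1.0 + a / s) / b          # analytic sensitivity of the gain

a0, h = 1.0, 1e-6
fd = (gain(a0 + h) - gain(a0 - h)) / (2 * h)   # finite-difference validation
print(round(dgain_da(a0), 6), round(fd, 6))
```

The analytic expression requires one Riccati solution; the finite-difference check requires one per perturbed parameter, which is the efficiency argument made in the abstract.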
Gu, Binghe; Meldrum, Brian; McCabe, Terry; Phillips, Scott
2012-01-01
A theoretical treatment was developed and validated that relates analyte concentration and mass sensitivities to injection volume, retention factor, particle diameter, column length, column inner diameter and detection wavelength in liquid chromatography, and to sample volume and extracted volume in solid-phase extraction (SPE). The principles were applied to improve sensitivity for trace analysis of clopyralid in drinking water. It was demonstrated that a concentration limit of detection of 0.02 ppb (μg/L) for clopyralid could be achieved with the use of simple UV detection and 100 mL of a spiked drinking water sample. This enabled reliable quantitation of clopyralid at the targeted 0.1 ppb level. Using a buffered solution as the elution solvent (potassium acetate buffer, pH 4.5, containing 10% methanol) in the SPE procedure was found to be superior to using 100% methanol, as it provided better extraction recovery (70-90%) and precision (5% at the 0.1 ppb level). In addition, the eluted sample was in a weaker solvent than the mobile phase, permitting direct injection of the extracted sample, which enabled a faster cycle time for the overall analysis. Excluding the preparation of calibration standards, the analysis of a single sample, including acidification, extraction, elution and LC run, could be completed in 1 h. The method was used successfully for the determination of clopyralid in over 200 clopyralid monoethanolamine-fortified drinking water samples, which were treated with various water treatment resins.
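The arithmetic behind the SPE preconcentration gain is simple. The 100 mL sample volume and the 0.02 ppb method LOD match the abstract, but the instrument LOD and eluate volume below are assumed round numbers, not values from the paper:

```python
# Back-of-envelope for the SPE preconcentration gain. Assumed inputs: a
# 1 mL eluate volume and a 1.6 ppb instrument LOD (both hypothetical);
# the sample volume and recovery are taken from the text.
v_sample = 100.0    # mL of drinking water extracted (as in the abstract)
v_eluate = 1.0      # mL, assumed eluate volume
recovery = 0.80     # within the 70-90% range reported

enrichment = recovery * v_sample / v_eluate          # effective 80-fold gain
lod_instrument = 1.6                                 # ppb, assumed UV LOD
lod_method = lod_instrument / enrichment
print(enrichment, lod_method)
```

This is the sample-volume/extracted-volume dependence that the theoretical treatment formalizes: the method LOD scales inversely with the enrichment factor.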
NASA Astrophysics Data System (ADS)
Hostache, R.; Hissler, C.; Matgen, P.; Guignard, C.; Bates, P.
2014-09-01
Fine sediments represent an important vector of pollutant diffusion in rivers. When deposited in floodplains and riverbeds, they can be responsible for soil pollution. In this context, this paper proposes a modelling exercise aimed at predicting transport and diffusion of fine sediments and dissolved pollutants. The model is based upon the Telemac hydro-informatic system (dynamic coupling of Telemac-2D and Sysiphe). As empirical and semi-empirical parameters need to be calibrated for such a modelling exercise, a sensitivity analysis is proposed. An innovative point in this study is the assessment of the usefulness of dissolved trace metal contamination information for model calibration. Moreover, to support the modelling exercise, an extensive database was set up during two flood events. It includes water surface elevation records, discharge measurements and geochemistry data such as time series of dissolved/particulate contaminants and suspended-sediment concentrations. The most sensitive parameters were found to be the hydraulic friction coefficients and the sediment particle settling velocity in water. It was also found that model calibration did not benefit from dissolved trace metal contamination information. Using the two monitored hydrological events for calibration and validation, it was found that the model is able to satisfactorily predict suspended-sediment and dissolved-pollutant transport in the river channel. In addition, a qualitative comparison between simulated sediment deposition in the floodplain and a soil contamination map shows that the preferential zones for deposition identified by the model are realistic.
NASA Astrophysics Data System (ADS)
Ritschel, Thomas; Totsche, Kai Uwe
2016-08-01
The identification of transport parameters by inverse modeling often suffers from equifinality or parameter correlation when models are fitted to measurements of the solute breakthrough in column outflow experiments. This parameter uncertainty can be approached by performing multiple experiments with different sets of boundary conditions, each provoking observations that are uniquely attributable to the respective transport processes. A promising approach to further increase the information potential of the experimental outcome is the closed-flow column design. It is characterized by the recirculation of the column effluent into the solution supply vessel that feeds the inflow, which results in a damped sinusoidal oscillation in the breakthrough curve. In order to reveal the potential application of closed-flow experiments, we present a comprehensive sensitivity analysis using common models for adsorption and degradation. We show that the sensitivity of inverse parameter determination with respect to the apparent dispersion can be controlled by the experimenter. For optimal settings, a decrease in parameter uncertainty as compared to classical experiments by an order of magnitude is achieved. In addition, we show a reduced equifinality between rate-limited interactions and apparent dispersion. Furthermore, we illustrate the expected breakthrough curve for equilibrium and nonequilibrium adsorption, the latter showing strong similarities to the behavior found for completely mixed batch reactor experiments. Finally, breakthrough data from a reactive tracer experiment is evaluated using the proposed framework with excellent agreement of model and experimental results.
Addition of three-dimensional isoparametric elements to NASA structural analysis program (NASTRAN)
NASA Technical Reports Server (NTRS)
Field, E. I.; Johnson, S. E.
1973-01-01
Implementation is made of the three-dimensional family of linear, quadratic and cubic isoparametric solid elements in the NASA Structural Analysis program, NASTRAN. This work included program development, installation, testing, and documentation. The addition of these elements to NASTRAN provides a significant increase in modeling capability, particularly for structures requiring specification of temperatures, material properties, displacements, and stresses which vary throughout each individual element. Complete program documentation is presented in the form of new sections and updates for direct insertion into the three NASTRAN manuals. The results of demonstration test problems are summarized. Excellent results are obtained with the isoparametric elements for static, normal mode, and buckling analyses.
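The linear member of this isoparametric family is the standard 8-node hexahedron, whose textbook shape functions are N_i(xi, eta, zeta) = (1/8)(1 + xi*xi_i)(1 + eta*eta_i)(1 + zeta*zeta_i) with nodes at the corners (+/-1, +/-1, +/-1). A quick check of the partition-of-unity and nodal interpolation properties (standard finite element theory, not NASTRAN source code):

```python
from itertools import product

# 8-node trilinear isoparametric hexahedron: evaluate all shape functions at
# a natural-coordinate point and verify the two defining properties.
nodes = list(product((-1.0, 1.0), repeat=3))

def shape(xi, eta, zeta):
    return [0.125 * (1 + xi * a) * (1 + eta * b) * (1 + zeta * c)
            for a, b, c in nodes]

# Shape functions sum to one anywhere in the element...
vals = shape(0.3, -0.2, 0.7)
print(round(sum(vals), 12))                             # 1.0
# ...and each equals one at its own node, zero at the others.
at_node0 = shape(*nodes[0])
print(at_node0[0], max(abs(v) for v in at_node0[1:]))   # 1.0 0.0
```

The quadratic and cubic members of the family add mid-edge (and, for cubic, mid-face) nodes, which is what lets temperatures, material properties, and stresses vary within a single element as the abstract describes.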
NASA Astrophysics Data System (ADS)
Sultanov, Albert H.; Gayfulin, Renat R.; Vinogradova, Irina L.
2008-04-01
Fiber optic telecommunication systems with duplex data transmission over a single fiber require reflection minimization. Reflections may be strong enough to deactivate the system through misoperation of the conventional alarm, and because the system cannot automatically resolve the collision, manual operator control is required. In this paper we propose a technical solution to this problem based on an additional analysis subsystem, implemented on the installed Ufa-city fiber optic CTV system "Crystal". Experience from its maintenance and the results of investigations of the fault tolerance parameters are presented.
Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao
2015-01-01
There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies have focused on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when uncertain parameters are involved. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of each individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when planning plants.
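A minimal variance-based (Sobol-style) first-order sensitivity estimate shows the kind of attribution such GSA methods provide. The cost model and coefficients below are invented stand-ins for a TEA, chosen so that feedstock price dominates by construction:

```python
import random

# First-order variance-based sensitivity indices for a toy additive cost
# model UC = 4*feedstock + 2*interest + 1*O&M (invented coefficients).
# S_i is estimated as Var(E[Y | X_i]) / Var(Y) via binning on X_i.
random.seed(1)
N, B = 40000, 20                       # Monte Carlo samples and bins

xs = [[random.random() for _ in range(3)] for _ in range(N)]
coef = (4.0, 2.0, 1.0)                 # feedstock price dominates by design
ys = [sum(c * xi for c, xi in zip(coef, x)) for x in xs]

my = sum(ys) / N
var_y = sum((y - my) ** 2 for y in ys) / N

def first_order(i):
    # Variance over bins of the conditional mean E[Y | X_i] approximates V_i.
    bins = [[] for _ in range(B)]
    for x, y in zip(xs, ys):
        bins[min(int(x[i] * B), B - 1)].append(y)
    means = [sum(b) / len(b) for b in bins]
    mm = sum(means) / B
    return (sum((m - mm) ** 2 for m in means) / B) / var_y

s = [first_order(i) for i in range(3)]
print([round(v, 2) for v in s])        # the first index is the largest
```

For an additive model the indices sum to about one; the ranking of indices is the quantity that tells an analyst which TEA input (here, the stand-in for feedstock price) deserves the most attention.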
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Park, Sang Hyun; Jeon, Hyeong Kyu; Kim, Jin Bong
2015-01-01
Most of the diphyllobothriid tapeworms isolated from human samples in the Republic of Korea (= Korea) have been identified as Diphyllobothrium nihonkaiense by genetic analysis. This paper reports confirmation of D. nihonkaiense infections in 4 additional human samples obtained between 1995 and 2014, which were analyzed at the Department of Parasitology, Hallym University College of Medicine, Korea. Analysis of the mitochondrial cytochrome c oxidase 1 (cox1) gene revealed a 98.5-99.5% similarity with a reference D. nihonkaiense sequence in GenBank. The present report adds 4 cases of D. nihonkaiense infections to the literature, indicating that the dominant diphyllobothriid tapeworm species in Korea is D. nihonkaiense but not D. latum. PMID:25748716
A multiple additive regression tree analysis of three exposure measures during Hurricane Katrina.
Curtis, Andrew; Li, Bin; Marx, Brian D; Mills, Jacqueline W; Pine, John
2011-01-01
This paper analyses structural and personal exposure to Hurricane Katrina. Structural exposure is measured by flood height and building damage; personal exposure is measured by the locations of 911 calls made during the response. Using these variables, this paper characterises the geography of exposure and also demonstrates the utility of a robust analytical approach in understanding health-related challenges to disadvantaged populations during recovery. Analysis is conducted using a contemporary statistical approach, a multiple additive regression tree (MART), which displays considerable improvement over traditional regression analysis. By using MART, the percentage of improvement in R-squares over standard multiple linear regression ranges from about 62 to more than 100 per cent. The most revealing finding is the modelled verification that African Americans experienced disproportionate exposure in both structural and personal contexts. Given the impact of exposure to health outcomes, this finding has implications for understanding the long-term health challenges facing this population.
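A MART is a gradient-boosted ensemble of regression trees; the mechanism can be sketched with depth-1 trees (stumps) under squared error. The data below are synthetic, and this illustrates the technique only, not the paper's Katrina analysis:

```python
import math

# Miniature "multiple additive regression tree" fit: gradient boosting of
# decision stumps under squared error on a synthetic 1-D regression problem.
xs = [i / 100.0 for i in range(100)]
ys = [math.sin(6.0 * x) for x in xs]

def fit_stump(x, r):
    """Best single-split piecewise-constant fit to the residuals r."""
    best = None
    for t in x[1:]:
        left = [ri for xi, ri in zip(x, r) if xi < t]
        right = [ri for xi, ri in zip(x, r) if xi >= t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((ri - ml) ** 2 for ri in left) + \
              sum((ri - mr) ** 2 for ri in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda xi, t=t, ml=ml, mr=mr: ml if xi < t else mr

nu, stumps = 0.5, []                   # shrinkage and the additive ensemble
pred = [0.0] * len(xs)
for _ in range(100):
    resid = [yi - pi for yi, pi in zip(ys, pred)]
    h = fit_stump(xs, resid)           # each stump fits the current residuals
    stumps.append(h)
    pred = [pi + nu * h(xi) for xi, pi in zip(xs, pred)]

mse0 = sum(yi ** 2 for yi in ys) / len(ys)           # predict-zero baseline
mse = sum((yi - pi) ** 2 for yi, pi in zip(ys, pred)) / len(ys)
print(round(mse, 4))
```

Because each tree is fitted to the residuals of the current ensemble, MART captures non-linearities and interactions automatically, which is the source of the R-squared improvements over linear regression that the paper reports.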
NASA Astrophysics Data System (ADS)
Saptari, Vidi A.; Youcef-Toumi, Kamal
2000-11-01
Noninvasive blood glucose monitoring is a long-pursued goal in clinical diagnostics. Among several other optical methods, near infrared absorption spectroscopy is the most promising for noninvasive application to date. However, practical realization has not yet been achieved. A major obstacle is the low signal-to-noise ratio associated with physiological blood glucose measurement using the near infrared absorption technique. Sensitivity analysis of aqueous glucose absorption signals was performed in the combination band region and in the first-overtone region. The analysis involved quantification of both the glucose absorption signal and the corresponding spectral noise within a particular wavelength region. The glucose absorption band at 4430 cm-1 (2257 nm) in the combination band region was found to give an order of magnitude higher signal-to-noise ratio than the strongest band in the first-overtone region. A Fourier-filtering algorithm was applied to the raw absorbance data to remove some of the unwanted spectral variations. With simple peak-to-peak analysis of the Fourier-filtered absorbance data, repeatability of better than ±0.5 mmol/L was achieved. In addition, the effects of temperature variations on the absorption spectra were studied. The effects of sample temperature were compensated with the application of the Fourier filter.
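The Fourier-filtering step can be sketched on an idealized signal: a synthetic "spectrum" composed of a slowly varying baseline, a mid-frequency analyte band, and high-frequency noise is bandpass filtered in the Fourier domain. All frequencies and amplitudes below are invented for illustration:

```python
import math, cmath

# Idealized Fourier bandpass filter: reject the slow baseline (bin 1) and
# fast noise (bin 45) while passing the analyte band (bin 8). Synthetic data.
N = 128
baseline = [1.0 * math.cos(2 * math.pi * 1 * k / N) for k in range(N)]
band     = [0.3 * math.cos(2 * math.pi * 8 * k / N) for k in range(N)]
noise    = [0.2 * math.cos(2 * math.pi * 45 * k / N) for k in range(N)]
signal   = [b + a + n for b, a, n in zip(baseline, band, noise)]

def dft(x, sign):
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * math.pi * f * k / n)
                for k in range(n)) for f in range(n)]

spec = dft(signal, -1)
# Keep bins 4..20 and their conjugate mirror: the analyte band passes.
keep = set(range(4, 21)) | set(range(N - 20, N - 3))
spec = [s if f in keep else 0.0 for f, s in enumerate(spec)]
filtered = [v.real / N for v in dft(spec, +1)]

err_raw = max(abs(s - a) for s, a in zip(signal, band))
err_filt = max(abs(v - a) for v, a in zip(filtered, band))
print(err_filt < 1e-9, err_filt < err_raw)
```

In practice the absorbance spectrum is not a clean sum of sinusoids, so the passband must be tuned to the width of the glucose band; the point of the sketch is only that baseline drift and high-frequency noise occupy different Fourier bins than the analyte signal.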
NASA Astrophysics Data System (ADS)
Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2015-04-01
Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum
NASA Astrophysics Data System (ADS)
Gehman, V. M.; Goldschmidt, A.; Nygren, D.; Oliveira, C. A. B.; Renner, J.
2013-10-01
Xenon is an especially attractive candidate for both direct WIMP and 0νββ decay searches. Although the current trend has exploited the liquid phase, the gas phase xenon offers remarkable performance advantages for: energy resolution, topology visualization, and discrimination between electron and nuclear recoils. The NEXT-100 experiment, now under construction in the Canfranc Underground Laboratory, Spain, will operate at ~ 15 bars with 100 kg of 136Xe for the 0νββ decay search. We will describe recent results with small prototypes, indicating that NEXT-100 can provide about 0.5% FWHM energy resolution at the decay's Q value (2457.83 keV), as well as rejection of γ-rays with topological cuts. However, sensitivity goals for WIMP dark matter and 0νββ decay searches indicate the probable need for ton-scale active masses. NEXT-100 provides the springboard to reach this scale with xenon gas. We describe a scenario for performing both searches in a single, high-pressure, ton-scale xenon gas detector, without significant compromise to either. In addition, even in a single ton-scale, high-pressure xenon gas TPC, an intrinsic sensitivity to the nuclear recoil direction may exist. This plausibly offers an advance of more than two orders of magnitude relative to current low-pressure TPC concepts. We argue that, in an era of deepening fiscal austerity, such a dual-purpose detector may be possible at acceptable cost, within the time frame of interest, and deserves our collective attention.
Sensitivity Analysis of Mixed Models for Incomplete Longitudinal Data
ERIC Educational Resources Information Center
Xu, Shu; Blozis, Shelley A.
2011-01-01
Mixed models are used for the analysis of data measured over time to study population-level change and individual differences in change characteristics. Linear and nonlinear functions may be used to describe a longitudinal response, individuals need not be observed at the same time points, and missing data, assumed to be missing at random (MAR),…
Kuramoto, S. Janet; Stuart, Elizabeth A.
2013-01-01
Although randomization is the gold standard for estimating causal relationships, many questions in prevention science must be answered through non-experimental studies, often because randomization is either infeasible or unethical. While methods such as propensity score matching can adjust for observed confounding, unobserved confounding is the Achilles heel of most non-experimental studies. This paper describes and illustrates seven sensitivity analysis techniques that assess the sensitivity of study results to an unobserved confounder. These methods were categorized into two groups to reflect differences in their conceptualization of sensitivity analysis, as well as their targets of interest. As a motivating example we examine the sensitivity of the association between maternal suicide and offspring’s risk for suicide attempt hospitalization. While inferences differed slightly depending on the type of sensitivity analysis conducted, overall the association between maternal suicide and offspring’s hospitalization for suicide attempt was found to be relatively robust to an unobserved confounder. The ease of implementation and the insight these analyses provide underscore sensitivity analysis techniques as an important tool for non-experimental studies. The implementation of sensitivity analysis can help increase confidence in results from non-experimental studies and better inform prevention researchers and policymakers regarding potential intervention targets. PMID:23408282
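One widely used quantity of this kind is the E-value of VanderWeele and Ding: the minimum strength of association, on the risk-ratio scale, that an unobserved confounder would need with both exposure and outcome to explain away an observed association. It is shown here as a representative technique, not necessarily one of the seven the paper describes, and the risk ratio is illustrative rather than the maternal-suicide estimate:

```python
import math

# E-value for an observed risk ratio: E = RR + sqrt(RR * (RR - 1)).
# A protective RR < 1 is first inverted onto the harmful scale.
def e_value(rr):
    if rr < 1:
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

print(round(e_value(2.0), 2))   # 3.41: a confounder would need RR ~3.4
                                # with both exposure and outcome to nullify RR=2
```

The appeal matching the abstract's conclusion is the ease of implementation: a single closed-form number summarizes how strong unobserved confounding would have to be before the finding is overturned.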
Wang, Lin; Qu, Moying; Chen, Yao; Zhou, Yaxiong; Wan, Zhi
2016-01-01
Objectives We performed a meta-analysis to explore the effects of adding statins to standard treatment in adult patients with pulmonary hypertension (PH). Methods A systematic search up to December 2015 of Medline, EMBASE, the Cochrane Database of Systematic Reviews and the Cochrane Central Register of Controlled Trials was performed to identify randomized controlled trials of PH patients treated with statins. Results Five studies involving 425 patients were included in this meta-analysis. The results of our analysis showed that statins cannot significantly increase the 6-minute walking distance (6MWD, mean difference [MD] = -0.33 [CI: -18.25 to 17.59]), decrease the BORG dyspnea score (MD = -0.72 [CI: -2.28 to 0.85]), the clinical worsening risk (11% in statins vs. 10.1% in controls, risk ratio = 1.06 [CI: 0.61, 1.83]), or the systolic pulmonary arterial pressure (SPAP) (MD = -0.72 [CI: -2.28 to 0.85]). Subgroup analysis for PH due to COPD or non-COPD also showed no significance. Conclusions Statins have no additional beneficial effect on standard therapy for PH, but the results from the subgroup of PH due to COPD seem intriguing, and further study with a larger sample size and longer follow-up is suggested. PMID:27992469
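The pooled mean differences above come from standard inverse-variance weighting. A minimal fixed-effect sketch with made-up study values (not the trial data summarized in the abstract):

```python
import math

# Fixed-effect inverse-variance pooling of mean differences, the standard
# machinery behind a 6MWD meta-analysis. Study values are invented.
studies = [(-5.0, 9.0), (12.0, 11.0), (-2.0, 7.0)]   # (MD, standard error)

w = [1.0 / se ** 2 for _, se in studies]             # weight = 1 / SE^2
pooled = sum(wi * md for (md, _), wi in zip(studies, w)) / sum(w)
se_pooled = math.sqrt(1.0 / sum(w))
ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
print(round(pooled, 2), [round(c, 2) for c in ci])
# A CI straddling zero, as here, mirrors the paper's null finding for 6MWD.
```

With heterogeneous trials a random-effects model (e.g. DerSimonian-Laird) would widen the interval further; the fixed-effect version shown is the simplest case.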
Regression analysis of mixed recurrent-event and panel-count data with additive rate models.
Zhu, Liang; Zhao, Hui; Sun, Jianguo; Leisenring, Wendy; Robison, Leslie L
2015-03-01
Event-history studies of recurrent events are often conducted in fields such as demography, epidemiology, medicine, and social sciences (Cook and Lawless, 2007, The Statistical Analysis of Recurrent Events. New York: Springer-Verlag; Zhao et al., 2011, Test 20, 1-42). For such analysis, two types of data have been extensively investigated: recurrent-event data and panel-count data. However, in practice, one may face a third type of data, mixed recurrent-event and panel-count data or mixed event-history data. Such data occur if some study subjects are monitored or observed continuously and thus provide recurrent-event data, while the others are observed only at discrete times and hence give only panel-count data. A more general situation is that each subject is observed continuously over certain time periods but only at discrete times over other time periods. There exists little literature on the analysis of such mixed data except that published by Zhu et al. (2013, Statistics in Medicine 32, 1954-1963). In this article, we consider the regression analysis of mixed data using the additive rate model and develop some estimating equation-based approaches to estimate the regression parameters of interest. Both finite sample and asymptotic properties of the resulting estimators are established, and the numerical studies suggest that the proposed methodology works well for practical situations. The approach is applied to a Childhood Cancer Survivor Study that motivated this study.
Jia, Wei; Ling, Yun; Lin, Yuanhui; Chang, James; Chu, Xiaogang
2014-04-04
A new method combining QuEChERS with ultrahigh-performance liquid chromatography and electrospray ionization quadrupole Orbitrap high-resolution mass spectrometry (UHPLC/ESI Q-Orbitrap) was developed for the highly accurate and sensitive screening of 43 antioxidants, preservatives and synthetic sweeteners in dairy products. Response surface methodology was employed for the first time to optimize a quick, easy, cheap, effective, rugged, and safe (QuEChERS) sample preparation method for the determination of these analytes in dairy products. After optimization, the maximum predicted recovery was 99.33%, for aspartame, under the optimized conditions of 10 mL acetonitrile, 1.52 g sodium acetate, 410 mg PSA and 404 mg C18. For the matrices studied, the recovery rates of the other 42 compounds ranged from 89.4% to 108.2%, with coefficients of variation <6.4%. Full-scan MS data acquired with UHPLC/ESI Q-Orbitrap were used to identify and quantify the additives, and the data-dependent scan mode provided fragment ion spectra for confirmation. The mass accuracy routinely obtained is better than 1.5 ppm, and calibration is needed only once a week. The 43 compounds showed linear dynamic ranges of 0.001-1000 μg kg(-1), with correlation coefficients >0.999. The limits of detection for the analytes are in the range 0.0001-3.6 μg kg(-1). This method has been successfully applied to the screening of antioxidants, preservatives and synthetic sweeteners in commercial dairy product samples, and it is very useful for fast screening of different food additives.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
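Parameter sensitivity of a simple population model, in the spirit of the survey, can be demonstrated with logistic growth dN/dt = r N (1 - N/K) and normalized sensitivity coefficients S_p = (p/N) dN/dp estimated by central finite differences. Parameter values are illustrative:

```python
# Normalized parameter sensitivities of logistic growth via central
# finite differences on a forward-Euler simulation.
def simulate(r, K, n0=1.0, T=40.0, dt=0.01):
    n = n0
    for _ in range(int(T / dt)):
        n += dt * r * n * (1.0 - n / K)
    return n

def norm_sens(param, base_r=0.5, base_K=100.0, h=0.01):
    if param == "r":
        lo = simulate(base_r * (1 - h), base_K)
        hi = simulate(base_r * (1 + h), base_K)
        p, dp = base_r, 2 * h * base_r
    else:
        lo = simulate(base_r, base_K * (1 - h))
        hi = simulate(base_r, base_K * (1 + h))
        p, dp = base_K, 2 * h * base_K
    n = simulate(base_r, base_K)
    return (p / n) * (hi - lo) / dp      # S_p = (p/N) dN/dp

s_r, s_K = norm_sens("r"), norm_sens("K")
print(round(s_r, 3), round(s_K, 3))
# Near equilibrium the population tracks K (S_K ~ 1) and is insensitive to r.
```

Ranking parameters this way is exactly the "relative parameter influence" use-case: here, data-collection effort is better spent pinning down the carrying capacity than the growth rate if the long-term population is the output of interest.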
On 3-D modeling and automatic regridding in shape design sensitivity analysis
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Yao, Tse-Min
1987-01-01
The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
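The mechanism behind automatic differentiation can be shown with a toy forward-mode implementation using dual numbers. This is a sketch of the general technique only; the paper applies AD tools to a full flow solver and combines them with hand-differentiated incremental iterative pieces:

```python
# Minimal forward-mode automatic differentiation with dual numbers: each
# value carries its derivative, propagated exactly through arithmetic.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot    # value and derivative together
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x       # f'(x) = 3x^2 + 2

x = Dual(2.0, 1.0)                 # seed the derivative dx/dx = 1
y = f(x)
print(y.val, y.dot)                # 12.0 14.0
```

No finite-difference step size appears anywhere, so the derivative is exact to machine precision, which is why AD-generated derivative code is "constructed quickly with accuracy"; second-order variants propagate an additional second-derivative component in the same spirit.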
Sensitivity Analysis for Atmospheric Infrared Sounder (AIRS) CO2 Retrieval
NASA Technical Reports Server (NTRS)
Gat, Ilana
2012-01-01
The Atmospheric Infrared Sounder (AIRS) is a thermal infrared sensor able to retrieve the daily atmospheric state globally for clear as well as partially cloudy fields of view. The AIRS spectrometer has 2378 channels sensing from 15.4 micrometers to 3.7 micrometers, of which a small subset in the 15-micrometer region has been selected, to date, for CO2 retrieval. To improve upon the current retrieval method, we extended the retrieval calculations to include a prior-estimate component and developed a channel-ranking system to optimize the channels and the number of channels used. The channel-ranking system uses a mathematical formalism to rapidly process and assess the retrieval potential of large numbers of channels. Implementing this system, we identified a larger optimized subset of AIRS channels that can decrease retrieval errors and minimize the overall sensitivity to other interfering contributors, such as water vapor, ozone, and atmospheric temperature. This methodology selects channels globally by accounting for the latitudinal, longitudinal, and seasonal dependencies of the subset. The new methodology increases accuracy in AIRS CO2 as well as other retrievals and enables the extension of retrieved CO2 vertical profiles to altitudes ranging from the lower troposphere to the upper stratosphere. The extended retrieval method estimates CO2 vertical profiles using a maximum-likelihood estimation method. We use model data to demonstrate the beneficial impact of the extended retrieval method, with the new channel-ranking system, on CO2 retrieval.
Design tradeoff studies and sensitivity analysis, appendix B
NASA Technical Reports Server (NTRS)
1979-01-01
Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. In terms of the reduction in total fuel consumption and the resultant decrease in operating expense, the hybrid is much less sensitive than a conventional vehicle to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road-load conditions. It is concluded that modifications to package the propulsion system and battery pack can easily be accommodated within the confines of a modified carryover body such as the Ford LTD.
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water-quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanisms, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake; the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS, and RVC were moderately to slightly sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field-management, fertilizer, and vegetation parameters, CCC, CRM, and RR were slightly sensitive to sediment and particulate pollutants, and the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, while the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were only slightly sensitive to the corresponding outputs. The simulation and verification results for runoff in the Zhongtian watershed show good accuracy, with a deviation of less than 10% during 2005-2010. These results provide direct reference values for AnnAGNPS parameter selection and calibration. The runoff simulation results for the study area also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrologic simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
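The one-at-a-time perturbation method this abstract applies can be sketched in a few lines: perturb one parameter by a small fraction and compare the relative change in the model output. The `toy_runoff` model and its parameter values below are made-up stand-ins, not AnnAGNPS.

```python
# Sketch of perturbation-based sensitivity: the index is the relative
# output change divided by the relative parameter change,
# S = (dY/Y) / (dX/X), for a one-at-a-time perturbation.
def sensitivity_index(model, params, name, delta=0.1):
    base = model(params)
    perturbed = dict(params)
    perturbed[name] *= (1.0 + delta)          # +10% perturbation by default
    return ((model(perturbed) - base) / base) / delta

# Hypothetical toy model: output quadratic in CN, linear in LS.
def toy_runoff(p):
    return p["CN"] ** 2 * p["LS"]

params = {"CN": 70.0, "LS": 1.5}
print(sensitivity_index(toy_runoff, params, "CN"))  # ~2.1 (quadratic dependence)
print(sensitivity_index(toy_runoff, params, "LS"))  # ~1.0 (linear dependence)
```

Ranking parameters by the magnitude of this index is exactly the kind of ordering (e.g. LS and CN sensitive, SONR etc. insensitive) the study reports.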
Sensitivity Analysis of the Barataria Basin Barrier Shoreline Wetland Value Assessment Model
McKay, S. Kyle; Fischenich, J. Craig
2014-07-01
OVERVIEW: Sensitivity analysis is a technique for ...scale restoration projects to reduce marsh loss and maintain these wetlands as healthy functioning ecosystems. The Barataria Basin Barrier Shoreline...
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.
NASA Technical Reports Server (NTRS)
Grady, Joseph E.; Haller, William J.; Poinsatte, Philip E.; Halbig, Michael C.; Schnulo, Sydney L.; Singh, Mrityunjay; Weir, Don; Wali, Natalie; Vinup, Michael; Jones, Michael G.; Patterson, Clark; Santelle, Tom; Mehl, Jeremy
2015-01-01
The research and development activities reported in this publication were carried out under the NASA Aeronautics Research Institute (NARI)-funded project entitled "A Fully Nonmetallic Gas Turbine Engine Enabled by Additive Manufacturing." The objective of the project was to evaluate emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. The results of the activities are described in a three-part report. The first part contains the data and analysis of engine-system trade studies, which were carried out to estimate the reduction in engine emissions and fuel burn enabled by advanced materials and manufacturing processes. A number of key engine components were identified in which advanced materials and additive manufacturing processes would provide the most significant benefits to engine operation. The technical scope of activities included an assessment of the feasibility of using additive manufacturing technologies to fabricate gas turbine engine components from polymer and ceramic matrix composites, which was accomplished by fabricating prototype engine components and testing them under simulated engine operating conditions. The manufacturing process parameters were developed and optimized for polymer and ceramic composites (described in detail in the second and third parts of the report). A number of prototype components (inlet guide vanes (IGVs), acoustic liners, and an engine access door) were additively manufactured using high-temperature polymer materials. Ceramic matrix composite components included turbine nozzle components. In addition, the IGVs and acoustic liners were tested under simulated engine conditions in test rigs. The test results are reported and discussed in detail.
Raman analyzer for sensitive natural gas composition analysis
NASA Astrophysics Data System (ADS)
Sharma, Rachit; Poonacha, Samhitha; Bekal, Anish; Vartak, Sameer; Weling, Aniruddha; Tilak, Vinayak; Mitra, Chayan
2016-10-01
Raman spectroscopy is of significant importance in industrial gas analysis due to its unique capability of quantitative multigas measurement, especially diatomics (N2 and H2), with a single laser. This paper presents the development of a gas analyzer system based on high pressure Raman scattering in a multipass Raman cell and demonstrates its feasibility for real-time natural gas analysis. A 64-pass Raman cell operated at elevated pressure (5 bar) is used to create multiplicative enhancement (proportional to number of passes times pressure) of the natural gas Raman signal. A relatively low power 532-nm continuous wave laser beam (200 mW) is used as the source and the signals are measured through a cooled charge-coupled device grating spectrometer (30-s exposure). A hybrid algorithm based on background-correction and least-squares error minimization is used to estimate gas concentrations. Individual gas component concentration repeatability of the order of 0.1% is demonstrated. Further, the applicability of the technique for natural gas analysis is demonstrated through measurements on calibrated gas mixtures. Experimental details, analyzer characterization, and key measurements are presented to demonstrate the performance of the technique.
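The least-squares concentration estimation this abstract mentions can be sketched as linear spectral unmixing: after background correction, the measured spectrum is modeled as a weighted sum of known pure-component reference spectra. The Gaussian "reference spectra" and component names below are synthetic placeholders, not the analyzer's actual calibration data.

```python
import numpy as np

# Sketch of concentration estimation by linear least squares:
# measured ~ refs @ concentrations, solved with np.linalg.lstsq.
wavenumbers = np.linspace(0, 1, 200)

def peak(center, width=0.02):
    """Synthetic Gaussian band standing in for a Raman line."""
    return np.exp(-((wavenumbers - center) ** 2) / (2 * width ** 2))

# Columns: hypothetical reference spectra for three gas components.
refs = np.column_stack([peak(0.3), peak(0.5), peak(0.8)])

true_conc = np.array([0.90, 0.06, 0.04])          # mole fractions
rng = np.random.default_rng(0)
measured = refs @ true_conc + rng.normal(0, 1e-3, wavenumbers.size)

est, *_ = np.linalg.lstsq(refs, measured, rcond=None)
print(np.round(est, 3))  # close to [0.9, 0.06, 0.04]
```

With well-separated bands the normal equations are well conditioned, which is why a simple least-squares fit can reach the ~0.1% repeatability quoted in the abstract.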
NASA Astrophysics Data System (ADS)
Im, Hyungbin; Bae, Dae Sung; Chung, Jintai
2012-04-01
This paper presents a design sensitivity analysis of dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion which consider mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differential method. From the sensitivity equation along with the equations of motion, the time responses for the sensitivity analysis were obtained by using the Newmark time integration method. The sensitivities of the motor performances such as the electromagnetic torque, rotating speed, and vibration level were analyzed for the six design parameters of rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of the coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness. It was also found that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor resulted in better performances. The newly designed motor showed an improved torque, rotating speed, and vibration level.
NASA Astrophysics Data System (ADS)
Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.
2016-09-01
In this work, a simplified electrochemical and thermal model that can predict both physicochemical and aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to find out their influence on the model output based on simulations under various conditions. The results gave hints on whether a parameter needs particular attention when measured or identified and on the conditions (e.g. temperature, discharge rate) under which it is the most sensitive. A specific simulation profile is designed for parameters involved in aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fitting of the simulated cell voltage with experimental data.
Electric power exchanges with sensitivity matrices: an experimental analysis
Drozdal, Martin
2001-01-01
We describe a fast and incremental method for power-flow computation: fast in the sense that it can be used for real-time power-flow computation, and incremental in the sense that it computes any additional increase or decrease in line congestion caused by a particular contract. This is, to the best of our knowledge, the only method suitable for real-time power-flow computation that at the same time offers a powerful way of dealing with congestion contingency. Many methods for this purpose have been designed, or thought of, but they either lack speed or incrementality, or have never been coded and tested. The author is in the process of obtaining a patent on the methods, algorithms, and procedures described in this paper.
NASA Astrophysics Data System (ADS)
Hasuike, Takashi; Katagiri, Hideki
2010-10-01
This paper proposes a portfolio selection problem that accounts for an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures for portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
Morshed, Monjur; Ingalls, Brian; Ilie, Silvana
2017-01-01
Sensitivity analysis characterizes the dependence of a model's behaviour on system parameters. It is a critical tool in the formulation, characterization, and verification of models of biochemical reaction networks, for which confident estimates of parameter values are often lacking. In this paper, we propose a novel method for sensitivity analysis of discrete stochastic models of biochemical reaction systems whose dynamics occur over a range of timescales. This method combines finite-difference approximations and adaptive tau-leaping strategies to efficiently estimate parametric sensitivities for stiff stochastic biochemical kinetics models, with negligible loss in accuracy compared with previously published approaches. We analyze several models of interest to illustrate the advantages of our method.
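The finite-difference side of the approach this abstract describes can be sketched on a toy birth-death process, using common random numbers (the same seed for both perturbed simulations) to reduce estimator variance. The adaptive tau-leaping part is omitted for brevity; the exact-SSA simulator and parameter values below are illustrative, not from the paper.

```python
import random

# Birth-death process: birth at rate k_birth, death at rate k_death * x.
# Returns the population at t_end from one exact stochastic simulation.
def ssa_final_count(k_birth, k_death=1.0, x0=10, t_end=2.0, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        total = k_birth + k_death * x
        t += rng.expovariate(total)
        if t > t_end:
            return x
        if rng.random() < k_birth / total:
            x += 1
        elif x > 0:
            x -= 1

# Central finite difference of E[X(t_end)] w.r.t. k_birth; reusing the
# same seed per path couples the two runs (common random numbers).
def fd_sensitivity(k, h=0.5, n_paths=1000):
    diffs = [ssa_final_count(k + h, seed=i) - ssa_final_count(k - h, seed=i)
             for i in range(n_paths)]
    return sum(diffs) / (2 * h * n_paths)

# For this process dE[X(t)]/dk_birth = (1 - exp(-k_death * t)) / k_death,
# about 0.86 at t = 2, so the estimate should land near that value.
print(fd_sensitivity(10.0))
```

Without common random numbers the two simulations would be independent and the difference estimator's variance would swamp the small mean shift; sharing the random stream is the standard remedy.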
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
NASA Technical Reports Server (NTRS)
Bittker, David A.; Radhakrishnan, Krishnan
1994-01-01
LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
An analysis of rate-sensitive skin in gas wells
Meehan, D.N.; Schell, E.J.
1983-10-01
This paper documents the analysis of rate-dependent skin in a gas well. Three build-up tests and an isochronal test are analyzed in some detail. The results indicate that the rate-dependent skin is due to non-Darcy flow near the wellbore. Evidence is presented that suggests the non-Darcy flow results from calcium carbonate scale partially plugging the perforations. Also, a summary of a pressure build-up study is included for the wells recently drilled in Champlin's Stratton-Agua Dulce Field.
Dive Angle Sensitivity Analysis for Flight Test Safety and Efficiency
2010-03-01
These points develop into high-speed dives and require an accurate predictive model to prevent possible testing accidents. As a flight test is... Looking back at this concept and approach, Equations 2.1 and 2.4 are combined to obtain Equation 2.5, dh/dt + (V/g)(dV/dt) = V(T - D)/W. ...number of attempts at each test point as well as prevent possible accidents and crashes from data that is misrepresented. The analysis took a Dive
Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems
NASA Technical Reports Server (NTRS)
Hou, Gene J. W.; Kenny, Sean P.
1991-01-01
A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis are derived, and a method for eigenvalue and eigenvector approximate analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
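For the simple (non-repeated) eigenvalue case that this abstract builds on, the first-order sensitivity has a compact closed form: for a symmetric matrix with unit eigenvector v, d(lambda)/dp = v^T (dA/dp) v. The 3x3 matrix below is a toy example, not from the paper, and the result is checked against a central finite difference.

```python
import numpy as np

# Toy parameterized symmetric matrix and its exact parameter derivative.
def A(p):
    return np.array([[2.0 + p, 1.0, 0.0],
                     [1.0,     3.0, p  ],
                     [0.0,     p,   4.0]])

def dA_dp(p):
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0]])

p0 = 0.5
vals, vecs = np.linalg.eigh(A(p0))
v = vecs[:, 0]                      # unit eigenvector of the smallest eigenvalue

# First-order eigenvalue sensitivity: d(lambda)/dp = v^T (dA/dp) v
analytic = v @ dA_dp(p0) @ v

# Central finite-difference check on the smallest eigenvalue.
h = 1e-6
fd = (np.linalg.eigh(A(p0 + h))[0][0] - np.linalg.eigh(A(p0 - h))[0][0]) / (2 * h)
print(analytic, fd)  # the two agree closely
```

The repeated-eigenvalue case the paper addresses is harder precisely because the eigenvectors are not unique there, so this simple formula breaks down and a reparameterization is needed.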
Kinetic modeling and sensitivity analysis of plasma-assisted combustion
NASA Astrophysics Data System (ADS)
Togai, Kuninori
Plasma-assisted combustion (PAC) is a promising combustion-enhancement technique that shows great potential for application to a number of practical combustion systems. In this dissertation, the chemical kinetics associated with PAC are investigated numerically with a newly developed model that describes the chemical processes induced by plasma. To support the model development, experiments were performed using a plasma flow reactor in which fuel oxidation proceeds with the aid of plasma discharges below and above the self-ignition thermal limit of the reactive mixtures. The mixtures used were heavily diluted with Ar in order to study the reactions in temperature-controlled environments by suppressing the temperature changes due to chemical reactions. The temperature of the reactor was varied from 420 K to 1250 K and the pressure was fixed at 1 atm. Simulations were performed for conditions corresponding to the experiments, and the results are compared against each other. Important reaction paths were identified through path-flux and sensitivity analyses. The reaction systems studied in this work are the oxidation of hydrogen, ethylene, and methane, as well as the kinetics of NOx in plasma. In the fuel-oxidation studies, the reaction schemes that control fuel oxidation are analyzed and discussed. With all the fuels studied, the oxidation reactions were extended to lower temperatures with plasma discharges compared to the cases without plasma. The analyses showed that radicals produced by dissociation of the reactants in the plasma play an important role in initiating the reaction sequence. At low temperatures, where the system exhibits a chain-terminating nature, reactions of HO2 were found to play an important role in overall fuel oxidation. The effectiveness of HO2 as a chain terminator was weakened in the ethylene oxidation system, because the reactions of C2H4 + O, which have low activation energies, deflect the flux of O atoms away from HO2. For the
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1996-01-01
An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when the equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions the methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
NASA Technical Reports Server (NTRS)
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Sensitivity analysis of single-layer graphene resonators using atomic finite element method
Lee, Haw-Long; Hsu, Jung-Chang; Lin, Shu-Yu; Chang, Win-Jin
2013-09-28
Atomic finite element simulation is applied to study the natural frequency and sensitivity of a single-layer graphene-based resonator with CCCC, SSSS, CFCF, SFSF, and CFFF boundary conditions using the commercial code ANSYS. The fundamental frequencies of the graphene sheet are compared with the results of a previous finite element study. In addition, the sensitivity of the resonator is compared with earlier work based on nonlocal elasticity theory. The agreement is very good in all considered cases. The sensitivities of the resonator with different boundary conditions are obtained, and their order is CCCC > SSSS > CFCF > SFSF > CFFF. The highest sensitivity is obtained when the attached mass is located at the center of the resonator. This is useful for the design of a highly sensitive graphene-based mass sensor.
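The intuition behind the center placement result can be sketched with an effective spring-mass model of a resonant mass sensor: f = sqrt(k/m_eff)/(2*pi), so an attached mass shifts the frequency by df/dm = -f/(2*m_eff), which is largest in magnitude where the effective modal mass is smallest, i.e. at the mode-shape antinode (the center). The stiffness and mass numbers below are made up for illustration and are not from the paper.

```python
import math

def resonant_freq(k, m):
    """Effective spring-mass resonance, f = sqrt(k/m) / (2*pi)."""
    return math.sqrt(k / m) / (2 * math.pi)

def mass_sensitivity(k, m_eff):
    """df/dm = -f / (2 * m_eff): stronger where the effective mass is smaller."""
    return -resonant_freq(k, m_eff) / (2 * m_eff)

k = 1.0           # N/m, hypothetical effective stiffness
m_center = 1e-18  # kg, effective mass when loading the antinode (center)
m_edge = 4e-18    # kg, larger effective mass for off-center loading

# Center loading gives the larger frequency shift per unit attached mass.
print(abs(mass_sensitivity(k, m_center)) > abs(mass_sensitivity(k, m_edge)))  # True
```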
EH AND S ANALYSIS OF DYE-SENSITIZED PHOTOVOLTAIC SOLAR CELL PRODUCTION.
BOWERMAN,B.; FTHENAKIS,V.
2001-10-01
Photovoltaic solar cells based on a dye-sensitized nanocrystalline titanium dioxide photoelectrode have been researched and reported since the early 1990s. Commercial production of dye-sensitized photovoltaic solar cells has recently been reported in Australia. In this report, current manufacturing methods are described, and estimates are made of annual chemical use and emissions during production. Environmental, health and safety considerations for handling these materials are discussed. This preliminary EH and S evaluation of dye-sensitized titanium dioxide solar cells indicates that some precautions will be necessary to mitigate hazards that could result in worker exposure. Additional information required for a more complete assessment is identified.
Biomechanical modeling and sensitivity analysis of bipedal running ability. II. Extinct taxa.
Hutchinson, John R
2004-10-01
Using an inverse dynamics biomechanical analysis that was previously validated for extant bipeds, I calculated the minimum amount of actively contracting hindlimb extensor muscle that would have been needed for rapid bipedal running in several extinct dinosaur taxa. I analyzed models of nine theropod dinosaurs (including birds) covering over five orders of magnitude in size. My results uphold previous findings that large theropods such as Tyrannosaurus could not run very quickly, whereas smaller theropods (including some extinct birds) were adept runners. Furthermore, my results strengthen the contention that many nonavian theropods, especially larger individuals, used fairly upright limb orientations, which would have reduced required muscular force, and hence muscle mass. Additional sensitivity analysis of muscle fascicle lengths, moment arms, and limb orientation supports these conclusions and points out directions for future research on the musculoskeletal limits on running ability. Although ankle extensor muscle support is shown to have been important for all taxa, the ability of hip extensor muscles to support the body appears to be a crucial limit for running capacity in larger taxa. I discuss what speeds were possible for different theropod dinosaurs, and how running ability evolved in an inverse relationship to body size in archosaurs.
Ultra-sensitive Flow Injection Analysis (FIA) determination of calcium in ice cores at ppt level.
Traversi, R; Becagli, S; Castellano, E; Maggi, V; Morganti, A; Severi, M; Udisti, R
2007-07-02
A Flow Injection Analysis (FIA) spectrofluorimetric method for calcium determination in ice cores was optimised in order to achieve better analytical performance, making it suitable for reliable calcium measurements at the ppt level. The method optimised here is based on the formation of a fluorescent compound between Ca and Quin-2 in a buffered environment. A careful evaluation was carried out of the operative parameters (reagent concentration, buffer composition and concentration, pH), the influence of interfering species possibly present in real samples, and the potential favourable effect of surfactant addition. The detection limit obtained is around 15 ppt, one order of magnitude lower than the most sensitive flow-analysis method for Ca determination currently available in the literature, and reproducibility is better than 4% for Ca concentrations of 0.2 ppb. The method was validated through measurements performed in parallel with ion chromatography on 200 samples from an alpine ice core (Lys Glacier), revealing an excellent fit between the two chemical series. The calcium stratigraphy in the Lys ice core is discussed in terms of its seasonal pattern and the occurrence of Saharan dust events.
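Detection limits like the 15-ppt figure in this abstract are conventionally estimated as three times the standard deviation of the blank signal divided by the slope of the calibration line. The blank readings and calibration points below are invented for illustration; they are not the paper's data.

```python
import statistics

# Hypothetical replicate blank readings (fluorescence, arbitrary units).
blank_signals = [0.101, 0.098, 0.103, 0.100, 0.099, 0.102]

# Hypothetical calibration standards: concentration (ppb) vs. signal.
conc   = [0.0,   0.2,   0.4,   0.8]
signal = [0.100, 0.300, 0.500, 0.900]

# Least-squares slope of the calibration line (a.u. per ppb).
n = len(conc)
mean_c, mean_s = sum(conc) / n, sum(signal) / n
slope = (sum((c - mean_c) * (s - mean_s) for c, s in zip(conc, signal))
         / sum((c - mean_c) ** 2 for c in conc))

# LOD = 3 * sigma_blank / slope, converted from ppb to ppt.
lod_ppb = 3 * statistics.stdev(blank_signals) / slope
print(round(lod_ppb * 1000, 1), "ppt")  # 5.6 ppt for these made-up numbers
```

Lowering the blank noise or steepening the calibration slope (e.g. via the surfactant addition the abstract evaluates) both push this figure down.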
Breesch, H.; Janssens, A.
2010-08-15
Natural night ventilation is an interesting passive cooling method in moderate climates. Driven by wind- and stack-generated pressures, it cools down the exposed building structure at night, in which the heat of the previous day is accumulated. The performance of natural night ventilation depends strongly on the external weather conditions, and especially on the outdoor temperature. An increase in outdoor temperature has been observed over the last century, and the IPCC predicts an additional rise by the end of this century. A methodology is therefore needed to evaluate the reliable operation of the indoor climate of buildings under warmer and uncertain summer conditions. The uncertainty in the climate and in other design data can be very important in the decision process of a building project. The aim of this research is to develop a methodology to predict the performance of natural night ventilation using building energy simulation, taking into account the uncertainties in the input. The performance evaluation of natural night ventilation is based on uncertainty and sensitivity analysis. The results of the uncertainty analysis showed that thermal comfort in a single office cooled with single-sided night ventilation had the largest uncertainty. The uncertainties in thermal comfort for passive stack and cross ventilation were substantially smaller. However, since wind, the main driving force for cross ventilation, is highly variable, the cross-ventilation strategy required larger louvre areas than the stack-ventilation strategy to achieve a similar performance. The differences in uncertainty between the orientations were small. Sensitivity analysis was used to determine the most dominant set of input parameters causing the uncertainty in thermal comfort: the internal heat gains, the solar heat gain coefficient of the sunblinds, the internal convective heat transfer coefficient, the thermophysical properties related to thermal mass, and the set-point temperatures controlling the natural
Results of an integrated structure/control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
Additional Keplerian Signals in the HARPS data for Gliese 667C: Further Analysis
NASA Astrophysics Data System (ADS)
Gregory, Philip C.; Lawler, Samantha M.; Gladman, Brett
2014-01-01
A re-analysis of Gliese 667C HARPS precision radial velocity data was carried out with a Bayesian multi-planet Kepler periodogram (from 0 to 7 planets) based on a fusion Markov chain Monte Carlo algorithm. The most probable number of signals detected is six with a Bayesian false alarm probability of 0.012. The residuals were shown to be consistent with white noise. The six signals detected include two previously reported with periods of 7.198 (b) and 28.14 (c) days, plus additional periods of 30.82, 38.82, 53.22, and 91.3 days. The existence of these Keplerian-like signals suggests the possibility of additional planets in the habitable zone of Gl 667C, although some of the signals could be artifacts arising from the sampling or stellar surface activity. N-body orbital integrations are being undertaken to determine which of these signals are consistent with a stable planetary system. Preliminary results demonstrate that four of the signals, with periods of 7.2, 28.1, 38.8, & 91 d, are consistent with a stable 4 planet system on time scales of 10⁷ yr. The M sin i values are ~5.5, 4.4, 1.9, and 4.7 M⊕, respectively.
An analysis of candidates for addition to the Clean Air Act list of hazardous air pollutants.
Lunder, Sonya; Woodruff, Tracey J; Axelrad, Daniel A
2004-02-01
There are 188 air toxics listed as hazardous air pollutants (HAPs) in the Clean Air Act (CAA), based on their potential to adversely impact public health. This paper presents several analyses performed to screen potential candidates for addition to the HAPs list. We analyzed 1086 HAPs and potential HAPs, including chemicals regulated by the state of California or with emissions reported to the Toxics Release Inventory (TRI). HAPs and potential HAPs were ranked by their emissions to air, and by toxicity-weighted (tox-wtd) emissions for cancer and noncancer, using emissions information from the TRI and toxicity information from state and federal agencies. Separate consideration was given for persistent, bioaccumulative toxins (PBTs), reproductive or developmental toxins, and chemicals under evaluation for regulation as toxic air contaminants in California. Forty-four pollutants were identified as candidate HAPs based on three ranking analyses and whether they were a PBT or a reproductive or developmental toxin. Of these, nine qualified in two or three different rankings (ammonia [NH3], copper [Cu], Cu compounds, nitric acid [HNO3], N-methyl-2-pyrrolidone, sulfuric acid [H2SO4], vanadium [V] compounds, zinc [Zn], and Zn compounds). This analysis suggests further evaluation of several pollutants for possible addition to the CAA list of HAPs.
[Local sensitivity and its stationarity analysis for urban rainfall runoff modelling].
Lin, Jie; Huang, Jin-Liang; Du, Peng-Fei; Tu, Zhen-Shun; Li, Qing-Sheng
2010-09-01
Sensitivity analysis of urban runoff simulation is a crucial procedure for parameter identification and uncertainty analysis. A local sensitivity analysis using the Morris screening method was carried out for urban rainfall runoff modelling based on the Storm Water Management Model (SWMM). The results showed that Area, %Imperv and Dstore-Imperv are the most sensitive parameters for both total runoff volume and peak flow. For total runoff volume, the sensitivity indices of Area, %Imperv and Dstore-Imperv were 0.46-1.0, 0.61-1.0 and -0.050 to -5.9, respectively; for peak runoff they were 0.48-0.89, 0.59-0.83 and 0 to -9.6, respectively. The largest Morris sensitivity indices for all parameters, with regard to both total runoff volume and peak flow, appeared in the rainfall event with the least rainfall, while smaller indices occurred in the events with heavier rainfall. Furthermore, there was considerable variability in the sensitivity indices across rainfall events. The coefficients of variation of %Zero-Imperv were the largest among all parameters for total runoff volume and peak flow, namely 221.24% and 228.10%; in contrast, those of conductivity were the smallest, namely 0.
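The Morris elementary-effects procedure used above can be sketched on a toy model. The block below is a minimal numpy implementation of one-at-a-time Morris screening; the three-parameter runoff-like model and its ranges are purely hypothetical stand-ins for SWMM parameters such as Area, %Imperv and Dstore-Imperv, not the authors' setup.

```python
import numpy as np

def morris_elementary_effects(model, lo, hi, n_traj=20, delta=0.5, seed=0):
    """One-at-a-time Morris screening: mu* (mean |EE|) ranks parameter influence."""
    rng = np.random.default_rng(seed)
    k = len(lo)
    ee = [[] for _ in range(k)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, k)       # base point in the unit cube
        y0 = model(lo + x * (hi - lo))
        for i in rng.permutation(k):               # perturb one factor at a time
            x[i] += delta
            y1 = model(lo + x * (hi - lo))
            ee[i].append((y1 - y0) / delta)        # elementary effect of factor i
            y0 = y1
    mu_star = np.array([np.mean(np.abs(e)) for e in ee])   # overall sensitivity
    sigma = np.array([np.std(e) for e in ee])      # nonlinearity / interactions
    return mu_star, sigma

# hypothetical runoff-like model: y = "area" * "% imperv" - 0.05 * "dep. storage"
model = lambda p: p[0] * p[1] - 0.05 * p[2]
lo = np.array([0.5, 0.3, 0.0])
hi = np.array([2.0, 0.9, 3.0])
mu_star, sigma = morris_elementary_effects(model, lo, hi)  # mu_star[2] is smallest
```

With this additive toy model the third factor's elementary effect is constant, so its mu* is small and its sigma is near zero, mirroring how Morris screening separates weak from dominant parameters.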
Sensitive KIT D816V mutation analysis of blood as a diagnostic test in mastocytosis.
Kristensen, Thomas; Vestergaard, Hanne; Bindslev-Jensen, Carsten; Møller, Michael Boe; Broesby-Olsen, Sigurd
2014-05-01
The recent progress in sensitive KIT D816V mutation analysis suggests that mutation analysis of peripheral blood (PB) represents a promising diagnostic test in mastocytosis. However, there is a need for systematic assessment of the analytical sensitivity and specificity of the approach in order to establish its value in clinical use. We therefore evaluated sensitive KIT D816V mutation analysis of PB as a diagnostic test in an entire case series of adults with mastocytosis. We demonstrate for the first time that, using a sufficiently sensitive KIT D816V mutation analysis, it is possible to detect the mutation in PB in nearly all adult mastocytosis patients. The mutation was detected in PB in 78 of 83 patients with systemic mastocytosis (94%) and in 3 of 4 patients with cutaneous mastocytosis (75%). The test was 100% specific as determined by analysis of clinically relevant control patients, who all tested negative. Mutation analysis of PB was significantly more sensitive than serum tryptase >20 ng/mL. Of 27 patients with low tryptase, 26 tested mutation positive (96%). The test is furthermore readily available, and we consider these results to serve as a foundation of experimental evidence supporting the inclusion of the test in diagnostic algorithms and clinical practice in mastocytosis.
Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan
2009-02-01
Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.
Tsujita-Inoue, Kyoko; Hirota, Morihiko; Ashikaga, Takao; Atobe, Tomomi; Kouzuki, Hirokazu; Aiba, Setsuya
2014-06-01
The sensitizing potential of chemicals is usually identified and characterized using in vivo methods such as the murine local lymph node assay (LLNA). Due to regulatory constraints and ethical concerns, alternatives to animal testing are needed to predict skin sensitization potential of chemicals. For this purpose, combined evaluation using multiple in vitro and in silico parameters that reflect different aspects of the sensitization process seems promising. We previously reported that LLNA thresholds could be well predicted by using an artificial neural network (ANN) model, designated iSENS ver.1 (integrating in vitro sensitization tests version 1), to analyze data obtained from two in vitro tests: the human Cell Line Activation Test (h-CLAT) and the SH test. Here, we present a more advanced ANN model, iSENS ver.2, which additionally utilizes the results of antioxidant response element (ARE) assay and the octanol-water partition coefficient (LogP, reflecting lipid solubility and skin absorption). We found a good correlation between predicted LLNA thresholds calculated by iSENS ver.2 and reported values. The predictive performance of iSENS ver.2 was superior to that of iSENS ver.1. We conclude that ANN analysis of data from multiple in vitro assays is a useful approach for risk assessment of chemicals for skin sensitization.
ERIC Educational Resources Information Center
Hayton, James C.
2009-01-01
In the article "Exploring the Sensitivity of Horn's Parallel Analysis to the Distributional Form of Random Data," Dinno (this issue) provides strong evidence that the distribution of random data does not have a significant influence on the outcome of the analysis. Hayton appreciates the thorough approach to evaluating this assumption, and agrees…
NASA Astrophysics Data System (ADS)
Kim, Sungho; Kim, Heekang
2016-10-01
This paper presents a weathering sensitivity analysis method for the safety diagnosis of Seongsan Ilchulbong Peak using hyperspectral images. Remote sensing-based safety diagnosis is important for preventing accidents in famous mountains. A hyperspectral correlation-based method is proposed to evaluate the weathering sensitivity. The three issues are how to reduce the illumination effect, how to remove camera motion while acquiring images on a boat, and how to define the weathering sensitivity index. A novel minimum subtraction and maximum normalization (MSM-norm) method is proposed to solve the shadow and specular illumination problem. Geometrically distorted hyperspectral images are corrected by estimating the borderline of the mountain and sea surface. The final issue is solved by proposing a weathering sensitivity index (WS-Index) based on a spectral angle mapper. Real experiments on the Seongsan Ilchulbong Peak (UNESCO, World Natural Heritage) highlighted the feasibility of the proposed method in safety diagnosis by the weathering sensitivity index.
Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A
2014-02-01
We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD "ene" reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis.
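The stable-isotope-dilution quantification underlying the PTAD/LC-MS protocol reduces to a ratio of peak areas against the spiked deuterium-labelled internal standard. A minimal sketch (the function name and the numbers are illustrative, not from the paper):

```python
def quantify(area_analyte, area_istd, istd_amount_ng, response_factor=1.0):
    """Amount of analyte (ng) from LC-MS peak areas, using a deuterium-labelled
    internal standard of known spiked amount; response_factor corrects for any
    detector-response difference between analyte and standard."""
    return response_factor * istd_amount_ng * area_analyte / area_istd

# analyte peak twice the internal-standard peak, 50 ng of standard spiked in:
amount_ng = quantify(area_analyte=2.4e6, area_istd=1.2e6, istd_amount_ng=50.0)
```

Because the labelled standard experiences the same derivatization and matrix losses as the analyte, the ratio is robust to sample-to-sample processing variability, which is what makes the protocol reproducible across tissues and fluids.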
NASA Astrophysics Data System (ADS)
Schumacher, Florian; Friederich, Wolfgang
Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python; it is well documented and freely available under the terms of the GNU General Public License (http://www.rub.de/aski).
Ulrich, Andrea; Wichser, Adrian
2003-09-01
Fuel additives used in particle traps have to comply with environmental directives and should not support the formation of additional toxic substances. The emission of metal additives from diesel engines with downstream particle traps has been studied. Aspects of the optimisation of the sampling procedure, sample preparation and analysis are described. Exemplary results in the form of a mass balance calculation are presented. The results demonstrate the high retention rate of the studied filter system, but also the possible deposition of additive metals in the engine.
NASA Astrophysics Data System (ADS)
Hwang, Joonki; Park, Aaron; Chung, Jin Hyuk; Choi, Namhyun; Park, Jun-Qyu; Cho, Soo Gyeong; Baek, Sung-June; Choo, Jaebum
2013-06-01
Recently, the development of methods for the identification of explosive materials that are faster, more sensitive, easier to use, and more cost-effective has become a very important issue for homeland security and counter-terrorism applications. However, the limited applicability of several analytical methods, such as the inability to detect explosives in a sealed container, the limited portability of instruments, and false alarms due to an inherent lack of selectivity, has motivated increased interest in the application of Raman spectroscopy for the rapid detection and identification of explosive materials. Raman spectroscopy has received growing interest due to its stand-off capacity, which allows samples to be analyzed at a distance from the instrument. In addition, Raman spectroscopy has the capability to detect explosives in sealed containers such as glass or plastic bottles. We report a rapid and sensitive recognition technique for explosive compounds using Raman spectroscopy and principal component analysis (PCA). Seven hundred Raman spectra (50 measurements per sample) for 14 selected explosives were collected and pretreated with noise suppression and baseline elimination methods. PCA, a well-known multivariate statistical method, was applied for the evaluation, feature extraction, and identification of the measured spectra. Here, a broad wavenumber range (200-3500 cm⁻¹) of the collected spectra was used for the classification of the explosive samples into separate classes. It was found that three principal components achieved a 99.3% classification rate on the sample set. The results show that Raman spectroscopy in combination with PCA is well suited for the identification and differentiation of explosives in the field.
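The spectra-plus-PCA pipeline can be sketched with synthetic data. The block below is a numpy-only illustration, not the authors' code: mock "spectra" with one Gaussian band per class stand in for the measured Raman spectra, and a nearest-centroid rule in PC space stands in for the classification step.

```python
import numpy as np

rng = np.random.default_rng(1)

# mock "spectra": 3 compound classes, 50 noisy measurements each, 200 wavenumber bins
n_bins, centers = 200, [40, 100, 160]
spectra, labels = [], []
for cls, c in enumerate(centers):
    band = np.exp(-0.5 * ((np.arange(n_bins) - c) / 5.0) ** 2)     # one band per class
    for _ in range(50):
        spectra.append(band + 0.05 * rng.standard_normal(n_bins))  # measurement noise
        labels.append(cls)
X, y = np.array(spectra), np.array(labels)

# PCA via SVD of the mean-centred data matrix
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T              # project onto the first three PCs

# nearest-class-centroid classifier in PC space
centroids = np.array([scores[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == y).mean()
```

With well-separated bands the first few principal components capture almost all between-class variance, which is why only three components suffice here, echoing the 99.3% classification rate reported above.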
NASA Astrophysics Data System (ADS)
Siamphukdee, Kanjana; Collins, Frank; Zou, Roger
2013-06-01
Chloride-induced reinforcement corrosion is one of the major causes of premature deterioration in reinforced concrete (RC) structures. Given the high maintenance and replacement costs, accurate modeling of RC deterioration is indispensable for ensuring the optimal allocation of limited economic resources. Since corrosion rate is one of the major factors influencing the rate of deterioration, many predictive models exist. However, because the existing models use very different sets of input parameters, the choice of model for RC deterioration is made difficult. Although the factors affecting corrosion rate are frequently reported in the literature, there is no published quantitative study on the sensitivity of predicted corrosion rate to the various input parameters. This paper presents the results of the sensitivity analysis of the input parameters for nine selected corrosion rate prediction models. Three different methods of analysis are used to determine and compare the sensitivity of corrosion rate to various input parameters: (i) univariate regression analysis, (ii) multivariate regression analysis, and (iii) sensitivity index. The results from the analysis have quantitatively verified that the corrosion rate of steel reinforcement bars in RC structures is highly sensitive to corrosion duration time, concrete resistivity, and concrete chloride content. These important findings establish that future empirical models for predicting corrosion rate of RC should carefully consider and incorporate these input parameters.
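One of the three comparison methods named above, the sensitivity index, is simple enough to sketch directly: vary one input across its range while holding the others at nominal values, and normalize the resulting output swing. The corrosion-rate expression below is a hypothetical monotone stand-in, not one of the nine published models.

```python
import numpy as np

def sensitivity_index(model, nominal, i, lo, hi):
    """SI = (y_max - y_min) / y_max, varying input i over [lo, hi] with all
    other inputs held at their nominal values (assumes a monotone response)."""
    x_lo, x_hi = nominal.copy(), nominal.copy()
    x_lo[i], x_hi[i] = lo, hi
    y_min, y_max = sorted([model(x_lo), model(x_hi)])
    return (y_max - y_min) / y_max

# hypothetical corrosion-rate surrogate (illustrative form only):
# rate ~ chloride / resistivity * sqrt(duration)
model = lambda x: x[0] / x[1] * np.sqrt(x[2])
nominal = np.array([1.0, 100.0, 10.0])           # [chloride, resistivity, duration]
ranges = [(0.5, 2.0), (50.0, 500.0), (5.0, 20.0)]
si = [sensitivity_index(model, nominal, i, lo, hi)
      for i, (lo, hi) in enumerate(ranges)]       # resistivity scores highest here
```

An SI near 1 means the input can swing the output across nearly its whole range; ranking inputs by SI is the cheapest of the three methods because it needs only two model runs per parameter.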
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
Dynamical sensitivity analysis of tropical cyclone steering and genesis using an adjoint model
NASA Astrophysics Data System (ADS)
Hoover, Brett T.
The adjoint of a numerical weather prediction (NWP) model is a powerful tool for efficiently calculating the "sensitivity" of some function of the model forecast state with respect to small but otherwise arbitrary perturbations to the model state at earlier times. Physical interpretation of these sensitivity gradients for functions describing some phenomenon of dynamical interest allows the user to approach a variety of dynamical problems in atmospheric science from the perspective of the potential impact of small perturbations on the future development of that phenomenon; the integration of adjoint-derived sensitivity gradients as a dynamical tool for approaching these problems can be called dynamical sensitivity analysis. A methodology for dynamical sensitivity analysis is developed and applied to problems related to the steering and genesis of modeled tropical cyclones. Functions defining the steering and genesis of tropical cyclones are developed and tested, and sensitivity gradients of those functions with respect to model initial conditions are interpreted physically. Results indicate that regions of strong sensitivity tend to localize where small vorticity perturbations have the capacity to grow quickly and impact the future state of the model, such as regions of strong ascent and subsidence surrounding midlatitude troughs, or near zonal jets where upshear-tilted perturbations can grow barotropically. Consequences for dynamics and predictability of these events are discussed.
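The core identity behind adjoint sensitivity analysis, that a single backward integration yields the gradient of a forecast function with respect to every initial condition at once, can be illustrated on a toy linear model (the matrix, state and response function below are arbitrary, chosen only for the demonstration):

```python
import numpy as np

# toy linear "forecast model": x_{t+1} = A x_t; response J(x_T) = c . x_T
A = np.array([[0.9, 0.2, 0.0],
              [0.0, 0.8, 0.3],
              [0.1, 0.0, 0.7]])
c = np.array([1.0, 0.0, 0.0])       # J measures the first state component at t = T
T = 10

# adjoint (reverse) sweep: lam_t = A^T lam_{t+1}, starting from lam_T = dJ/dx_T = c;
# one backward pass yields dJ/dx_0 for every initial-condition component at once
lam = c.copy()
for _ in range(T):
    lam = A.T @ lam

# validate against forward finite differences (one forward run per component)
x0 = np.array([1.0, -0.5, 0.3])
def forward_J(x):
    for _ in range(T):
        x = A @ x
    return c @ x

eps = 1e-6
fd = np.array([(forward_J(x0 + eps * e) - forward_J(x0 - eps * e)) / (2 * eps)
               for e in np.eye(3)])   # should match lam
```

For an NWP model with millions of state variables the same contrast holds: finite differences cost one forward run per perturbed variable, while the adjoint delivers the full sensitivity-gradient field in a single backward integration.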
NASA Astrophysics Data System (ADS)
Li, Hui-Chuan
2014-10-01
This study examines students' procedural and conceptual achievement in fraction addition in England and Taiwan. A total of 1209 participants (561 British students and 648 Taiwanese students) aged 12 and 13 were recruited from England and Taiwan to take part in the study. A quantitative design based on a self-designed written test was adopted. The test has two major parts: a concept part and a skill part. The former is concerned with students' conceptual knowledge of fraction addition, and the latter assesses students' procedural competence when adding fractions. There were statistically significant differences in both the concept and skill parts between the British and Taiwanese groups, with the latter scoring higher. The analysis of the students' responses to the skill section indicates that the superiority of Taiwanese students' procedural achievement over that of their British peers arises because most of the former are able to apply algorithms to adding fractions far more successfully than the latter. Earlier, Hart [1] reported that around 30% of the British students in their study used an erroneous strategy (adding tops and bottoms, for example, 2/3 + 1/7 = 3/10) when adding fractions. This study finds that nearly the same percentage of the British group continued to use this erroneous strategy, as Hart found in 1981. The study also provides evidence that students' understanding of fractions is confused and incomplete, even among those who can successfully perform the operations. More research is needed to help students make sense of the operations and eventually attain computational competence with meaningful grounding in the domain of fractions.
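The erroneous strategy quoted above ("adding tops and bottoms") is easy to contrast with the correct common-denominator algorithm:

```python
from fractions import Fraction

# correct algorithm: rewrite over a common denominator, then add numerators
correct = Fraction(2, 3) + Fraction(1, 7)    # 14/21 + 3/21 = 17/21

# the erroneous strategy reported by Hart: add tops and bottoms separately
erroneous = Fraction(2 + 1, 3 + 7)           # 3/10, as in the abstract
```

The two answers do not even agree in magnitude (17/21 > 1/2, while 3/10 < 1/2), which is why the error signals a conceptual gap rather than a slip in arithmetic.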
Analysis of Time to Event Outcomes in Randomized Controlled Trials by Generalized Additive Models
Argyropoulos, Christos; Unruh, Mark L.
2015-01-01
Background: Randomized controlled trials almost invariably utilize the hazard ratio (HR) calculated with a Cox proportional hazards model as a treatment efficacy measure. Despite the widespread adoption of HRs, these provide a limited understanding of the treatment effect and may even provide a biased estimate when the assumption of proportional hazards in the Cox model is not verified by the trial data. Additional treatment effect measures on the survival probability or the time scale may be used to supplement HRs, but a framework for the simultaneous generation of these measures is lacking. Methods: By splitting follow-up time at the nodes of a Gauss-Lobatto numerical quadrature rule, techniques for Poisson Generalized Additive Models (PGAM) can be adopted for flexible hazard modeling. Straightforward simulation post-estimation transforms PGAM estimates for the log hazard into estimates of the survival function. These in turn were used to calculate relative and absolute risks, or even differences in restricted mean survival time, between treatment arms. We illustrate our approach with extensive simulations and in two trials: IPASS (in which the proportionality of hazards was violated) and HEMO, a long-duration study conducted under evolving standards of care on a heterogeneous patient population. Findings: PGAMs can generate estimates of the survival function and the hazard ratio that are essentially identical to those obtained by Kaplan-Meier curve analysis and the Cox model. PGAMs can simultaneously provide multiple measures of treatment efficacy after a single data pass. Furthermore, they supported not only unadjusted (overall treatment effect) but also subgroup and adjusted analyses, while incorporating multiple time scales and accounting for non-proportional hazards in survival data. Conclusions: By augmenting the HR conventionally reported, PGAMs have the potential to support the inferential goals of multiple stakeholders involved in the evaluation and appraisal of clinical trial
Guo, Mei; Rupe, Mary A; Yang, Xiaofeng; Crasta, Oswald; Zinselmeier, Christopher; Smith, Oscar S; Bowen, Ben
2006-09-01
Heterosis, or hybrid vigor, has been widely exploited in plant breeding for many decades, but the molecular mechanisms underlying the phenomenon remain unknown. In this study, we applied genome-wide transcript profiling to gain a global picture of the ways in which a large proportion of genes are expressed in the immature ear tissues of a series of 16 maize hybrids that vary in their degree of heterosis. Key observations include: (1) the proportion of allelic additively expressed genes is positively associated with hybrid yield and heterosis; (2) the proportion of genes that exhibit a bias towards the expression level of the paternal parent is negatively correlated with hybrid yield and heterosis; and (3) there is no correlation between the over- or under-expression of specific genes in maize hybrids with either yield or heterosis. The relationship of the expression patterns with hybrid performance is substantiated by analysis of a genetically improved modern hybrid (Pioneer hybrid 3394) versus a less improved older hybrid (Pioneer hybrid 3306) grown at different levels of plant density stress. The proportion of allelic additively expressed genes is positively associated with the modern high yielding hybrid, heterosis and high yielding environments, whereas the converse is true for the paternally biased gene expression. The dynamic changes of gene expression in hybrids responding to genotype and environment may result from differential regulation of the two parental alleles. Our findings suggest that differential allele regulation may play an important role in hybrid yield or heterosis, and provide a new insight to the molecular understanding of the underlying mechanisms of heterosis.
Comparative proteomic analysis of drug sodium iron chlorophyllin addition to Hep 3B cell line.
Zhang, Jun; Wang, Wenhai; Yang, Fengying; Zhou, Xinwen; Jin, Hong; Yang, Peng-yuan
2012-09-21
The human hepatoma Hep 3B cell line was chosen as an experimental model for in vitro drug screening. The drugs included chlorophyllin and its derivatives, such as fluo-chlorophyllin, sodium copper chlorophyllin, and sodium iron chlorophyllin. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyl-tetrazolium bromide (MTT) method was used to obtain the primary screening results, which showed that sodium iron chlorophyllin had the best LC50 value. Proteomic analysis was then performed to further investigate the effect of sodium iron chlorophyllin addition on the Hep 3B cell line. The proteins identified from a total protein extract of Hep 3B before and after drug addition were compared by two-dimensional gel electrophoresis. Thirty-two differentially expressed proteins (at least three-fold change) were then successfully identified by MALDI-TOF-TOF-MS, 29 of which were unique proteins. These proteins include proliferating cell nuclear antigen (PCNA), T-complex protein, heterogeneous nuclear protein, nucleophosmin, heat shock protein A5 (HspA5) and peroxiredoxin. HspA5 is involved in protecting cultured cancer cells against stress-induced apoptosis through various mechanisms. Peroxiredoxin has an antioxidant function, protects other proteins from oxidation, and is related to cell proliferation and signal transduction. It has a close relationship with cancer and may eventually serve as a disease biomarker. These findings might help to develop a novel treatment method for carcinoma.
Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling
Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis
2001-10-01
A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.
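Global sensitivity indices of the kind computed from the surrogate models above can be estimated with the pick-freeze (Saltelli) Monte Carlo scheme. The sketch below uses a cheap linear function as a stand-in for the HES surrogate; with a real surrogate the same estimator applies, only the model call changes.

```python
import numpy as np

def first_order_sobol(model, k, n=20000, seed=0):
    """Pick-freeze (Saltelli) Monte Carlo estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(k)
    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only factor i
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

# hypothetical HES response: output variability dominated by the first input
model = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.25 * X[:, 2]
S = first_order_sobol(model, k=3)            # S[0] should be close to 16/17.06
```

Each index S_i is the fraction of output variance attributable to input i alone; for this additive model the indices sum to roughly one, and the estimator's Monte Carlo noise shrinks as the sample size n grows.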
Using sensitivity analysis to validate the predictions of a biomechanical model of bite forces.
Sellers, William Irvin; Crompton, Robin Huw
2004-02-01
Biomechanical modelling has become a very popular technique for investigating functional anatomy. Modern computer simulation packages make producing such models straightforward, and it is tempting to take the results produced at face value. However, the predictions of a simulation are only valid when both the model and the input parameters are accurate, and little work has been done to verify this. In this paper a model of the human jaw is produced and a sensitivity analysis is performed to validate the results. The model is built using the ADAMS multibody dynamic simulation package, incorporating the major occlusive muscles of mastication (temporalis, masseter, medial and lateral pterygoids) as well as a highly mobile temporomandibular joint. This model is used to predict the peak three-dimensional bite forces at each tooth location, the joint reaction forces, and the contributions made by each individual muscle. The results for occlusive bite force (1080 N at M1) match previously published values, suggesting the model is valid. The sensitivity analysis was performed by sampling the input parameters from likely ranges and running the simulation many times, rather than using single best-estimate values. This analysis shows that the magnitudes of the peak retractive forces on the lower teeth were highly sensitive to the chosen origin (and hence fibre direction) of the temporalis and masseter muscles, as well as to the laxity of the TMJ. Peak protrusive force was also sensitive to the masseter origin. These results show that the model is insufficiently complex to estimate these values reliably, although the much lower sensitivity values obtained for the bite forces in the other directions, and for the joint reaction forces, suggest that those predictions are sound. Without the sensitivity analysis it would not have been possible to identify these weaknesses, which strongly supports the use of sensitivity analysis as a validation technique for biomechanical modelling.
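The sampling-based validation strategy, drawing each input from a plausible range and re-running the model many times, can be sketched as follows. The bite-force expression and parameter ranges below are invented for illustration; they are not the ADAMS jaw model from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# invented stand-in for a jaw model: bite force from muscle-origin angle,
# muscle force and joint laxity (form and ranges are illustrative only)
def bite_force(angle, force, laxity):
    return force * np.cos(angle) * (1.0 - 0.4 * laxity)

n = 5000
angle = rng.uniform(0.1, 0.6, n)       # rad: plausible origin-direction spread
force = rng.uniform(500.0, 900.0, n)   # N: peak muscle force range
laxity = rng.uniform(0.0, 0.3, n)      # dimensionless TMJ laxity

out = bite_force(angle, force, laxity)

# rank inputs by |correlation| with the output across the sampled runs
inputs = {"angle": angle, "force": force, "laxity": laxity}
corr = {name: abs(np.corrcoef(v, out)[0, 1]) for name, v in inputs.items()}
most_sensitive = max(corr, key=corr.get)   # which parameter drives the spread
```

A prediction whose spread is dominated by a poorly known input (here, by construction, the muscle force) is exactly the kind of unreliable quantity the paper's sensitivity analysis flags, whereas outputs weakly correlated with all uncertain inputs can be trusted.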
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given to demonstrate the applicability of the proposed methods.
Sensitivity and uncertainty analysis of reactivities for UO2 and MOX fueled PWR cells
Foad, Basma; Takeda, Toshikazu
2015-12-31
The purpose of this paper is to apply our improved method for calculating sensitivities and uncertainties of reactivity responses for UO2 and MOX fueled pressurized water reactor cells. The improved method has been used to calculate sensitivity coefficients relative to infinite dilution cross-sections, where the self-shielding effect is taken into account. Two types of reactivities are considered: Doppler reactivity and coolant void reactivity. For each type of reactivity, the sensitivities are calculated for small and large perturbations. The results have demonstrated that the reactivity responses have larger relative uncertainty than eigenvalue responses. In addition, the uncertainty of coolant void reactivity is much greater than that of Doppler reactivity, especially for large perturbations. The sensitivity coefficients and uncertainties of both reactivities were verified by comparison with SCALE code results using the ENDF/B-VII library, and good agreement was found.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
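As a sketch of the scheme described above, consider a scalar analogue of A(x)x = c with a hypothetical state-dependent coefficient A(x) = 1 + x**2 (not the report's finite element matrices); Newton-Raphson solves the step equation, and direct differentiation yields the sensitivity dx/dc:

```python
def solve_axc(c, x0=1.0, tol=1e-12, maxit=50):
    # Newton-Raphson on f(x) = A(x)*x - c = x + x**3 - c = 0.
    x = x0
    for _ in range(maxit):
        f = x + x**3 - c
        df = 1.0 + 3.0 * x**2          # analytic Jacobian of f
        step = f / df
        x -= step
        if abs(step) < tol:
            break
    return x

x = solve_axc(10.0)                     # A(2) = 5 and 5 * 2 = 10, so x = 2
# Direct differentiation: differentiate f(x(c)) = 0 with respect to c
# to get dx/dc = 1 / f'(x) at the converged solution.
dx_dc = 1.0 / (1.0 + 3.0 * x**2)
```

The sensitivity reuses the Jacobian already formed for the Newton solve, which is why the coding effort on top of the analysis code is minimal.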
Comparison of Applying Four Reduced Order Models to a Global Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Zhang, Y.; Oladyshkin, S.; Liu, Y.; Pau, G. S. H.
2014-12-01
This study focuses on the comparison of applying four reduced order models (ROMs) to global sensitivity analysis (GSA). ROMs are one way to improve computational efficiency in many-query applications such as optimization, uncertainty quantification, sensitivity analysis, and inverse modeling, where the computational demand can become large. The four ROM methods are: arbitrary Polynomial Chaos (aPC), Gaussian process regression (GPR), cut high dimensional model representation (HDMR), and random sample HDMR. The discussion is mainly based on a global sensitivity analysis performed for a hypothetical large-scale CO2 storage project. Pros and cons of each method are discussed, and suggestions are made on how each method should be applied individually or in combination.
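The variance-based indices that such a GSA targets can be illustrated without any ROM machinery. The sketch below estimates a first-order Sobol index by binning, a crude cousin of the cut-HDMR decomposition; the two-input model is a hypothetical stand-in for an expensive simulator:

```python
import random
import statistics

def model(x1, x2):
    # Hypothetical stand-in for an expensive simulator response
    # (e.g., a CO2-storage quantity of interest).
    return x1 ** 2 + 0.1 * x2

def first_order_index(xs, ys, bins=20):
    # Estimate the first-order Sobol index Var(E[Y|X]) / Var(Y) by binning X
    # and comparing the variance of the bin-conditional means to Var(Y).
    lo, hi = min(xs), max(xs)
    width = (hi - lo) / bins or 1.0
    buckets = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / width), bins - 1)
        buckets[i].append(y)
    cond_means, weights = [], []
    for b in buckets:
        if b:
            cond_means.append(statistics.fmean(b))
            weights.append(len(b))
    total = sum(weights)
    grand = sum(m * w for m, w in zip(cond_means, weights)) / total
    var_cond = sum(w * (m - grand) ** 2 for m, w in zip(cond_means, weights)) / total
    return var_cond / statistics.pvariance(ys)

rng = random.Random(1)
x1s = [rng.uniform(-1, 1) for _ in range(20000)]
x2s = [rng.uniform(-1, 1) for _ in range(20000)]
ys = [model(a, b) for a, b in zip(x1s, x2s)]
s1 = first_order_index(x1s, ys)   # x1 enters quadratically and dominates
s2 = first_order_index(x2s, ys)   # x2 has only a small linear effect
```

A ROM earns its keep by replacing `model` with a cheap surrogate so that the tens of thousands of evaluations above become affordable.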
Design sensitivity analysis and optimization tool (DSO) for sizing design applications
NASA Technical Reports Server (NTRS)
Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa
1992-01-01
The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.
Loophole-free Bell test using electron spins in diamond: second experiment and additional analysis
Hensen, B.; Kalb, N.; Blok, M. S.; Dréau, A. E.; Reiserer, A.; Vermeulen, R. F. L.; Schouten, R. N.; Markham, M.; Twitchen, D. J.; Goodenough, K.; Elkouss, D.; Wehner, S.; Taminiau, T. H.; Hanson, R.
2016-01-01
The recently reported violation of a Bell inequality using entangled electronic spins in diamonds (Hensen et al., Nature 526, 682–686) provided the first loophole-free evidence against local-realist theories of nature. Here we report on data from a second Bell experiment using the same experimental setup with minor modifications. We find a violation of the CHSH-Bell inequality of 2.35 ± 0.18, in agreement with the first run, yielding an overall value of S = 2.38 ± 0.14. We calculate the resulting P-values of the second experiment and of the combined Bell tests. We provide an additional analysis of the distribution of settings choices recorded during the two tests, finding that the observed distributions are consistent with uniform settings for both tests. Finally, we analytically study the effect of particular models of random number generator (RNG) imperfection on our hypothesis test. We find that the winning probability per trial in the CHSH game can be bounded knowing only the mean of the RNG bias. This implies that our experimental result is robust for any model underlying the estimated average RNG bias, for random bits produced up to 690 ns too early by the random number generator. PMID:27509823
NASA Astrophysics Data System (ADS)
Chan, Kwai S.
2015-12-01
Rectangular plates of Ti-6Al-4V with extra low interstitial (ELI) were fabricated by layer-by-layer deposition techniques that included electron beam melting (EBM) and laser beam melting (LBM). The surface conditions of these plates were characterized using x-ray micro-computed tomography. The depth and radius of surface notch-like features on the LBM and EBM plates were measured from sectional images of individual virtual slices of the rectangular plates. The stress concentration factors of individual surface notches were computed and analyzed statistically to determine the appropriate distributions for the notch depth, notch radius, and stress concentration factor. These results were correlated with the fatigue life of the Ti-6Al-4V ELI alloys from an earlier investigation. A surface notch analysis was performed to assess the debit in the fatigue strength due to the surface notches. The assessment revealed that the fatigue lives of the additively manufactured plates with rough surface topographies and notch-like features are dominated by the fatigue crack growth of large cracks for both the LBM and EBM materials. The fatigue strength reduction due to the surface notches can be as large as 60%-75%. It is concluded that for better fatigue performance, the surface notches on EBM and LBM materials need to be removed by machining and the surface roughness be improved to a surface finish of about 1 μm.
Huo, Jinxing; Dérand, Per; Rännar, Lars-Erik; Hirsch, Jan-Michaél; Gamstedt, E Kristofer
2015-09-01
In order to reconstruct a bone defect in the mandible of a patient, a porous scaffold attached to a plate, both in a titanium alloy, was designed and manufactured using additive manufacturing. Regrettably, the implant fractured in vivo several months after surgery. The aim of this study was to investigate the failure of the implant and show a way of predicting the mechanical properties of the implant before surgery. All computed tomography data of the patient were preprocessed with the metal deletion technique to remove metallic artefacts before mandible geometry reconstruction. The three-dimensional geometry of the patient's mandible was also reconstructed, and the implant was fixed to the bone model with screws in the Mimics medical imaging software. A finite element model was established from the assembly of the mandible and the implant to study stresses developed during mastication. The stress distribution in the load-bearing plate was computed, and the location of the main stress concentration in the plate was determined. Comparison between the fracture region and the location of the stress concentration shows that finite element analysis could serve as a tool for optimizing the design of mandible implants.
Wan, Debin; Yang, Jun; Barnych, Bogdan; Hwang, Sung Hee; Lee, Kin Sing Stephen; Cui, Yongliang; Niu, Jun; Watsky, Mitchell A; Hammock, Bruce D
2017-04-01
There is an increased demand for comprehensive analysis of vitamin D metabolites. This is a major challenge, especially for 1α,25-dihydroxyvitamin D [1α,25(OH)2VitD], because it is biologically active at picomolar concentrations. 4-Phenyl-1,2,4-triazoline-3,5-dione (PTAD) was a revolutionary reagent in dramatically increasing sensitivity for all diene metabolites and allowing the routine analysis of the bioactive, but minor, vitamin D metabolites. A second generation of reagents used large fixed-charge groups that increased sensitivity at the cost of a deterioration in chromatographic separation of the vitamin D derivatives. This precludes a survey of numerous vitamin D metabolites without redesigning the chromatographic system used. 2-Nitrosopyridine (PyrNO) demonstrates that one can improve ionization and gain higher sensitivity over PTAD. The resulting vitamin D derivatives facilitate high-resolution chromatographic separation of the major metabolites. Additionally, a liquid-liquid extraction followed by solid-phase extraction (LLE-SPE) was developed to selectively extract 1α,25(OH)2VitD while reducing ion suppression 2- to 4-fold compared with SPE alone. LLE-SPE followed by PyrNO derivatization and LC/MS/MS analysis is a promising new method for quantifying vitamin D metabolites in a smaller sample volume (100 µL of serum) than previously reported methods. The PyrNO derivatization method is based on the Diels-Alder reaction and is thus generally applicable to a variety of diene analytes.
Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J
2015-05-15
Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e. Microtox®) or low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. On the other hand, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivities. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay, 2.9, 1.0, 0.7 and 18.3 mg/L for copper, zinc, acetic acid and 2-phenylethanol respectively, were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis.
Liang, Yan; Guan, Tianye; Zhou, Yuanyuan; Liu, Yanna; Xing, Lu; Zheng, Xiao; Dai, Chen; Du, Ping; Rao, Tai; Zhou, Lijun; Yu, Xiaoyi; Hao, Kun; Xie, Lin; Wang, Guangji
2013-07-05
This study systematically investigated the effect of mobile phase additives, including ammonia water, formic acid, acetic acid, ammonium chloride and water (as a control), on qualitative and quantitative analysis of fifteen representative ginsenosides based on liquid chromatography hybrid quadrupole-time of flight mass spectrometry (LC-Q-TOF/MS). To evaluate the influence of mobile phase additives on qualitative performance, the quality of the negative-mode MS/MS spectra of ginsenosides produced by online LC-Q-TOF/MS analyses, particularly the numbers and intensities of fragment ions, was compared under different adduct ion states and found to be strongly affected by the mobile phase additives. When 0.02% acetic acid was added to the mobile phase, the deprotonated ginsenoside ions produced the most abundant product ions, while almost no product ion was observed for the chlorinated ginsenoside ions when 0.1 mM ammonium chloride was used as the mobile phase additive. On the other hand, sensitivity, linear range and precision were adopted to investigate the quantitative performance affected by different mobile phase additives. Validation results of the LC-Q-TOF/MS-based quantitative performance for ginsenosides showed that ammonium chloride not only provided the highest sensitivity for all the target analytes, but also dramatically improved the linear ranges and the intra-day and inter-day precisions compared to the results obtained with other mobile phase additives. Importantly, the validated method, using 0.1 mM ammonium chloride as the mobile phase additive, was successfully applied to the quantitative analysis of ginsenosides in rat plasma after intragastric administration of Ginsenoside Extract at 200 mg/kg. In conclusion, 0.02% acetic acid was deemed to be the most suitable mobile phase additive for qualitative analysis of ginsenosides, and 0.1 mM ammonium chloride in the mobile phase led to the best quantitative performance. Our results reveal that the choice of mobile phase additive strongly affects both the qualitative and the quantitative LC-Q-TOF/MS analysis of ginsenosides.
Multiobjective sensitivity analysis and optimization of a distributed hydrologic model MOBIDIC
NASA Astrophysics Data System (ADS)
Yang, J.; Castelli, F.; Chen, Y.
2014-03-01
Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems having multiple, often conflicting objectives, which arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the distributed hydrologic model MOBIDIC, combining two sensitivity analysis techniques (the Morris method and the State Dependent Parameter method) with a multiobjective optimization (MOO) approach, ϵ-NSGAII. This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., standardized root mean square error of logarithmic transformed discharge, water balance index, and mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single-objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show: (1) the two sensitivity analysis techniques are effective and efficient in determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization; (2) both MOO and SOO lead to acceptable simulations, e.g., for MOO, the average Nash-Sutcliffe is 0.75 in the calibration period and 0.70 in the validation period; (3) evaporation and surface runoff show similar importance to the watershed water balance, while the contribution of baseflow can be ignored; (4) compared to SOO, which was dependent on the initial starting location, MOO provides more insight into parameter sensitivity and the conflicting characteristics of these objective functions.
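The Morris screening used above can be sketched as an elementary-effects loop; the three-input response function is a hypothetical stand-in for MOBIDIC, with input 0 dominant and input 2 inert:

```python
import random

def morris_screening(model, k, r=30, levels=4, seed=0):
    # Elementary-effects (Morris) screening sketch: for r random base points
    # in the unit hypercube, perturb one input at a time by delta and record
    # |dY/dX_i|; the mean over repetitions (mu*) ranks input importance.
    rng = random.Random(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = [[] for _ in range(k)]
    for _ in range(r):
        # Base points on the lower half of the level grid so x + delta <= 1.
        x = [rng.choice([j / (levels - 1) for j in range(levels // 2)])
             for _ in range(k)]
        y0 = model(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta
            effects[i].append(abs((model(xp) - y0) / delta))
    return [sum(e) / len(e) for e in effects]

def response(x):
    # Hypothetical stand-in for a hydrologic response: input 0 dominates,
    # input 1 is mildly nonlinear, input 2 is inert.
    return 5.0 * x[0] + x[1] ** 2 + 0.0 * x[2]

mu_star = morris_screening(response, k=3)
```

Inputs with mu* near zero (like input 2 here) correspond to the insensitive parameters that the study excluded before running the more expensive MOO.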
Sensitivity analysis of DSMC parameters for an 11-species air hypersonic flow
NASA Astrophysics Data System (ADS)
Higdon, Kyle J.; Goldstein, David B.; Varghese, Philip L.
2016-11-01
This research investigates the influence of input parameters in the direct simulation Monte Carlo (DSMC) method for the simulation of a hypersonic flow scenario. Simulations are performed using the Computation of Hypersonic Ionizing Particles in Shocks (CHIPS) code to reproduce NASA Ames Electric Arc Shock Tube (EAST) experimental results for a 10.26 km/s, 0.2 Torr scenario. Since the chosen nominal simulation involves an energetic flow, an electronic excitation model is introduced into CHIPS to complement the pre-existing 11-species air models. A global Monte Carlo sensitivity analysis was completed for this chosen scenario and three quantities of interest (QoIs) were investigated: translational temperature, electronic temperature, and electron number density. The electron impact ionization reaction, N + e- ⇌ N+ + e- + e-, was determined to have the greatest effect on all three QoIs as it defines the electron cascade that occurs post-shock. In addition, molecular nitrogen dissociation, associative ionization, and the N + NO+ ⇌ N+ + NO charge exchange reaction were all found to be important for these QoIs.
Strong, Mark; Oakley, Jeremy E.; Brennan, Alan
2013-01-01
The partial expected value of perfect information (EVPI) quantifies the expected benefit of learning the values of uncertain parameters in a decision model. Partial EVPI is commonly estimated via a 2-level Monte Carlo procedure in which parameters of interest are sampled in an outer loop, and then conditional on these, the remaining parameters are sampled in an inner loop. This is computationally demanding and may be difficult if correlation between input parameters results in conditional distributions that are hard to sample from. We describe a novel nonparametric regression-based method for estimating partial EVPI that requires only the probabilistic sensitivity analysis sample (i.e., the set of samples drawn from the joint distribution of the parameters and the corresponding net benefits). The method is applicable in a model of any complexity and with any specification of input parameter distribution. We describe the implementation of the method via 2 nonparametric regression modeling approaches, the Generalized Additive Model and the Gaussian process. We demonstrate in 2 case studies the superior efficiency of the regression method over the 2-level Monte Carlo method. R code is made available to implement the method. PMID:24246566
Bi-directional exchange of ammonia in a pine forest ecosystem - a model sensitivity analysis
NASA Astrophysics Data System (ADS)
Moravek, Alexander; Hrdina, Amy; Murphy, Jennifer
2016-04-01
Ammonia (NH3) is a key component in the global nitrogen cycle and of great importance for atmospheric chemistry, neutralizing atmospheric acids and leading to the formation of aerosol particles. For understanding the role of NH3 in both natural and anthropogenically influenced environments, the knowledge of processes regulating its exchange between ecosystems and the atmosphere is essential. A two-layer canopy compensation point model is used to evaluate the NH3 exchange in a pine forest in the Colorado Rocky Mountains. The net flux comprises the NH3 exchange of leaf stomata, its deposition to leaf cuticles and exchange with the forest ground. As key parameters the model uses in-canopy NH3 mixing ratios as well as leaf and soil emission potentials measured at the site in summer 2015. A sensitivity analysis is performed to evaluate the major exchange pathways as well as the model's constraints. In addition, the NH3 exchange is examined for an extended range of environmental conditions, such as droughts or varying concentrations of atmospheric pollutants, in order to investigate their influence on the overall net exchange.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1992-01-01
Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
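The incremental (delta) form can be shown on a tiny linear system. Here a diagonal (Jacobi) operator stands in for the report's spatially-split approximate factorization, and the matrix and right-hand side are arbitrary illustrative values:

```python
def incremental_solve(A, c, apply_m_inv, x0, iters=100):
    # Incremental (delta/correction) form: instead of solving A x = c
    # directly, repeatedly solve the correction equation M dx = c - A x
    # with an approximate, cheap-to-invert operator M, then update x += dx.
    # The residual is always formed with the exact A, so the converged x
    # satisfies the original system regardless of how crude M is.
    x = list(x0)
    n = len(x)
    for _ in range(iters):
        r = [c[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        dx = apply_m_inv(r)
        x = [x[i] + dx[i] for i in range(n)]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]
c = [1.0, 2.0]
jacobi = lambda r: [r[0] / 4.0, r[1] / 3.0]   # M = diag(A), a stand-in preconditioner
x = incremental_solve(A, c, jacobi, [0.0, 0.0])
```

This is the same reason the incremental form helps with ill-conditioning: the iterative operator only needs to approximate A, while accuracy is governed by the exact residual.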
Highly sensitive and ultrafast read mapping for RNA-seq analysis.
Medina, I; Tárraga, J; Martínez, H; Barrachina, S; Castillo, M I; Paschall, J; Salavert-Torres, J; Blanquer-Espert, I; Hernández-García, V; Quintana-Ortí, E S; Dopazo, J
2016-04-01
As sequencing technologies progress, the amount of data produced grows exponentially, shifting the bottleneck of discovery towards the data analysis phase. In particular, currently available mapping solutions for RNA-seq leave room for improvement in terms of sensitivity and performance, hindering an efficient analysis of transcriptomes by massive sequencing. Here, we present an innovative approach that combines re-engineering, optimization and parallelization. This solution results in a significant increase in mapping sensitivity over a wide range of read lengths and substantially shorter runtimes when compared with currently available RNA-seq mapping methods.
Lee, Haw-Long; Chang, Win-Jin
2016-01-01
The modified couple stress theory is adopted to study the sensitivity of a rectangular atomic force microscope (AFM) cantilever immersed in acetone, water, carbon tetrachloride (CCl4), and 1-butanol. The theory contains a material length scale parameter and considers the size effect in the analysis. However, this parameter is difficult to obtain via experimental measurements. In this study, a conjugate gradient method for the parameter estimation of the frequency equation is presented. The optimal method provides a quantitative approach for estimating the material length scale parameter based on the modified couple stress theory. The results show that the material length scale parameter of the AFM cantilever immersed in acetone, CCl4, water, and 1-butanol is 0, 25, 116.3, and 471 nm, respectively. In addition, the vibration sensitivities of the AFM cantilever immersed in these liquids are investigated. The results are useful for the design of AFM cantilevers immersed in liquids.
NASA Astrophysics Data System (ADS)
Bastidas, L. A.; Hogue, T. S.; Sorooshian, S.; Gupta, H. V.; Shuttleworth, W. J.
2006-10-01
A multicriteria algorithm, the MultiObjective Generalized Sensitivity Analysis (MOGSA), was used to investigate the parameter sensitivity of five different land surface models with increasing levels of complexity in the physical representation of the vegetation (BUCKET, CHASM, BATS 1, Noah, and BATS 2) at five different sites representing crop land/pasture, grassland, rain forest, cropland, and semidesert areas. The methodology allows for the inclusion of parameter interaction and does not require assumptions of independence between parameters, while at the same time allowing for the ranking of several single-criterion and a global multicriteria sensitivity indices. The analysis required on the order of 50 thousand model runs. The results confirm that parameters with similar "physical meaning" across different model structures behave in different ways depending on the model and the locations. It is also shown that after a certain level an increase in model structure complexity does not necessarily lead to better parameter identifiability, i.e., higher sensitivity, and that a certain level of overparameterization is observed. For the case of the BATS 1 and BATS 2 models, with essentially the same model structure but a more sophisticated vegetation model, paradoxically, the effect on parameter sensitivity is mainly reflected in the sensitivity of the soil-related parameters.
Development of a sensitivity analysis technique for multiloop flight control systems
NASA Technical Reports Server (NTRS)
Vaillard, A. H.; Paduano, J.; Downing, D. R.
1985-01-01
This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system, with variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.
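The singular-value criterion reduces, at each frequency, to the singular values of a small return-difference matrix. For a 2x2 case they are available in closed form; the matrix entries below are illustrative, not values from the report's yaw/roll damper model:

```python
import math

def singular_values_2x2(a, b, c, d):
    # Singular values of [[a, b], [c, d]]: square roots of the eigenvalues
    # of A^T A, computed in closed form from trace(A^T A) and det(A)^2.
    s = a * a + b * b + c * c + d * d      # trace of A^T A
    det = a * d - b * c
    disc = math.sqrt(max(s * s - 4.0 * det * det, 0.0))
    return math.sqrt((s + disc) / 2.0), math.sqrt((s - disc) / 2.0)

# Return-difference matrix I + L evaluated at one frequency for a
# hypothetical two-loop system; the smallest singular value is the
# multiloop relative-stability margin at that frequency.
smax, smin = singular_values_2x2(1.8, 0.2, -0.1, 1.5)
```

Tracking smin across frequency, together with its gradient with respect to uncertain system elements, identifies which elements most erode relative stability, which is the core of the technique described above.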
Sensitive quantitative analysis of murine LINE1 DNA methylation using high resolution melt analysis.
Newman, Michelle; Blyth, Benjamin J; Hussey, Damian J; Jardine, Daniel; Sykes, Pamela J; Ormsby, Rebecca J
2012-01-01
We present here the first high resolution melt (HRM) assay to quantitatively analyze differences in murine DNA methylation levels utilizing CpG methylation of Long Interspersed Elements-1 (LINE1 or L1). By calculating the integral difference in melt temperature between samples and a methylated control, and biasing PCR primers for unmethylated CpGs, the assay demonstrates enhanced sensitivity to detect changes in methylation in a cell line treated with low doses of 5-aza-2'-deoxycytidine (5-aza). The L1 assay was confirmed to be a good marker of changes in DNA methylation of L1 elements at multiple regions across the genome when compared with total 5-methyl-cytosine content, measured by Liquid Chromatography-Mass Spectrometry (LC-MS). The assay design was also used to detect changes in methylation at other murine repeat elements (B1 and Intracisternal-A-particle Long-terminal Repeat elements). Pyrosequencing analysis revealed that L1 methylation changes were non-uniform across the CpGs within the L1-HRM target region, demonstrating that the L1 assay can detect small changes in CpG methylation among a large pool of heterogeneously methylated DNA templates. Application of the assay to various tissues from Balb/c and CBA mice, including previously unreported peripheral blood (PB), revealed a tissue hierarchy (from hypermethylated to hypomethylated) of PB > kidney > liver > prostate > spleen. CBA mice demonstrated overall greater methylation than Balb/c mice, and male mice demonstrated higher tissue methylation compared with female mice in both strains. Changes in DNA methylation have been reported to be an early and fundamental event in the pathogenesis of many human diseases, including cancer. Mouse studies designed to identify modulators of DNA methylation, the critical doses, relevant time points and the tissues affected are limited by the low throughput nature and exorbitant cost of many DNA methylation assays. The L1 assay provides a high throughput, inexpensive
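The quantification step described above, integrating the melt-temperature difference between a sample and a fully methylated control, can be sketched numerically. The logistic curve shape and all temperatures below are hypothetical stand-ins for real HRM fluorescence data:

```python
import numpy as np

def melt_curve(temps, tm, steepness=1.0):
    # Hypothetical normalized fluorescence: high below the melting point Tm,
    # dropping toward zero above it (a logistic stand-in for real HRM data).
    return 1.0 / (1.0 + np.exp(steepness * (temps - tm)))

def integral_difference(temps, sample, control):
    # Trapezoidal area between the methylated-control and sample curves.
    # Less methylated DNA melts earlier, so a larger area = less methylation.
    diff = control - sample
    return float(np.sum(0.5 * (diff[1:] + diff[:-1]) * np.diff(temps)))

temps = np.linspace(70.0, 90.0, 401)    # degrees Celsius
control = melt_curve(temps, tm=82.0)    # fully methylated control
sample = melt_curve(temps, tm=79.5)     # partially demethylated sample
score = integral_difference(temps, sample, control)
```

For idealized logistic curves, the area equals the shift in melt temperature, so a larger score maps monotonically to lower methylation.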
Eigenvalue sensitivity analysis of planar frames with variable joint and support locations
NASA Technical Reports Server (NTRS)
Chuang, Ching H.; Hou, Gene J. W.
1991-01-01
Two sensitivity equations are derived in this study based upon the continuum approach for eigenvalue sensitivity analysis of planar frame structures with variable joint and support locations. A variational form of an eigenvalue equation is first derived in which all of the quantities are expressed in the local coordinate system attached to each member. The material derivative of this variational equation is then taken to account for changes in a member's length and orientation resulting from the perturbation of joint and support locations. Finally, eigenvalue sensitivity equations are formulated either in domain quantities (by the domain method) or in boundary quantities (by the boundary method). It is concluded that the sensitivity equation derived by the boundary method is more efficient in computation but less accurate than that of the domain method. Nevertheless, both are superior in computational efficiency to the conventional direct differentiation method and the finite difference method.
Nguyen, Tony B.; Pai, M. A.
2014-07-10
Real-time stability evaluation and preventive scheduling offer many challenges in a stressed power system. Trajectory sensitivity analysis (TSA) is a useful tool for these and other applications in the emerging smart grid area. In this chapter we outline the basic approach of TSA: extract suitable information from the data and develop reliable metrics or indices to evaluate the proximity of the system to an unstable condition. Trajectory sensitivities can be used to compute critical parameters in a power system, such as the clearing time of circuit breakers or tie line flow, by developing suitable norms for ease of interpretation. The TSA technique has the advantage that model complexity is not a limitation, and the sensitivities can be computed numerically. Suitable metrics are developed from these sensitivities. The TSA technique can be extended to perform preventive rescheduling. A brief discussion of other applications of TSA, such as the placement of distributed generation, is also given.
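As a toy illustration of computing a trajectory sensitivity numerically and summarizing it with a norm, consider a single-machine swing equation with a fault cleared at an adjustable time. The dynamics, parameter values, and the infinity-norm metric are all simplified assumptions, not the chapter's actual formulation:

```python
import numpy as np

def simulate(clearing_time, dt=1e-3, t_end=2.0):
    # Toy one-machine swing dynamics (all parameters hypothetical): a fault
    # weakens the network until it is cleared at `clearing_time` seconds.
    delta, omega = 0.5, 0.0
    M, D, Pm = 0.1, 0.05, 0.8            # inertia, damping, mechanical power
    traj = []
    for step in range(int(t_end / dt)):
        t = step * dt
        pe_max = 0.3 if t < clearing_time else 1.2   # faulted vs. post-fault
        omega += dt * (Pm - pe_max * np.sin(delta) - D * omega) / M
        delta += dt * omega
        traj.append(delta)
    return np.array(traj)

# Trajectory sensitivity to the clearing time by central differences,
# summarized with an infinity norm for ease of interpretation.
h = 1e-3
sens = (simulate(0.2 + h) - simulate(0.2 - h)) / (2 * h)
metric = float(np.max(np.abs(sens)))
```

A larger metric indicates the trajectory is more strongly affected by the clearing time, i.e. the system is operating closer to its stability boundary.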
2011-01-01
Background Verbal autopsies provide valuable information for studying mortality patterns in populations that lack reliable vital registration data. Methods for transforming verbal autopsy results into meaningful information for health workers and policymakers, however, are often costly or complicated to use. We present a simple additive algorithm, the Tariff Method (termed Tariff), which can be used for assigning individual cause of death and for determining cause-specific mortality fractions (CSMFs) from verbal autopsy data. Methods Tariff calculates a score, or "tariff," for each cause, for each sign/symptom, across a pool of validated verbal autopsy data. The tariffs are summed for a given response pattern in a verbal autopsy, and this sum (score) provides the basis for predicting the cause of death in a dataset. We implemented this algorithm and evaluated the method's predictive ability, both in terms of chance-corrected concordance at the individual cause assignment level and in terms of CSMF accuracy at the population level. The analysis was conducted separately for adult, child, and neonatal verbal autopsies across 500 pairs of train-test validation verbal autopsy data. Results Tariff is capable of outperforming physician-certified verbal autopsy in most cases. In terms of chance-corrected concordance, the method achieves 44.5% in adults, 39% in children, and 23.9% in neonates. CSMF accuracy was 0.745 in adults, 0.709 in children, and 0.679 in neonates. Conclusions Verbal autopsies can be an efficient means of obtaining cause of death data, and Tariff provides an intuitive, reliable method for generating individual cause assignment and CSMFs. The method is transparent and flexible and can be readily implemented by users without training in statistics or computer science. PMID:21816107
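The additive scoring at the heart of Tariff can be sketched in a few lines. The tariff values, causes, and symptom names below are invented for illustration; real tariffs are learned from a pool of validated verbal autopsy data:

```python
# Hypothetical tariff table: tariffs[cause][symptom], learned in practice
# from validated verbal autopsy data.
tariffs = {
    "pneumonia":    {"fever": 4.0, "cough": 6.5, "chest_pain": 2.0},
    "road_traffic": {"injury": 9.0, "fever": -2.0},
    "diarrhoea":    {"fever": 1.5, "loose_stool": 8.0},
}

def tariff_score(cause, symptoms):
    # Sum the tariffs of the endorsed signs/symptoms for one candidate cause.
    return sum(tariffs[cause].get(s, 0.0) for s in symptoms)

def assign_cause(symptoms):
    # Predict the cause with the highest summed tariff score.
    return max(tariffs, key=lambda c: tariff_score(c, symptoms))

def csmf(symptom_sets):
    # Cause-specific mortality fractions: share of deaths assigned per cause.
    assigned = [assign_cause(s) for s in symptom_sets]
    return {c: assigned.count(c) / len(assigned) for c in tariffs}
```

The transparency the abstract emphasizes is visible here: each prediction is just a sum of per-symptom weights, so any assignment can be audited by inspecting which symptoms contributed.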
Busschaert, Pieter; Geeraerd, Annemie H; Uyttendaele, Mieke; Van Impe, Jan F
2011-08-01
The aim of quantitative microbiological risk assessment is to estimate the risk of illness caused by the presence of a pathogen in a food type, and to study the impact of interventions. Because of inherent variability and uncertainty, risk assessments are generally conducted stochastically, and when possible it is advised to characterize variability separately from uncertainty. Sensitivity analysis indicates to which of the input variables the outcome of a quantitative microbiological risk assessment is most sensitive. Although a number of methods exist to apply sensitivity analysis to a risk assessment with probabilistic input variables (such as contamination, storage temperature, storage duration, etc.), it is challenging to perform sensitivity analysis when a risk assessment includes a separate characterization of the variability and uncertainty of input variables. A procedure is proposed that focuses on the relation between risk estimates obtained by Monte Carlo simulation and the location of pseudo-randomly sampled input variables within the uncertainty and variability distributions. Within this procedure, two methods, an ANOVA-like model and Sobol sensitivity indices, are used to obtain and compare the impact of the variability and of the uncertainty of all input variables, and of model uncertainty and scenario uncertainty. As a case study, this methodology is applied to a risk assessment estimating the risk of contracting listeriosis from the consumption of deli meats.
NASA Astrophysics Data System (ADS)
Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef
2016-12-01
Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally intensive forward modeling codes, and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for the spatial uncertainty typical of Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges for fixing insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.
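The classification step of regionalized sensitivity analysis, splitting Monte Carlo runs into behavioral and non-behavioral groups and comparing each parameter's conditional distributions, can be sketched as follows. The forward model and parameter names are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(perm, poro):
    # Hypothetical forward model: the response depends strongly on
    # permeability and only weakly on porosity.
    return 3.0 * perm + 0.1 * poro

perm = rng.uniform(0.0, 1.0, 5000)   # Monte Carlo samples of the inputs
poro = rng.uniform(0.0, 1.0, 5000)
resp = forward_model(perm, poro)

# RSA splits the runs into "behavioral" and "non-behavioral" groups.
behavioral = resp < np.median(resp)

def ks_distance(x, mask):
    # Two-sample Kolmogorov-Smirnov distance between a parameter's
    # distributions in the two groups; a large distance flags a
    # sensitive parameter.
    grid = np.sort(x)
    cdf_a = np.searchsorted(np.sort(x[mask]), grid, side="right") / mask.sum()
    cdf_b = np.searchsorted(np.sort(x[~mask]), grid, side="right") / (~mask).sum()
    return float(np.max(np.abs(cdf_a - cdf_b)))

ks_perm = ks_distance(perm, behavioral)
ks_poro = ks_distance(poro, behavioral)
```

A conditional effect in the paper's sense would repeat this comparison after restricting the sample to a fixed level of another parameter.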
Multimedia Environmental Pollutant Assessment System (MEPAS) sensitivity analysis of computer codes
Doctor, P.G.; Miley, T.B.; Cowan, C.E.
1990-04-01
The Multimedia Environmental Pollutant Assessment System (MEPAS) is a computer-based methodology developed by the Pacific Northwest Laboratory (PNL) for the US Department of Energy (DOE) to estimate health impacts from the release of hazardous chemicals and radioactive materials. The health impacts are estimated from the environmental inventory and release or emission rate, constituent transport, constituent uptake and toxicity, and exposure route parameters. As part of MEPAS development and evaluation, PNL performed a formal parametric sensitivity analysis to determine the sensitivity of the model output to the input parameters, and to provide a systematic and objective method for determining the relative importance of the input parameters. The sensitivity analysis determined the sensitivity of the Hazard Potential Index (HPI) values to combinations of transport pathway and exposure routes important to evaluating environmental problems at DOE sites. Two combinations of transport pathways and exposure routes were evaluated. The sensitivity analysis focused on evaluating the effect of variation in user-specified parameters, such as constituent inventory, release and emission rates, and parameters describing the transport and exposure routes. The constituents used were strontium-90, yttrium-90, tritium, arsenic, mercury, polychlorinated biphenyls, toluene, and perchloroethylene. 28 refs., 3 figs., 46 tabs.
Application of a sensitivity analysis technique to high-order digital flight control systems
NASA Technical Reports Server (NTRS)
Paduano, James D.; Downing, David R.
1987-01-01
A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed by this sensitivity technique, if it is applied with care. A computer program called SVA was written to implement the singular-value sensitivity analysis technique, so computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public domain program, running on the NASA/Dryden Elxsi computer.
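The underlying computation, the minimum singular value of the return difference matrix I + L(jw) over frequency and its gradient with respect to a controller gain, can be sketched as follows. The 2x2 loop transfer matrix is hypothetical, and the gradient is taken here by a finite difference in place of the analytic singular-value gradient equations the report derives:

```python
import numpy as np

def loop_gain(w, k):
    # Hypothetical 2x2 multiloop transfer matrix L(jw) with controller gain k.
    s = 1j * w
    return np.array([[k / (s + 1.0), 0.1 / (s + 2.0)],
                     [0.2 / (s + 1.0), k / (s + 3.0)]])

def min_sv_return_difference(k, freqs):
    # Minimum singular value of the return difference matrix I + L(jw) over
    # frequency: a multiloop relative-stability margin (smaller = less margin).
    return min(np.linalg.svd(np.eye(2) + loop_gain(w, k), compute_uv=False)[-1]
               for w in freqs)

freqs = np.logspace(-2.0, 2.0, 200)
k = 2.0
margin = min_sv_return_difference(k, freqs)

# Sensitivity of the margin to the gain by a central difference.
h = 1e-4
grad = (min_sv_return_difference(k + h, freqs)
        - min_sv_return_difference(k - h, freqs)) / (2.0 * h)
```

The sign and size of the gradient indicate whether increasing the gain improves or erodes the stability margin, which is the information the report uses to rank system elements.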
NASA Technical Reports Server (NTRS)
Rais-Rohani, Masoud
2003-01-01
This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
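The global strategy's core step, fitting a single second-order response surface to samples of an expensive analysis, can be sketched with an ordinary least-squares fit. The "buckling load" function below is a cheap hypothetical stand-in for the real buckling analysis, and the two-variable design space is a simplification of the report's 23 random variables:

```python
import numpy as np

rng = np.random.default_rng(4)

def buckling_load(x):
    # Cheap hypothetical stand-in for the expensive axial buckling analysis.
    return (10.0 + 2.0 * x[:, 0] - 1.5 * x[:, 1]
            + 0.8 * x[:, 0] * x[:, 1] + 0.3 * x[:, 1] ** 2)

# Sample the feasible design space and fit one global second-order RS model.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = buckling_load(X)

# Second-order basis: [1, x1, x2, x1^2, x1*x2, x2^2].
Phi = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                       X[:, 0] ** 2, X[:, 0] * X[:, 1], X[:, 1] ** 2])
coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
```

The sequential local alternative in the report would instead fit many first-order models of this kind, each over a small subregion of the design space.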
Beccali, Marco; Cellura, Maurizio; Iudicello, Maria; Mistretta, Marina
2010-07-01
Though many studies have addressed the agro-food sector in the EU and Italy and its environmental impacts, the literature contains few works on LCA application to citrus products. This paper represents one of the first studies on the environmental impacts of citrus products, aiming to suggest feasible strategies and actions to improve their environmental performance. In particular, it is part of a research effort to estimate the environmental burdens associated with the production of the following citrus-based products: essential oil, natural juice and concentrated juice from oranges and lemons. The life cycle assessment of these products, published in a previous paper, had highlighted significant environmental issues in terms of energy consumption, associated CO(2) emissions, and water consumption. Starting from those results, the authors carry out an improvement analysis of the assessed production system, whereby sustainable scenarios for saving water and energy are proposed to reduce the environmental burdens of the examined production system. In addition, a sensitivity analysis is performed to estimate the effects of the chosen methods, giving data on the robustness of the study. Uncertainty related to allocation methods, secondary data sources, and initial assumptions on cultivation, transport modes, and waste management is analysed. The results of the performed analyses show that every assessed eco-profile is influenced differently by the uncertainty study. Different assumptions on initial data and methods produced considerable variations in the energy and environmental performances of the final products. Besides, the results show energy and environmental benefits that clearly improve the products' eco-profile: reusing purified water for irrigation, using rail for the delivery of final products when possible, and adopting efficient technologies, such as mechanical vapour recompression, in the pasteurisation and
Application of Temperature Sensitivities During Iterative Strain-Gage Balance Calibration Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
A new method is discussed that may be used to correct wind tunnel strain-gage balance load predictions for the influence of residual temperature effects at the location of the strain-gages. The method was designed for the iterative analysis technique that is used in the aerospace testing community to predict balance loads from strain-gage outputs during a wind tunnel test. The new method implicitly applies temperature corrections to the gage outputs during the load iteration process. Therefore, it can use uncorrected gage outputs directly as input for the load calculations. The new method is applied in several steps. First, balance calibration data is analyzed in the usual manner assuming that the balance temperature was kept constant during the calibration. Then, the temperature difference relative to the calibration temperature is introduced as a new independent variable for each strain-gage output. Therefore, sensors must exist near the strain-gages so that the required temperature differences can be measured during the wind tunnel test. In addition, the format of the regression coefficient matrix needs to be extended so that it can support the new independent variables. In the next step, the extended regression coefficient matrix of the original calibration data is modified by using the manufacturer-specified temperature sensitivity of each strain-gage as the regression coefficient of the corresponding temperature difference variable. Finally, the modified regression coefficient matrix is converted to a data reduction matrix that the iterative analysis technique needs for the calculation of balance loads. Original calibration data and modified check load data of NASA's MC60D balance are used to illustrate the new method.
A meta-analysis comparing the sensitivity of bees to pesticides.
Arena, Maria; Sgolastra, Fabio
2014-04-01
The honey bee Apis mellifera, the test species used in the current environmental risk assessment procedure, is generally considered extremely sensitive to pesticides compared to other bee species, although a quantitative approach for comparing the difference in sensitivity among bees had not previously been reported. A systematic review of the relevant literature followed by a meta-analysis was performed. Both the contact and oral acute LD50 and the chronic LC50 reported in laboratory studies for as many substances as possible were extracted from the papers in order to compare the sensitivity to pesticides of honey bees and other bee species (Apiformes). The sensitivity ratio R between the endpoint for species a (A. mellifera) and species s (bees other than A. mellifera) was calculated for a total of 150 case studies including 19 bee species. A ratio higher than 1 indicates that species s is more sensitive to pesticides than the honey bee. The meta-analysis showed a high variability of sensitivity among bee species (R from 0.001 to 2085.7); however, in approximately 95% of the cases the sensitivity ratio was below 10. The effect of pesticides on domestic and wild bees depends on the intrinsic sensitivity of each bee species as well as its specific life cycle, nesting activity and foraging behaviour. Current data indicate a need for more comparative information between honey bees and non-Apis bees as well as separate pesticide risk assessment procedures for non-Apis bees.
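The sensitivity ratio itself is a one-line computation; the LD50 values below are invented for illustration:

```python
def sensitivity_ratio(endpoint_apis, endpoint_other):
    # R = endpoint(A. mellifera) / endpoint(other bee species).
    # A lower LD50 means higher sensitivity, so R > 1 means the other
    # species is more sensitive to the pesticide than the honey bee.
    return endpoint_apis / endpoint_other

# Hypothetical contact LD50 values in micrograms of active substance per bee.
r = sensitivity_ratio(endpoint_apis=0.08, endpoint_other=0.02)
# r is about 4: the other species is roughly four times more sensitive.
```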
Analysis of synthetic motor oils for additive elements by ICP-AES
Williams, M.C.; Salmon, S.G.
1995-12-31
Standard motor oils are made by blending paraffinic or naphthenic mineral oil base stocks with additive packages containing anti-wear agents, dispersants, corrosion inhibitors, and viscosity index improvers. The blender can monitor the correct addition of the additives by determining the additive elements in samples dissolved in a solvent by ICP-AES. Internal standardization is required to control sample transport interferences due to differences in viscosity between samples and standards. Synthetic motor oils, made with poly-alpha-olefins and trimethylol propane esters, instead of mineral oils, pose an additional challenge since these compounds affect the plasma as well as having sample transport interference considerations. The synthetic lubricant base stocks add significant oxygen to the sample matrix, which makes the samples behave differently than standards prepared in mineral oil. Determination of additive elements in synthetic motor oils will be discussed.
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
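For a linear system, the direct and adjoint routes to the same first-order sensitivity can be sketched as follows, using a small matrix system K(p)u = f and a scalar response g = c·u as a hypothetical stand-in for a finite element model:

```python
import numpy as np

# Toy stand-in for a discretized structure: K(p) u = f, response g = c @ u,
# with p a single stiffness parameter (all values hypothetical).
p = 2.0
f = np.array([1.0, 0.5])
c = np.array([1.0, 1.0])

def K(p):
    return np.array([[2.0 + p, -1.0],
                     [-1.0, 1.0 + p]])

dK_dp = np.eye(2)                       # derivative of K with respect to p
u = np.linalg.solve(K(p), f)

# Direct approach: one extra solve per design parameter.
du_dp = np.linalg.solve(K(p), -dK_dp @ u)
direct = c @ du_dp

# Adjoint approach: one extra solve per response, reused for all parameters.
lam = np.linalg.solve(K(p).T, c)
adjoint = -lam @ dK_dp @ u
```

The two numbers agree, and the cost trade-off between them (solves per parameter versus solves per response) is what drives the paper's conclusion that second derivatives are cheapest when first-order sensitivities and adjoint vectors are combined.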
NASA Astrophysics Data System (ADS)
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-01
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that have hitherto been difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
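As a much-simplified illustration of the degree-of-rate-control measure the abstract refers to, here it is evaluated on a deterministic two-step mean-field cycle with an analytic turnover frequency. Because nothing is stochastic in this toy, plain central differences suffice, which is precisely what fails for the lattice models the paper targets:

```python
def tof(k1, k2):
    # Steady-state turnover frequency of a two-step cycle (hypothetical):
    # adsorption (k1) followed by reaction (k2) on a single site type.
    theta = k1 / (k1 + k2)       # steady-state coverage
    return k2 * theta

def degree_of_rate_control(i, ks, h=1e-6):
    # DRC_i = (k_i / TOF) * dTOF/dk_i, via a relative central difference.
    up, dn = list(ks), list(ks)
    up[i] *= 1 + h
    dn[i] *= 1 - h
    return (tof(*up) - tof(*dn)) / (2 * h * tof(*ks))

ks = [1.0, 4.0]
drc = [degree_of_rate_control(i, ks) for i in range(2)]
```

For this cycle the slower step carries most of the rate control, and the coefficients sum to one, a standard consistency check for degree-of-rate-control analyses.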
NASA Astrophysics Data System (ADS)
Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye
2016-03-01
Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level in particular improve the ability of cable-network antenna structures to resist the effects of the uncertainties on antenna performance.
Sensitivity analysis of add-on price estimate for select silicon wafering technologies
NASA Technical Reports Server (NTRS)
Mokashi, A. R.
1982-01-01
The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivities are presented for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST). Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of the price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
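The sensitivity calculation can be sketched as a relative (elasticity-style) perturbation of each cost and production parameter. The price relation below is a grossly simplified hypothetical stand-in for the actual IPEG formulation, and every number is invented:

```python
def add_on_price(equipment, labor, materials, yield_frac, slicing_rate):
    # Hypothetical simplified price relation: annualized costs divided by
    # good slices produced per year (8000 operating hours/year assumed).
    annual_cost = 0.5 * equipment + labor + materials
    slices_per_year = slicing_rate * 8000.0 * yield_frac
    return annual_cost / slices_per_year

base = dict(equipment=100_000.0, labor=50_000.0, materials=20_000.0,
            yield_frac=0.9, slicing_rate=25.0)

def sensitivity(param, rel_step=0.01):
    # Percent change in add-on price per percent increase in one parameter.
    bumped = dict(base)
    bumped[param] *= 1 + rel_step
    return (add_on_price(**bumped) / add_on_price(**base) - 1) / rel_step
```

Ranking the parameters by the magnitude of this elasticity is the kind of result that points technology development toward, say, yield improvement rather than labor cost reduction.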
Practical implementation of an accurate method for multilevel design sensitivity analysis
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1987-01-01
Solution techniques for handling large scale engineering optimization problems are reviewed. Potentials for practical applications as well as their limitations are discussed. A new solution algorithm for design sensitivity is proposed. The algorithm is based upon the multilevel substructuring concept coupled with the adjoint method of sensitivity analysis. There are no approximations involved in the present algorithm except the usual approximations introduced by the discretization of the finite element model. Results from the six- and thirty-bar planar truss problems show that the proposed multilevel scheme for sensitivity analysis is more effective (in terms of in-core computer memory and total CPU time) than a conventional (one-level) scheme, even on small problems. The new algorithm is expected to perform better for larger problems, and its application on the new generation of computer hardware with parallel processing capability is very promising.
Polarization sensitivity analysis of an earth remote sensing instrument - The MODIS-N phase B study
NASA Technical Reports Server (NTRS)
Waluschka, E.; Silverglate, P.; Ftaclas, C.; Turner, A.
1992-01-01
Polarization analysis software that employs Jones matrix formalism to calculate the polarization sensitivity of an instrument design was developed at Hughes Danbury Optical Systems. The code is capable of analyzing the full ray bundle at its angles of incidence for each optical surface. Input is based on the system ray trace and the thin film coating design at each surface. The MODIS-N (Moderate Resolution Imaging Spectrometer) system is used to demonstrate that it is possible to meet stringent requirements on polarization insensitivity associated with planned remote sensing instruments. Analysis indicates that a polarization sensitivity less than or equal to 2 percent was achieved in all desired spectral bands at all pointing angles, per specification. Polarization sensitivities were as high as 10 percent in similar remote sensing instruments.
Dudek, Bartłomiej; Krzyżewska, Eva; Kapczyńska, Katarzyna; Rybka, Jacek; Pawlak, Aleksandra; Korzekwa, Kamila; Klausa, Elżbieta; Bugla-Płoskońska, Gabriela
2016-01-01
Differential analysis of outer membrane composition of S. Enteritidis strains, resistant to 50% normal human serum (NHS) was performed in order to find factors influencing the resistance to higher concentrations of NHS. Ten S. Enteritidis clinical strains, resistant to 50% NHS, all producing very long lipopolysaccharide, were subjected to the challenge of 75% NHS. Five extreme strains: two resistant and three sensitive to 75% NHS, were chosen for the further analysis of outer membrane proteins composition. Substantial differences were found in the levels of particular outer membrane proteins between resistant and sensitive strains, i.e. outer membrane protease E (PgtE) was present mainly in resistant strains, while sensitive strains possessed a high level of flagellar hook-associated protein 2 (FliD) and significantly higher levels of outer membrane protein A (OmpA). PMID:27695090
Malaguerra, Flavio; Chambon, Julie C; Bjerg, Poul L; Scheutz, Charlotte; Binning, Philip J
2011-10-01
A fully kinetic biogeochemical model of sequential reductive dechlorination (SERD) occurring in conjunction with lactate and propionate fermentation, iron reduction, sulfate reduction, and methanogenesis was developed. Production and consumption of molecular hydrogen (H(2)) by microorganisms were modeled using modified Michaelis-Menten kinetics and implemented in the geochemical code PHREEQC. The model was calibrated using a Shuffled Complex Evolution Metropolis algorithm against observations of chlorinated solvents, organic acids, and H(2) concentrations in laboratory batch experiments of complete trichloroethene (TCE) degradation in natural sediments. Global sensitivity analysis was performed using the Morris method and Sobol sensitivity indices to identify the most influential model parameters. Results show that the sulfate concentration and fermentation kinetics are the most important factors influencing SERD. The sensitivity analysis also suggests that it is not possible to simplify the model description if all system behaviors are to be well described.
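A minimal sketch of Morris-style screening, using a simplified radial one-at-a-time design rather than full Morris trajectories; the response function and the roles of its inputs are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x):
    # Toy model (hypothetical): strong in x0, weaker in x1, negligible in x2.
    return 5.0 * x[0] + 0.5 * x[1] ** 2 + 0.01 * x[2]

def mu_star(model, dim, n_base=200, delta=0.1):
    # Mean absolute elementary effect per input on [0, 1]^dim, using a
    # simplified radial one-at-a-time design (one step per input per base).
    effects = np.zeros((n_base, dim))
    for t in range(n_base):
        x = rng.uniform(0.0, 1.0 - delta, dim)
        y0 = model(x)
        for i in range(dim):
            xi = x.copy()
            xi[i] += delta
            effects[t, i] = abs(model(xi) - y0) / delta
    return effects.mean(axis=0)

ranking = mu_star(response, dim=3)
```

The mu* values rank inputs by influence at screening cost; the Sobol indices mentioned in the abstract then quantify the variance shares of the parameters that survive the screen.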
Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.
Gnambs, Timo; Kaspar, Kai
2015-12-01
In surveys, individuals tend to misreport behaviors that conflict with prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; in particular, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N = 125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlights the advantages of computerized survey modes for the assessment of sensitive topics.
Strategies for cost-effective carbon reductions: A sensitivity analysis of alternative scenarios
Gumerman, Etan; Koomey, Jonathan G.; Brown, Marilyn
2001-07-11
Analyses of alternative futures often present results for a limited set of scenarios, with little if any sensitivity analysis to identify the factors affecting the scenario results. This approach creates an artificial impression of certainty associated with the scenarios considered, and inhibits understanding of the underlying forces. This paper summarizes the economic and carbon savings sensitivity analysis completed for the Scenarios for a Clean Energy Future