Science.gov

Sample records for design sensitivity analysis

  1. Design sensitivity analysis of nonlinear structural response

    NASA Technical Reports Server (NTRS)

    Cardoso, J. B.; Arora, J. S.

    1987-01-01

    A unified theory is described of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
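
    (Illustrative note: the report's general sensitivity formula is not reproduced here; a generic adjoint-based version, with notation assumed rather than taken from the paper, reads as follows for a performance functional psi(u, b) constrained by state equations R(u, b) = 0.)

        \frac{d\psi}{db} = \frac{\partial\psi}{\partial b} + \lambda^{\top}\frac{\partial R}{\partial b},
        \qquad\text{with the adjoint state defined by}\qquad
        \left(\frac{\partial R}{\partial u}\right)^{\!\top}\lambda = -\left(\frac{\partial\psi}{\partial u}\right)^{\!\top}.

    One adjoint solve then serves all design variables b, whether they are sizing, shape, or material-selection quantities.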

  2. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  3. Sensitivity analysis of Stirling engine design parameters

    SciTech Connect

    Naso, V.; Dong, W.; Lucentini, M.; Capata, R.

    1998-07-01

    In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle and dead volume ratio) have to be assumed; in practice it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations of these parameters.
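
    (Illustrative sketch: the paper's engine model is not reproduced here. The snippet below shows only the generic procedure, normalized sensitivity coefficients of a performance function with respect to the four named parameters, using a hypothetical toy function and central finite differences.)

        import numpy as np

        def power_surrogate(tau, kappa, alpha, chi):
            """Hypothetical, simplified indicated-power surrogate (NOT the
            paper's model): tau = Tc/Th temperature ratio, kappa = swept
            volume ratio, alpha = phase angle [rad], chi = dead volume ratio."""
            return (1.0 - tau) * kappa * np.sin(alpha) / (1.0 + kappa + chi)

        def normalized_sensitivity(f, x0, i, rel_step=1e-3):
            """Central-difference estimate of (x_i / f) * df/dx_i at x0."""
            x_plus, x_minus = list(x0), list(x0)
            h = rel_step * x0[i]
            x_plus[i] += h
            x_minus[i] -= h
            dfdx = (f(*x_plus) - f(*x_minus)) / (2.0 * h)
            return x0[i] * dfdx / f(*x0)

        x0 = [0.5, 1.0, 1.2, 0.3]   # tau, kappa, alpha, chi (illustrative values)
        names = ["temperature ratio", "swept volume ratio", "phase angle", "dead volume ratio"]
        for i, name in enumerate(names):
            print(f"{name:22s}: {normalized_sensitivity(power_surrogate, x0, i):+.3f}")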

  4. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  5. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  6. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we have developed a theoretical analysis, and written a computer code based on it, to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparing its results with those obtained by 'finite difference' techniques, that is, computing the flow using the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
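
    (Illustrative sketch of the LU-reuse idea described above, with a generic linear system standing in for the discretized potential-flow equations; the variable names are assumptions, not the grant code.)

        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        rng = np.random.default_rng(0)
        n, n_design = 200, 5

        # Stand-in for the (steady or unsteady) flow Jacobian; in the actual
        # solver this matrix comes from the full potential discretization.
        A = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        lu, piv = lu_factor(A)          # factored once for the nominal solution
        u = lu_solve((lu, piv), b)      # nominal flow solution

        # Each design variable contributes a right-hand side dR/d(alpha_k);
        # random placeholders stand in for the geometry-perturbation terms.
        dR_dalpha = rng.standard_normal((n, n_design))

        # Sensitivities du/d(alpha_k): only forward/back substitutions, no new
        # factorization -- which is why saving the LU factors keeps the extra
        # cost small compared with re-running the nominal solver.
        du_dalpha = lu_solve((lu, piv), -dR_dalpha)
        print(du_dalpha.shape)          # (200, 5)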

  7. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°/75°C, -55°/100°C, and -55°/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  8. Treatment of body forces in boundary element design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Saigal, Sunil; Kane, James H.; Aithal, R.; Cheng, Jizu

    1989-01-01

    The inclusion of body forces has received a good deal of attention in boundary element research. The consideration of such forces is essential in the design of high-performance components such as fan and turbine disks in a gas turbine engine. Due to their critical performance requirements, optimal shapes are often desired for these components. The boundary element method (BEM) offers the possibility of being an efficient method for iterative analyses such as shape optimization. Implicit differentiation of the boundary integral equations is performed to obtain the sensitivity equations. The body forces are accounted for either by particular integrals for uniform body forces or by a surface integration for non-uniform body forces. The corresponding sensitivity equations for both cases are presented. The validity of the present formulations is established through close agreement with exact analytical results.

  9. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  10. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
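
    (Illustrative sketch of the organizational idea, not the authors' code: for components whose stiffness is linear in the sizing variable, the compliance sensitivity dC/dA_e = -u_e^T (k_e / A_e) u_e is a purely local quantity, so the system-level gradient is assembled by summing per-component terms, just as a finite element code assembles element contributions.)

        import numpy as np

        def component_sensitivity(u_e, k_e_unit):
            """Contribution of one component; k_e_unit is its stiffness per
            unit of the sizing variable (area or thickness)."""
            return -u_e @ k_e_unit @ u_e

        # Placeholder component data (local displacements and unit stiffnesses)
        # for three components of a built-up structure.
        rng = np.random.default_rng(1)
        components = []
        for ndof in (4, 4, 8):
            m = rng.standard_normal((ndof, ndof))
            components.append((rng.standard_normal(ndof), m @ m.T))

        terms = [component_sensitivity(u_e, k_unit) for u_e, k_unit in components]
        print("per-component terms:", terms, " system total:", sum(terms))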

  11. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide

  12. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
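
    (Illustrative sketch, not the paper's aeroservoelastic model: a toy LQR problem showing the kind of prediction-versus-recomputation check the abstract describes, with the gain sensitivity obtained here by finite differences; the paper's contribution is the analytical counterpart of this derivative.)

        import numpy as np
        from scipy.linalg import solve_continuous_are

        def lqr_gain(omega):
            """Optimal state-feedback gain for a toy oscillator parameterized
            by its natural frequency omega (a stand-in for a fixed problem
            parameter of the control design)."""
            A = np.array([[0.0, 1.0], [-omega**2, -0.1]])
            B = np.array([[0.0], [1.0]])
            Q, R = np.eye(2), np.array([[1.0]])
            P = solve_continuous_are(A, B, Q, R)
            return np.linalg.solve(R, B.T @ P)      # K = R^{-1} B^T P

        omega0, h = 2.0, 1e-4
        K0 = lqr_gain(omega0)
        dK_domega = (lqr_gain(omega0 + h) - lqr_gain(omega0 - h)) / (2.0 * h)

        # First-order prediction of the redesigned gain at a perturbed
        # parameter value, validated against a full recomputation.
        delta = 0.2
        print("predicted :", K0 + dK_domega * delta)
        print("recomputed:", lqr_gain(omega0 + delta))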

  13. Design tradeoff studies and sensitivity analysis, appendix B

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. The hybrid is much less sensitive than a conventional vehicle is, in terms of the reduction in total fuel consumption and resultant decreases in operating expense, to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road load conditions. It is concluded that modifications to package the propulsion system and battery pack can be easily accommodated within the confines of a modified carryover body such as the Ford LTD.

  14. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.

  15. On 3-D modeling and automatic regridding in shape design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Yao, Tse-Min

    1987-01-01

    The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.

  16. Sensitivity analysis of physiological factors in space habitat design

    NASA Technical Reports Server (NTRS)

    Billingham, J.

    1982-01-01

    The costs incurred by design conservatism in space habitat design are discussed from a structural standpoint, and areas of physiological research into less than earth-normal conditions that offer the greatest potential decrease in habitat construction and operating costs are studied. The established range of human tolerance limits is defined for those physiological conditions which directly affect habitat structural design. These entire ranges or portions thereof are set as habitat design constraints as a function of habitat population and degree of ecological closure. Calculations are performed to determine the structural weight and cost associated with each discrete population size and its selected environmental conditions, on the basis of habitable volume equivalence for four basic habitat configurations: sphere, cylinder with hemispherical ends, torus, and crystal palace.
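
    (Illustrative, first-order sketch, not the paper's structural model: thin-wall shell mass at fixed habitable volume scales linearly with the design internal pressure, which is why relaxing earth-normal physiological requirements can pay off structurally. The material properties and cylinder aspect ratio below are assumptions.)

        import numpy as np

        def shell_mass(pressure, volume, shape, sigma=200e6, rho=2800.0, sf=2.0):
            """pressure [Pa], volume [m^3]; sigma = allowable stress,
            rho = density, sf = safety factor; cylinder length = 4 * radius."""
            p = sf * pressure
            if shape == "sphere":
                r = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
                t = p * r / (2.0 * sigma)                  # spherical membrane stress
                return 4.0 * np.pi * r**2 * t * rho
            r = (volume / (4.0 * np.pi)) ** (1.0 / 3.0)    # V = pi r^2 (4 r)
            t = p * r / sigma                              # hoop stress, cylinder wall
            return 2.0 * np.pi * r * (4.0 * r) * t * rho   # wall only, ends neglected

        V = 1.0e5                                          # m^3 habitable volume
        for p_kpa in (101.3, 70.0, 50.0):                  # earth-normal vs reduced pressure
            print(p_kpa, "kPa:",
                  round(shell_mass(p_kpa * 1e3, V, "sphere") / 1e3), "t (sphere), ",
                  round(shell_mass(p_kpa * 1e3, V, "cylinder") / 1e3), "t (cylinder)")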

  17. Sensitivity analysis and multidisciplinary optimization for aircraft design - Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  18. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  19. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).

  20. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods; i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).
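
    (Illustrative FORM sketch, not the authors' code: the MPP is the point on the limit state g(u) = 0 nearest the origin in standard normal space, its distance is the reliability index beta with Pf roughly Phi(-beta), and the reliability sensitivity to a design parameter is approximated here by finite differences, whereas the paper derives it analytically from the optimality conditions of the MPP search. The limit state below is hypothetical.)

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def g(u, theta):
            # hypothetical limit state; theta plays the role of a design parameter
            return theta - u[0] - 0.5 * u[1]

        def form_beta(theta):
            res = minimize(lambda u: 0.5 * u @ u,            # half squared distance
                           x0=np.zeros(2),
                           constraints={"type": "eq",
                                        "fun": lambda u: g(u, theta)})
            return np.linalg.norm(res.x)

        beta = form_beta(3.0)
        print("beta =", beta, " Pf ~", norm.cdf(-beta))

        h = 1e-4
        print("dbeta/dtheta ~", (form_beta(3.0 + h) - form_beta(3.0 - h)) / (2 * h))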

  1. Parallel-vector design sensitivity analysis in structural dynamics

    NASA Technical Reports Server (NTRS)

    Zhang, Y.; Nguyen, D. T.

    1992-01-01

    This paper presents a parallel-vector algorithm for sensitivity calculations in linear structural dynamics. The proposed alternative formulation works efficiently with the reduced system of dynamic equations, since it eliminates the need for expensive and complicated basis-vector derivatives, which are required in the conventional reduced-system formulation. The relationship between the alternative formulation and the conventional reduced-system formulation has been established, and it has been proven analytically that the two approaches are identical when all the mode shapes are included. This paper validates the proposed alternative algorithm through numerical experiments in which only a small number of mode shapes are used. In addition, a modified mode acceleration method is presented; with it, not only the displacements but also the velocities and accelerations are shown to be improved.

  2. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
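
    (Conceptual sketch of the forward-mode automatic differentiation that a source-transformation tool such as ADIFOR provides; this is illustrative Python with a toy dual-number class and a hypothetical one-line "analysis" routine, not ADIFOR output or the paper's finite element code.)

        class Dual:
            """Minimal forward-mode AD value: carries f and df/dx together."""
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def _wrap(self, other):
                return other if isinstance(other, Dual) else Dual(other)
            def __add__(self, other):
                o = self._wrap(other)
                return Dual(self.value + o.value, self.deriv + o.deriv)
            __radd__ = __add__
            def __mul__(self, other):
                o = self._wrap(other)
                return Dual(self.value * o.value,
                            self.value * o.deriv + self.deriv * o.value)
            __rmul__ = __mul__
            def __truediv__(self, other):
                o = self._wrap(other)
                return Dual(self.value / o.value,
                            (self.deriv * o.value - self.value * o.deriv) / o.value**2)
            def __rtruediv__(self, other):
                return self._wrap(other).__truediv__(self)

        def axial_tip_displacement(P, L, E, A):
            """Stand-in analysis routine: delta = P L / (E A). Run on Dual
            numbers, the unchanged source also returns d(delta)/d(design)."""
            return P * L / (E * A)

        A = Dual(2.0e-3, 1.0)     # seed dA/dA = 1 for the chosen design variable
        d = axial_tip_displacement(1.0e4, 2.0, 70e9, A)
        print("delta =", d.value, " d(delta)/dA =", d.deriv)   # analytic: -delta/A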

  3. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  4. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  5. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
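
    (Illustrative sketch of the incremental, or "delta"/correction, form: the running sensitivity estimate is updated with an approximate, cheap-to-invert operator applied to the exact residual, so the converged answer still satisfies the exact equations. The diagonal operator below is only a stand-in for the spatially split approximate factorization used in the paper.)

        import numpy as np

        rng = np.random.default_rng(2)
        n = 100

        # Stand-in for the exact sensitivity-equation matrix (flow Jacobian);
        # made diagonally dominant so the simple operator below converges.
        A = 0.05 * rng.standard_normal((n, n))
        A += np.diag(np.abs(A).sum(axis=1) + 1.0)
        b = rng.standard_normal(n)               # d(residual)/d(design) right-hand side

        M_inv = np.diag(1.0 / np.diag(A))        # approximate operator M ~ A

        dq = np.zeros(n)                          # sensitivity estimate in delta form
        for it in range(200):
            r = b - A @ dq                        # exact residual
            dq += M_inv @ r                       # correction step
            if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
                break

        print("iterations:", it + 1, " final error:", np.linalg.norm(A @ dq - b))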

  6. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables, and a high tension level in particular can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  7. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for generation and assembly of element matrices, solution of the resulted system of linear equations, calculations of the unbalanced loads, displacements, stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.

  8. Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling

    SciTech Connect

    Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis

    2001-10-01

    A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.
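
    (Illustrative sketch of variance-based global sensitivity indices computed from a cheap surrogate, in the spirit described above; the toy response and input names are assumptions, not the study's HES model.)

        import numpy as np

        def surrogate(x):
            """Hypothetical surrogate: columns might represent battery size,
            renewable share, and demand scale (names purely illustrative)."""
            return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(3)
        N, d = 50_000, 3
        A = rng.uniform(-np.pi, np.pi, size=(N, d))
        B = rng.uniform(-np.pi, np.pi, size=(N, d))
        fA, fB = surrogate(A), surrogate(B)
        var_total = np.var(np.concatenate([fA, fB]))

        # Saltelli-style pick-freeze estimate of the first-order Sobol index S_i.
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            S_i = np.mean(fB * (surrogate(ABi) - fA)) / var_total
            print(f"S_{i} ~ {S_i:.3f}")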

  9. Three-Dimensional Simulation And Design Sensitivity Analysis Of The Injection Molding Process

    NASA Astrophysics Data System (ADS)

    Ilinca, Florin; Hétu, Jean-François

    2004-06-01

    Getting the proper combination of different process parameters such as injection speed, melt temperature and mold temperature is important in obtaining a part that minimizes warpage and has the desired mechanical properties. Very often a successful design in injection molding comes at the end of a long trial-and-error process. Design Sensitivity Analysis (DSA) can help molders improve the design and can produce substantial investment savings in both time and money. This paper investigates the ability of the sensitivity analysis to drive an optimization tool in order to improve the quality of the injected part. The paper presents the solution of the filling stage of the injection molding process by a 3D finite element solution algorithm. The sensitivity of the solution with respect to different process parameters is computed using the continuous sensitivity equation method. Solutions are shown for the non-isothermal filling of a rectangular plate with a polymer melt behaving as a non-Newtonian fluid. The paper presents the equations for the sensitivity of the velocity, pressure and temperature and their solution by finite elements. Sensitivities of the solution with respect to the injection speed, the melt and mold temperatures are shown.

  10. Adjoint design sensitivity analysis of reduced atomic systems using generalized Langevin equation for lattice structures

    SciTech Connect

    Kim, Min-Geun; Jang, Hong-Lae; Cho, Seonho

    2013-05-01

    An efficient adjoint design sensitivity analysis method is developed for reduced atomic systems. A reduced atomic system and the adjoint system are constructed in a locally confined region, utilizing generalized Langevin equation (GLE) for periodic lattice structures. Due to the translational symmetry of lattice structures, the size of time history kernel function that accounts for the boundary effects of the reduced atomic systems could be reduced to a single atom’s degrees of freedom. For the problems of highly nonlinear design variables, the finite difference method is impractical for its inefficiency and inaccuracy. However, the adjoint method is very efficient regardless of the number of design variables since one additional time integration is required for the adjoint GLE. Through numerical examples, the derived adjoint sensitivity turns out to be accurate and efficient through the comparison with finite difference sensitivity.

  11. Sensitivity Analysis of Design Variables to Optimize the Performance of the USV

    NASA Astrophysics Data System (ADS)

    Cao, Xue; Wei, Zifan; Yang, Songlin; Wen, Yiyan

    Optimization is an important part of the design of an Unmanned Surface Vehicle (USV). In this paper, considering the rapidity, maneuverability, seakeeping and rollover resistance performance of the USV, the design variables of the USV optimization system have been determined and a mathematical model for comprehensive optimization of the USV has been established. Integrated optimization design with multiple objectives and multiple constraints is achieved by computer programs. However, each design variable influences the final optimization results to a different degree; in order to determine the degree of influence of each design variable and identify the key variables for further optimization analysis, a sensitivity study of the design variables with respect to the optimization is crucial. To solve this problem, a C++ program based on a genetic algorithm has been written, and five discrete variables have been selected to study the sensitivity of the optimization. The results showed that different design variables have different effects on the optimization. The length of the ship and the speed of the propeller have the greatest effect on the total objective function. The speed of the propeller has a greater impact on both rapidity and seakeeping. Maneuverability is more sensitive to the length of ship L, the molded breadth of ship B, the draft of ship T and the design speed Vs. Also, the molded breadth B has the greatest effect on the rollover resistance.

  12. Stratospheric Airship Design Sensitivity

    NASA Astrophysics Data System (ADS)

    Smith, Ira Steve; Fortenberry, Michael; Noll, James; Perry, William

    2012-07-01

    The concept of a stratospheric or high altitude powered platform has been around almost as long as stratospheric free balloons. Airships are defined as Lighter-Than-Air (LTA) vehicles with propulsion and steering systems. Over the past five (5) years there has been an increased interest by the U. S. Department of Defense as well as commercial enterprises in airships at all altitudes. One of these interests is in the area of stratospheric airships. Whereas DoD is primarily interested in things that look down, such vehicles also offer a platform for science applications, both downward and outward looking. Designing airships to operate in the stratosphere is very challenging due to the extreme high altitude environment. It is significantly different from low altitude airship designs such as the familiar advertising or tourism airships or blimps. The stratospheric airship design is very dependent on the specific application and the particular requirements levied on the vehicle with mass and power limits. The design is a complex iterative process and is sensitive to many factors. In an effort to identify the key factors that have the greatest impacts on the design, a parametric analysis of a simplified airship design has been performed. The results of these studies will be presented.

  13. Generalized Timoshenko modelling of composite beam structures: sensitivity analysis and optimal design

    NASA Astrophysics Data System (ADS)

    Augusta Neto, Maria; Yu, Wenbin; Pereira Leal, Rogerio

    2008-10-01

    This article describes a new approach to design the cross-section layer orientations of composite laminated beam structures. The beams are modelled with realistic cross-sectional geometry and material properties instead of a simplified model. The VABS (the variational asymptotic beam section analysis) methodology is used to compute the cross-sectional model for a generalized Timoshenko model, which was embedded in the finite element solver FEAP. Optimal design is performed with respect to the layers' orientation. The design sensitivity analysis is analytically formulated and implemented. The direct differentiation method is used to evaluate the response sensitivities with respect to the design variables. Thus, the design sensitivities of the Timoshenko stiffness computed by VABS methodology are imbedded into the modified VABS program and linked to the beam finite element solver. The modified method of feasible directions and sequential quadratic programming algorithms are used to seek the optimal continuous solution of a set of numerical examples. The buckling load associated with the twist-bend instability of cantilever composite beams, which may have several cross-section geometries, is improved in the optimization procedure.

  14. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  15. A sensitivity analysis of hazardous waste disposal site climatic and soil design parameters using HELP3

    SciTech Connect

    Adelman, D.D.; Stansbury, J.

    1997-12-31

    The Resource Conservation and Recovery Act (RCRA) Subtitle C, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and subsequent amendments have formed a comprehensive framework to deal with hazardous wastes on the national level. Key to this waste management is guidance on design (e.g., cover and bottom leachate control systems) of hazardous waste landfills. The objective of this research was to investigate the sensitivity of leachate volume at hazardous waste disposal sites to climatic, soil cover, and vegetative cover (Leaf Area Index) conditions. The computer model HELP3, which has the capability to simulate the double bottom liner systems called for at hazardous waste disposal sites, was used in the analysis. HELP3 was used to model 54 combinations of climatic conditions, disposal site soil surface curve numbers, and leaf area index values to investigate how sensitive disposal site leachate volume was to these three variables. Results showed that leachate volume from the bottom double liner system was not sensitive to these parameters. However, the cover liner system leachate volume was quite sensitive to climatic conditions and less sensitive to Leaf Area Index and curve number values. Since humid locations had considerably more cover liner system leachate volume than arid locations, different design standards may be appropriate for humid conditions than for arid conditions.

  16. Design of a smart magnetic sensor by sensitivity based covariance analysis

    NASA Astrophysics Data System (ADS)

    Krishna Kumar, P. T.

    2001-08-01

    We use the technique of sensitivity based covariance analysis to design a smart magnetic sensor for depth profile studies where an NMR flux meter is used as the sensor in a Van de Graaff accelerator (VGA). The minimum detection limit of any sensor tends to the systematic uncertainty, and, using this phenomenology, we estimated the upper and lower bounds on the correlated systematic uncertainties in the proton energy accelerated by the VGA using the technique of determinant inequalities. Knowledge of the bounds would help in the design of a smart magnetic sensor with reduced correlated systematic uncertainty.

  17. Sensitivity Analysis of the Thermal Response of 9975 Packaging Using Factorial Design Methods

    SciTech Connect

    Gupta, Narendra K.

    2005-10-31

    A method is presented for using the statistical design of experiment (2^k Factorial Design) technique in the sensitivity analysis of the thermal response (temperature) of the 9975 radioactive material packaging where multiple thermal properties of the impact absorbing and fire insulating material Celotex and certain boundary conditions are subject to uncertainty. The 2^k Factorial Design method is very efficient in the use of available data and is capable of analyzing the impact of main variables (Factors) and their interactions on the component design. The 9975 design is based on detailed finite element (FE) analyses and extensive proof testing to meet the design requirements given in 10CFR71 [1]. However, the FE analyses use Celotex thermal properties that are based on published data and limited experiments. Celotex is an orthotropic material that is used in the home building industry. Its thermal properties are prone to variation due to manufacturing and fabrication processes, and due to long environmental exposure. This paper will evaluate the sensitivity of variations in thermal conductivity of the Celotex, convection coefficient at the drum surface, and drum emissivity (herein called Factors) on the thermal response of 9975 packaging under Normal Conditions of Transport (NCT). Application of this methodology will ascertain the robustness of the 9975 design and it can lead to more specific and useful understanding of the effects of various Factors on 9975 performance.
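
    (Illustrative 2^k factorial screen with a toy response, not the 9975 finite element model: coded -1/+1 factor levels, main effects from the usual contrast formula, and one two-factor interaction.)

        import numpy as np
        from itertools import product

        runs = np.array(list(product([-1.0, 1.0], repeat=3)))   # 2^3 = 8 runs

        def peak_temperature(k, h, eps):
            """Hypothetical thermal response at coded levels of Celotex
            conductivity k, convection coefficient h, and emissivity eps."""
            return 180.0 + 12.0 * k - 6.0 * h - 3.0 * eps + 1.5 * k * h

        y = np.array([peak_temperature(*row) for row in runs])
        n = len(y)

        for i, name in enumerate(["conductivity", "convection", "emissivity"]):
            main_effect = 2.0 / n * np.sum(runs[:, i] * y)
            print(f"{name:12s} main effect: {main_effect:+.1f} C")

        # Interaction column = elementwise product of the two factor columns.
        kh = 2.0 / n * np.sum(runs[:, 0] * runs[:, 1] * y)
        print(f"conductivity x convection interaction: {kh:+.1f} C")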

  18. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  19. A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method

    NASA Astrophysics Data System (ADS)

    Chen, Leilei; Zheng, Changjun; Chen, Haibo

    2013-09-01

    This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.

  20. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

    A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.

  1. Design of 3-D Nacelle near Flat-Plate Wing Using Multiblock Sensitivity Analysis (ADOS)

    NASA Technical Reports Server (NTRS)

    Eleshaky, Mohamed E.; Baysal, Oktay

    1994-01-01

    One of the major design tasks involved in reducing aircraft drag is the integration of the engine nacelles and airframe. With this impetus, nacelle shapes with and without the presence of a flat-plate wing nearby were optimized. This also served as a demonstration of the 3-D version of the recently developed aerodynamic design optimization methodology using sensitivity analysis, ADOS. The required flow analyses were obtained by solving the three-dimensional, compressible, thin-layer Navier-Stokes equations using an implicit, upwind-biased, finite volume scheme. The sensitivity analyses were performed using the preconditioned version of the SADD scheme (sensitivity analysis on domain decomposition). In addition to demonstrating the present method's capability for automatic optimization, the results offered some insight into two important issues related to optimizing the shapes of multicomponent configurations in close proximity. First, inclusion of the mutual interference between the components resulted in a different shape as opposed to shaping an isolated component. Secondly, exclusion of the viscous effects compromised not only the flow physics but also the optimized shapes even for isolated components.

  2. Design-oriented thermoelastic analysis, sensitivities, and approximations for shape optimization of aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Bhatia, Manav

    Aerospace structures operate under extreme thermal environments. Hot external aerothermal environment at high Mach number flight leads to high structural temperatures. At the same time, cold internal cryogenic-fuel-tanks and thermal management concepts like Thermal Protection System (TPS) and active cooling result in a high temperature gradient through the structure. Multidisciplinary Design Optimization (MDO) of such structures requires a design-oriented approach to this problem. The broad goal of this research effort is to advance the existing state of the art towards MDO of large scale aerospace structures. The components required for this work are the sensitivity analysis formulation encompassing the scope of the physical phenomena being addressed, a set of efficient approximations to cut-down the required CPU cost, and a general purpose design-oriented numerical analysis tool capable of handling problems of this scope. In this work finite element discretization has been used to solve the conduction partial differential equations and the Poljak method has been used to discretize the integral equations for internal cavity radiation. A methodology has been established to couple the conduction finite element analysis to the internal radiation analysis. This formulation is then extended for sensitivity analysis of heat transfer and coupled thermal-structural problems. The most CPU intensive operations in the overall analysis have been identified, and approximation methods have been proposed to reduce the associated CPU cost. Results establish the effectiveness of these approximation methods, which lead to very high savings in CPU cost without any deterioration in the results. The results presented in this dissertation include two cases: a hexahedral cavity with internal and external radiation with conducting walls, and a wing box which is geometrically similar to the orbiter wing.

  3. Mesoscale ensemble sensitivity analysis for predictability studies and observing network design in complex terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua

    2013-04-01

    Ensemble sensitivity analysis (ESA) is emerging as a viable alternative to adjoint sensitivity. Several open issues face ESA for forecasts dominated by mesoscale phenomena, including (1) sampling error arising from finite-sized ensembles causing over-estimated sensitivities, and (2) violation of linearity assumptions for strongly nonlinear flows. In an effort to use ESA for predictability studies and observing network design in complex terrain, we present results from experiments designed to address these open issues. Sampling error in ESA arises in two places. First, when hypothetical observations are introduced to test the sensitivity estimates for linearity. Here the same localization that was used in the filter itself can be simply applied. Second and more critical, localization should be considered within the sensitivity calculations. Sensitivity to hypothetical observations, estimated without re-running the ensemble, includes regression of a sample of a final-time (forecast) metric onto a sample of initial states. Derivation to include localization results in two localization coefficients (or factors) applied in separate regression steps. Because the forecast metric is usually a sum, and can also include a sum over a spatial region and multiple physical variables, a spatial localization function is difficult to specify. We present results from experiments to empirically estimate localization factors for ESA to test hypothetical observations for mesoscale data assimilation in complex terrain. Localization factors are first derived for an ensemble filter following the empirical localization methodology. Sensitivities for a fog event over Salt Lake City, and a Colorado downslope wind event, are tested for linearity by approximating assimilation of perfect observations at points of maximum sensitivity, both with and without localization. Observation sensitivity is then estimated, with and without localization, and tested for linearity. The validity of the
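
    (Illustrative sketch of the basic ensemble-sensitivity regression with a simple covariance taper, not the study's code or its empirically estimated localization: for each initial-condition element, the sensitivity of a scalar forecast metric J is the ensemble regression coefficient cov(x_i, J) / var(x_i).)

        import numpy as np

        rng = np.random.default_rng(4)
        n_members, n_points = 50, 400

        X0 = rng.standard_normal((n_members, n_points))   # initial-condition ensemble
        true_weights = np.zeros(n_points)
        true_weights[:20] = 1.0                           # J really depends on 20 points
        J = X0 @ true_weights + 0.5 * rng.standard_normal(n_members)

        Xa = X0 - X0.mean(axis=0)
        Ja = J - J.mean()
        sensitivity = (Xa.T @ Ja / (n_members - 1)) / Xa.var(axis=0, ddof=1)

        # Illustrative localization: taper regression coefficients with distance
        # from the metric region (first 20 points); a Gaspari-Cohn taper is typical.
        distance = np.arange(n_points)
        sensitivity_loc = np.exp(-0.5 * (distance / 50.0) ** 2) * sensitivity

        print("mean |sensitivity| far from metric region, raw vs localized:",
              np.abs(sensitivity[200:]).mean(), np.abs(sensitivity_loc[200:]).mean())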

  4. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    SciTech Connect

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.
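    For readers unfamiliar with variance-based global sensitivity analysis of the kind applied here, the sketch below estimates first-order Sobol indices with a pick-freeze (Saltelli-style) Monte Carlo estimator. The cost-of-energy function, parameter names, and ranges are invented stand-ins, not the NREL reference turbine or plant models used in the paper.

```python
import numpy as np

# Minimal sketch of a variance-based (Sobol) global sensitivity analysis of a
# placeholder plant-cost model.
def cost_of_energy(x):
    rotor_diam, hub_height, rated_power = x.T
    aep = 1e-3 * rotor_diam**2 * np.sqrt(hub_height)        # toy energy production
    capex = 900.0 * rated_power + 50.0 * rotor_diam**2      # toy capital cost
    return capex / aep                                      # toy cost-of-energy metric

rng = np.random.default_rng(1)
lo = np.array([80.0, 70.0, 3000.0])      # rotor diameter [m], hub height [m], rating [kW]
hi = np.array([140.0, 120.0, 8000.0])
N, d = 4096, 3
A = lo + (hi - lo) * rng.random((N, d))
B = lo + (hi - lo) * rng.random((N, d))
fA, fB = cost_of_energy(A), cost_of_energy(B)
var_y = np.var(np.concatenate([fA, fB]), ddof=1)

for i, name in enumerate(["rotor_diam", "hub_height", "rated_power"]):
    ABi = A.copy(); ABi[:, i] = B[:, i]                     # "pick-freeze" sample
    S1 = np.mean(fB * (cost_of_energy(ABi) - fA)) / var_y   # first-order Sobol index
    print(f"S1[{name}] = {S1:.3f}")
```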

  5. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady-state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool. To demonstrate the ability of this procedure to analyze and design complex configurations of
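    The spring-analogy grid adaptation mentioned above can be illustrated on a very small synthetic mesh: each edge acts as a tension spring whose stiffness is inversely proportional to its length, boundary nodes carry the prescribed surface deformation, and interior nodes are relaxed iteratively. This is a sketch of the general technique only, not the ADIFOR-differentiated implementation used in the work.

```python
import numpy as np

# Minimal sketch of spring-analogy mesh deformation on a tiny synthetic mesh.
nodes = np.array([[0., 0.], [1., 0.], [2., 0.],
                  [0., 1.], [1., 1.], [2., 1.]])
edges = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5), (0, 4), (1, 5)]
boundary = {0, 2, 3, 5}                      # boundary nodes: held or prescribed
nodes_new = nodes.copy()
nodes_new[2] += [0.0, 0.3]                   # prescribed surface deformation
nodes_new[5] += [0.0, 0.3]

# Spring stiffness inversely proportional to the original edge length.
stiff = {e: 1.0 / np.linalg.norm(nodes[e[0]] - nodes[e[1]]) for e in edges}
for _ in range(200):                         # Jacobi relaxation of interior nodes
    upd = nodes_new.copy()
    for i in range(len(nodes)):
        if i in boundary:
            continue
        ks = [(stiff[e], e[0] if e[1] == i else e[1])
              for e in edges if i in e]      # (stiffness, neighbor) pairs
        upd[i] = sum(k * nodes_new[j] for k, j in ks) / sum(k for k, _ in ks)
    nodes_new = upd

print(np.round(nodes_new, 3))                # interior nodes follow the boundary motion
```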

  6. Sensitivity analysis of a dry-processed Candu fuel pellet's design parameters

    SciTech Connect

    Choi, Hangbok; Ryu, Ho Jin

    2007-07-01

    Sensitivity analysis was carried out in order to investigate the effect of a fuel pellet's design parameters on the performance of a dry-processed Canada deuterium uranium (CANDU) fuel and to suggest the optimum design modifications. Under normal operating conditions, a dry-processed fuel has a higher internal pressure and plastic strain due to a higher fuel centerline temperature when compared with a standard natural uranium CANDU fuel. Under the condition that the fuel bundle dimensions do not change, sensitivity calculations were performed on the fuel's design parameters, such as the axial gap, dish depth, gap clearance, and plenum volume. The results showed that the internal pressure and plastic strain of the cladding were most effectively reduced if a fuel element's plenum volume was increased. More specifically, the internal pressure and plastic strain of the dry-processed fuel satisfied the design limits of a standard CANDU fuel when the plenum volume was increased by one half of a pellet, 0.5 mm{sup 3}/K. (authors)

  7. Design and implementation of a context-sensitive, flow-sensitive activity analysis algorithm for automatic differentiation.

    SciTech Connect

    Shin, J.; Malusare, P.; Hovland, P. D.; Mathematics and Computer Science

    2008-01-01

    Automatic differentiation (AD) has been expanding its role in scientific computing. While several AD tools have been actively developed and used, a wide range of problems remain to be solved. Activity analysis allows AD tools to generate derivative code for fewer variables, leading to a faster run time of the output code. This paper describes a new context-sensitive, flow-sensitive (CSFS) activity analysis, which is developed by extending an existing context-sensitive, flow-insensitive (CSFI) activity analysis. Our experiments with eight benchmarks show that the new CSFS activity analysis is more than 27 times slower but reduces 8 overestimations for the MIT General Circulation Model (MITgcm) and 1 for an ODE solver (c2) compared with the existing CSFI activity analysis implementation. Although the number of reduced overestimations looks small, the additionally identified passive variables may significantly reduce tedious human effort in maintaining a large code base such as MITgcm.

  8. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems be solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.

  9. Design sensitivity analysis of three-dimensional body by boundary element method and its application to shape optimization

    NASA Astrophysics Data System (ADS)

    Yamazaki, Koetsu; Sakamoto, Jiro; Kitano, Masami

    1993-02-01

    A design sensitivity calculation technique based on the implicit differentiation method is formulated for isoparametric boundary elements for three-dimensional (3D) shape optimization problems. The practical sensitivity equations for boundary displacements and stresses are derived, and the efficiency and accuracy of the technique are compared with the semi-analytic method by implementing the sensitivity analysis of typical and basic shape design problems numerically. The sensitivity calculation technique is then applied to the minimum weight design problems of 3D bodies under stress constraints, such as the shape optimization of the ellipsoidal cavity in a cube and the connecting rod, where the Taylor series approximation, based on the boundary element sensitivity analysis at the current design point, is adopted for the efficient implementation of the optimization.

  10. Application of design sensitivity analysis for greater improvement on machine structural dynamics

    NASA Technical Reports Server (NTRS)

    Yoshimura, Masataka

    1987-01-01

    Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to the changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given for demonstrating the applicability of the proposed methods.

  11. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
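    A minimal sketch of the incremental (delta/correction) form described above is given below for a generic linear sensitivity system: the residual is evaluated with the exact operator, while only an approximate operator is inverted at each step. The matrices are synthetic and diagonally dominant for illustration, not an actual flow Jacobian or the spatially split approximate factorization.

```python
import numpy as np

# Incremental (delta / correction) form for a linear system A x = b:
#   solve M * dx = b - A x,  then  x <- x + dx,
# where M is an approximate, cheaper-to-invert operator.
rng = np.random.default_rng(2)
n = 200
A = np.eye(n) * 4.0 + 0.5 * np.diag(rng.random(n - 1), 1) \
                    + 0.5 * np.diag(rng.random(n - 1), -1)
b = rng.standard_normal(n)

M = np.diag(np.diag(A))                      # crude approximate factorization of A
x = np.zeros(n)
for k in range(100):
    r = b - A @ x                            # residual uses the exact operator
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
    dx = np.linalg.solve(M, r)               # only the approximate operator is inverted
    x += dx

print(f"converged in {k} iterations; error = {np.linalg.norm(A @ x - b):.2e}")
```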

  12. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
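    For illustration of the response-surface step only, the sketch below fits a second-order RS model to sampled responses by linear least squares; the sampled function, noise level, and number of variables are invented stand-ins rather than the 23-variable buckling problem in the report.

```python
import numpy as np

# Minimal sketch of fitting a second-order response surface (RS) model
#   y ~ b0 + sum_i b_i x_i + sum_i b_ii x_i^2 + sum_{i<j} b_ij x_i x_j
# to sampled response evaluations by linear least squares.
rng = np.random.default_rng(3)
n_samples, n_vars = 60, 3
X = rng.uniform(-1.0, 1.0, size=(n_samples, n_vars))
y = 5.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.8 * X[:, 0] * X[:, 2] \
    + 0.05 * rng.standard_normal(n_samples)          # synthetic "true" response + noise

def quad_basis(X):
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]                      # linear terms
    cols += [X[:, i] ** 2 for i in range(X.shape[1])]                 # pure quadratics
    cols += [X[:, i] * X[:, j] for i in range(X.shape[1])
             for j in range(i + 1, X.shape[1])]                       # cross terms
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(quad_basis(X), y, rcond=None)
print("fitted RS coefficients:", np.round(coef, 3))
```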

  13. Installation methods to perform subsea tie-in of pipelines: Sensitivity analysis versus design parameters

    SciTech Connect

    Radicioni, A.; Corbetta, G.; D`Aloisio, G.; Bjoerset, A.

    1995-12-31

    The development of a subsea field requires, among other things, a definition of the methods for laying and tying in pipelines, flow-lines, and umbilicals. The selection of a particular method is the result of a detailed analysis where advantages and drawbacks are highlighted and weighted versus global costs and reliability. During the conceptual study for the assessment of possible different solutions, engineering tools can be used as a time-saving solution for the selection of the best installation method. Different installation methods to perform tie-in have been analyzed, and simplified mathematical models have been used to better understand the behavior of the pipeline during its installation. The following methods were considered: first-end pull-in, second-end pull-in, deflect-to-connect. This paper summarizes the results achieved and describes the design tools prepared during the development of the subject sensitivity analysis; such tools can be used as a helpful basis for the selection or design of a tie-in system.

  14. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented here focus primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  15. Sensitivity Analysis and Mitigation with Applications to Ballistic and Low-thrust Trajectory Design

    NASA Astrophysics Data System (ADS)

    Alizadeh, Iman

    The ever-increasing desire to expand space mission capabilities within the limited budgets of space industries requires new approaches to the old problem of spacecraft trajectory design. For example, recent initiatives for space exploration involve developing new tools to design low-cost, fail-safe trajectories to visit several potential destinations beyond our celestial neighborhood such as Jupiter's moons, asteroids, etc. Designing and navigating spacecraft trajectories to reach these destinations safely are complex and challenging. In particular, fundamental questions of orbital stability imposed by planetary protection requirements are not easily taken into account by standard optimal control schemes. The event of temporary engine loss or an unexpected missed thrust can indeed quickly lead to impact with planetary bodies or other unrecoverable trajectories. While electric propulsion technology provides superior efficiency compared to chemical engines, the very low control authority and engine performance degradation can impose higher risk to the mission in strongly perturbed orbital environments. The risk is due to the complex gravitational field and its associated chaotic dynamics, which cause large navigation dispersions in a short time if left uncontrolled. Moreover, in these situations it can be outside the low-thrust propulsion system's capability to correct the spacecraft trajectory in a reasonable time frame. These concerns can lead to complete or partial mission failure or even an infeasible mission concept at the early design stage. The goal of this research is to assess and increase orbital stability of ballistic and low-thrust transfer trajectories in multi-body systems. In particular, novel techniques are presented to characterize sensitivity and improve recovery characteristics of ballistic and low-thrust trajectories in unstable orbital environments. The techniques developed are based on perturbation analysis around ballistic trajectories to

  16. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
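    The cost argument above (design sensitivities from only two edge plasma simulations, independent of the number of design variables) is the standard adjoint identity; the sketch below illustrates it for a generic discretized linear state equation with a quadratic objective. The operators, objective, and design parameterization are synthetic stand-ins, not the edge plasma model or the continuous adjoint framework of the paper.

```python
import numpy as np

# Adjoint sensitivity sketch: for a state equation R(u, p) = A(p) u - f = 0 and
# objective J(u) = u^T u, one adjoint solve A^T lam = dJ/du gives
#   dJ/dp_k = -lam^T (dA/dp_k) u   for every design variable p_k,
# i.e. one forward plus one adjoint solve, independent of the number of p_k.
rng = np.random.default_rng(4)
n, n_design = 50, 8
A0 = np.eye(n) * 5.0 + 0.1 * rng.standard_normal((n, n))
dA = [0.05 * rng.standard_normal((n, n)) for _ in range(n_design)]   # dA/dp_k
f = rng.standard_normal(n)
p = np.zeros(n_design)

A = A0 + sum(pk * dAk for pk, dAk in zip(p, dA))
u = np.linalg.solve(A, f)                          # forward (state) solve
dJdu = 2.0 * u                                     # J = u^T u  ->  dJ/du = 2u
lam = np.linalg.solve(A.T, dJdu)                   # single adjoint solve
grad = np.array([-lam @ (dAk @ u) for dAk in dA])  # all design sensitivities at once

# Finite-difference check on the first design variable
eps = 1e-6
u_eps = np.linalg.solve(A + eps * dA[0], f)
print(grad[0], (u_eps @ u_eps - u @ u) / eps)
```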

  17. Three-dimensional aerodynamic design optimization using discrete sensitivity analysis and parallel computing

    NASA Astrophysics Data System (ADS)

    Oloso, Amidu Olawale

    A hybrid automatic differentiation/incremental iterative method was implemented in the general purpose advanced computational fluid dynamics code (CFL3D Version 4.1) to yield a new code (CFL3D.ADII) that is capable of computing consistently discrete first order sensitivity derivatives for complex geometries. With the exception of unsteady problems, the new code retains all the useful features and capabilities of the original CFL3D flow analysis code. The superiority of the new code over a carefully applied method of finite-differences is demonstrated. A coarse grain, scalable, distributed-memory, parallel version of CFL3D.ADII was developed based on "derivative stripmining". In this data-parallel approach, an identical copy of CFL3D.ADII is executed on each processor with different derivative input files. The effect of communication overhead on the overall parallel computational efficiency is negligible. However, the fraction of CFL3D.ADII duplicated on all processors has significant impact on the computational efficiency. To reduce the large execution time associated with the sequential 1-D line search in gradient-based aerodynamic optimization, an alternative parallel approach was developed. The execution time of the new approach was reduced effectively to that of one flow analysis, regardless of the number of function evaluations in the 1-D search. The new approach was found to yield design results that are essentially identical to those obtained from the traditional sequential approach but at much smaller execution time. The parallel CFL3D.ADII and the parallel 1-D line search are demonstrated in shape improvement studies of a realistic High Speed Civil Transport (HSCT) wing/body configuration represented by over 100 design variables and 200,000 grid points in inviscid supersonic flow on the 16 node IBM SP2 parallel computer at the Numerical Aerospace Simulation (NAS) facility, NASA Ames Research Center. In addition to making the handling of such a large

  18. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  19. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  20. Pricing index-based catastrophe bonds: Part 2: Object-oriented design issues and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Unger, André J. A.

    2010-02-01

    This work is the second installment in a two-part series, and focuses on object-oriented programming methods to implement an augmented-state variable approach to aggregate the PCS index and introduce the Bermudan-style call feature into the proposed CAT bond model. The PCS index is aggregated quarterly using a discrete Asian running-sum formulation. The resulting aggregate PCS index augmented-state variable is used to specify the payoff (principal) on the CAT bond based on reinsurance layers. The purpose of the Bermudan-style call option is to allow the reinsurer to minimize their interest rate risk exposure on making fixed coupon payments under prevailing interest rates. A sensitivity analysis is performed to determine the impact of uncertainty in the frequency and magnitude of hurricanes on the price of the CAT bond. Results indicate that while the CAT bond is highly sensitive to the natural variability in the frequency of landfalling hurricanes between El Niño and non-El Niño years, it remains relatively insensitive to uncertainty in the magnitude of damages. In addition, results indicate that the maximum price of the CAT bond is insensitive to whether it is engineered to cover low frequency high magnitude events in a 'high' reinsurance layer relative to high frequency low magnitude events in a 'low' reinsurance layer. Also, while it is possible for the reinsurer to minimize their interest rate risk exposure on the fixed coupon payments, the impact of this risk on the price of the CAT bond appears small relative to the natural variability in the CAT bond price, and consequently catastrophic risk, due to uncertainty in the frequency and magnitude of landfalling hurricanes.

  1. Physicochemical design and analysis of self-propelled objects that are characteristically sensitive to environments.

    PubMed

    Nakata, Satoshi; Nagayama, Masaharu; Kitahata, Hiroyuki; Suematsu, Nobuhiko J; Hasegawa, Takeshi

    2015-04-28

    The development of self-propelled motors that mimic biological motors is an important challenge for transporting either themselves or some material within a small space, since biological systems exhibit high autonomy and various types of responses, such as taxis and swarming. In this perspective, we review non-living systems that behave like living matter. We especially focus on nonlinearity to enhance the autonomy and response of the system, since characteristic nonlinear phenomena, such as oscillation, synchronization, pattern formation, bifurcation, and hysteresis, are coupled to self-motion whose driving force is the difference in interfacial tension. Mathematical modelling based on reaction-diffusion equations and equations of motion, as well as physicochemical analysis from the point of view of the molecular structure, are also important for the design of non-living motors that mimic living motors. PMID:25826144

  2. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  3. Sensitivity Test Analysis

    Energy Science and Technology Software Center (ESTSC)

    1992-02-20

    SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.

  4. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, the effect of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.

  5. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO{sub 2} with {sup 235}U enrichments {>=}5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  6. Fusion-neutron-yield, activation measurements at the Z accelerator: Design, analysis, and sensitivity

    NASA Astrophysics Data System (ADS)

    Hahn, K. D.; Cooper, G. W.; Ruiz, C. L.; Fehl, D. L.; Chandler, G. A.; Knapp, P. F.; Leeper, R. J.; Nelson, A. J.; Smelser, R. M.; Torres, J. A.

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼ 40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  7. Fusion-neutron-yield, activation measurements at the Z accelerator: design, analysis, and sensitivity.

    PubMed

    Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r(2) decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm(2) and is ∼ 40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects. PMID:24784607

  8. Fusion-neutron-yield, activation measurements at the Z accelerator: Design, analysis, and sensitivity

    SciTech Connect

    Hahn, K. D.; Ruiz, C. L.; Fehl, D. L.; Chandler, G. A.; Knapp, P. F.; Smelser, R. M.; Torres, J. A.; Cooper, G. W.; Nelson, A. J.; Leeper, R. J.

    2014-04-15

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r{sup 2} decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm{sup 2} and is ∼ 40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  9. Sensitivity Analysis Without Assumptions

    PubMed Central

    VanderWeele, Tyler J.

    2016-01-01

    Unmeasured confounding may undermine the validity of causal inference with observational studies. Sensitivity analysis provides an attractive way to partially circumvent this issue by assessing the potential influence of unmeasured confounding on causal conclusions. However, previous sensitivity analysis approaches often make strong and untestable assumptions such as having an unmeasured confounder that is binary, or having no interaction between the effects of the exposure and the confounder on the outcome, or having only one unmeasured confounder. Without imposing any assumptions on the unmeasured confounder or confounders, we derive a bounding factor and a sharp inequality such that the sensitivity analysis parameters must satisfy the inequality if an unmeasured confounder is to explain away the observed effect estimate or reduce it to a particular level. Our approach is easy to implement and involves only two sensitivity parameters. Surprisingly, our bounding factor, which makes no simplifying assumptions, is no more conservative than a number of previous sensitivity analysis techniques that do make assumptions. Our new bounding factor implies not only the traditional Cornfield conditions that both the relative risk of the exposure on the confounder and that of the confounder on the outcome must satisfy but also a high threshold that the maximum of these relative risks must satisfy. Furthermore, this new bounding factor can be viewed as a measure of the strength of confounding between the exposure and the outcome induced by a confounder. PMID:26841057
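    As a pointer for readers, the bounding-factor inequality summarized above can be written compactly; the notation below (RR_EU for the maximum risk ratio relating the exposure to the unmeasured confounder, RR_UD for the maximum risk ratio relating the confounder to the outcome) is assumed here and may not match the paper's exact symbols.

```latex
% Sketch of the bounding-factor inequality summarized above; notation assumed,
% not copied from the paper.
\[
  \mathrm{BF}_U \;=\; \frac{RR_{EU}\,RR_{UD}}{RR_{EU} + RR_{UD} - 1},
  \qquad
  RR_{\mathrm{true}} \;\ge\; \frac{RR_{\mathrm{obs}}}{\mathrm{BF}_U},
\]
% so an unmeasured confounder can fully explain away an observed risk ratio
% RR_obs only if the bounding factor satisfies BF_U >= RR_obs.
```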

  10. Sensitivity analysis of SPURR

    SciTech Connect

    Witholder, R.E.

    1980-04-01

    The Solar Energy Research Institute has conducted a limited sensitivity analysis on a System for Projecting the Utilization of Renewable Resources (SPURR). The study utilized the Domestic Policy Review scenario for SPURR agricultural and industrial process heat and utility market sectors. This sensitivity analysis determines whether variations in solar system capital cost, operation and maintenance cost, and fuel cost (biomass only) correlate with intuitive expectations. The results of this effort contribute to a much larger issue: validation of SPURR. Such a study has practical applications for engineering improvements in solar technologies and is useful as a planning tool in the R and D allocation process.

  11. Design tradeoff studies and sensitivity analysis, appendices B1 - B4. [hybrid electric vehicles

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Documentation is presented for a program which separately computes fuel and energy consumption for the two modes of operation of a hybrid electric vehicle. The distribution of daily travel is specified as input data as well as the weights which the component driving cycles are given in each of the composite cycles. The possibility of weight reduction through the substitution of various materials is considered as well as the market potential for hybrid vehicles. Data relating to battery compartment weight distribution and vehicle handling analysis is tabulated.

  12. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulak, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions such as soil composition, large deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in the commercial code environment utilizing the large deformation modeling capability of Smooth Particle Hydrodynamics (SPH) meshfree methods. The nominal, benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.

  13. Optimizing the design and analysis of cryogenic semiconductor dark matter detectors for maximum sensitivity

    SciTech Connect

    Pyle, Matt Christopher

    2012-01-01

    In this thesis, we illustrate how the complex E-field geometry produced by interdigitated electrodes at alternating voltage biases naturally encodes 3D fiducial volume information into the charge and phonon signals and thus is a natural geometry for our next generation dark matter detectors. Secondly, we will study in depth the physics of import to our devices, including transition edge sensor dynamics, quasi-particle dynamics in our Al collection fins, and phonon physics in the crystal itself, so that we can both understand the performance of our previous CDMS II device and optimize the design of our future devices. Of interest to the broader physics community is the derivation of the ideal athermal phonon detector resolution and its Tc^3 scaling behavior, which suggests that the athermal phonon detector technology developed by CDMS could also be used to discover coherent neutrino scattering and search for non-standard neutrino interactions and sterile neutrinos. These proposed resolution-optimized devices can also be used in searches for exotic MeV-GeV dark matter as well as novel background-free searches for 8 GeV light WIMPs.

  14. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature environments, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.

  15. RESRAD parameter sensitivity analysis

    SciTech Connect

    Cheng, J.J.; Yu, C.; Zielen, A.J.

    1991-08-01

    Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.

  16. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small changes in the dimensions between the naval canister and the inner vessel; in these dimensions, the Naval Long waste package and Naval Short waste package are similar. Therefore, only the Naval Long waste package is used in this calculation, which is based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  17. A sensitivity analysis for the F100 turbofan engine using the multivariable Nyquist array. [feedback control design

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.; Borysiak, M. L.

    1978-01-01

    In the feedback control design of multivariable systems, closed loop performance evaluations must include the dynamic behavior of variables unavailable to the feedback controller. For the multivariable Nyquist array method, a set of sensitivity functions are proposed to simplify the adjustment of compensator parameters when the dynamic response of the unmeasurable output variables is unacceptable. A sensitivity study to improve thrust and turbine temperature responses for the Pratt-Whitney F100 turbofan engine demonstrates the utility of the proposed method.

  18. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F., Jr.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
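    As background for the sensitivity and elasticity quantities being compared above, the sketch below computes them for a small stage-structured projection matrix; the matrix entries are illustrative and are not the killer whale data analyzed by the authors.

```python
import numpy as np

# For a projection matrix A, lambda is the dominant eigenvalue, sensitivities
# are s_ij = v_i * w_j / <v, w> (w, v the right and left dominant eigenvectors),
# and elasticities rescale them as e_ij = (a_ij / lambda) * s_ij.
A = np.array([[0.00, 0.04, 0.11],
              [0.90, 0.85, 0.00],
              [0.00, 0.09, 0.95]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]
w = np.abs(vecs[:, k].real)                       # stable stage distribution

valsT, vecsT = np.linalg.eig(A.T)
v = np.abs(vecsT[:, np.argmax(valsT.real)].real)  # reproductive values

S = np.outer(v, w) / (v @ w)                      # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                                 # elasticities (proportional scale)
print(f"lambda = {lam:.3f}")
print("elasticities sum to", np.round(E.sum(), 3))  # elasticities of lambda sum to 1
```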

  19. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2002-01-01

    The Laser Interferometer Space Antenna (LISA) for the detection of Gravitational Waves is a very long baseline interferometer which will measure the changes in the distance of a five million kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far field phase varies when the telescope parameters change as a result of small temperature changes.

  20. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, {mu}, and the standard deviation, {sigma}, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both {mu} and {sigma} than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for {mu}, {sigma}, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
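    A minimal sketch of the maximum-likelihood estimation that underlies this kind of sensitivity-test analysis is given below, assuming a normal threshold model and synthetic go/no-go data; it is not the proposed test procedure, the ASENT program, or the likelihood-ratio confidence-region machinery described in the abstract.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Each specimen is stressed at level x and responds (1) or not (0); responses
# are modeled as P(respond | x) = Phi((x - mu) / sigma), and (mu, sigma) are
# estimated by maximizing the likelihood.  The data below are synthetic.
levels    = np.array([1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5])
responses = np.array([0,   0,   0,   1,   0,   1,   1,   1  ])

def neg_log_lik(theta):
    mu, log_sigma = theta
    p = norm.cdf((levels - mu) / np.exp(log_sigma))
    p = np.clip(p, 1e-12, 1 - 1e-12)            # guard against log(0)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[2.5, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"estimated mean = {mu_hat:.2f}, standard deviation = {sigma_hat:.2f}")
```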

  1. WASTE PACKAGE DESIGN SENSITIVITY REPORT

    SciTech Connect

    P. Mecharet

    2001-03-09

    The purpose of this technical report is to present the current designs for waste packages and to determine which designs will be evaluated for the Site Recommendation (SR) or License Application (LA), in order to demonstrate how the design will be shown to comply with the applicable design criteria. The evaluations to support SR or LA are based on system description document criteria. The objective is to determine those system description document criteria for which compliance is to be demonstrated for SR and, having identified the criteria, to refer to the documents that show compliance. In addition, those system description document criteria for which compliance will be addressed for LA are identified, with a distinction made between two steps of the LA process: the LA-Construction Authorization (LA-CA) phase on one hand, and the LA-Receive and Possess (LA-R&P) phase on the other hand. The scope of this work encompasses the Waste Package Project disciplines of criticality, shielding, structural, and thermal analysis.

  2. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  3. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis:version 4.0 reference manual

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Guinta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  4. Sensitivity analysis in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1984-01-01

    Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.

  5. Solid rocket motor nozzle flexseal design sensitivity

    NASA Astrophysics Data System (ADS)

    Donat, James R.

    1993-02-01

    On solid rocket motors, direction is controlled by controlling the thrust vector. To achieve this, the nozzle usually incorporates a flexseal that allows the nozzle to vector (or rotate) in any direction. The flexseal has a core of alternating layers of elastomer pads and metal or composite shims. Flexseal core design is an iterative process. An estimate of the flexseal core geometry is made. The core is then analyzed for performance characteristics such as stress, weight, and the torque required to vector the core. Based on a comparison between the requirements/constraints and the analysis results, another estimate of the geometry is then made. Understanding the effects that changes in the core geometry have on the performance characteristics greatly decreases the number of iterations and the time required to optimize the design. This paper documents a study undertaken to better understand these effects and how sensitive the performance characteristics are to core geometry changes.

  6. Sensitivity and Uncertainty Analysis Shell

    Energy Science and Technology Software Center (ESTSC)

    1999-04-20

    SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyzes the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
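
    To make the loose-coupling workflow above concrete, the following minimal Python sketch mirrors it under stated assumptions: process_model is a hypothetical stand-in for the user's simulation code, and the input distributions are illustrative, not anything prescribed by SUNS.

      # Minimal sketch of the loose coupling described above (hypothetical
      # names, not the SUNS interface): sample uncertain inputs, run a
      # user-supplied process model on each sample, and post-process the
      # outputs statistically.
      import numpy as np

      rng = np.random.default_rng(seed=1)

      def process_model(x):
          # Stand-in for the user's simulation code; treated as a black box.
          k, q = x
          return q / (1.0 + k)

      # 1. Sample the uncertain inputs identified by the user.
      n = 1000
      samples = np.column_stack([
          rng.lognormal(mean=0.0, sigma=0.3, size=n),   # uncertain rate constant
          rng.normal(loc=5.0, scale=0.5, size=n),       # uncertain source term
      ])

      # 2. Run the process model on every sampled input vector.
      outputs = np.array([process_model(x) for x in samples])

      # 3. Post-process: summary statistics and an empirical 95% interval.
      print("mean  =", outputs.mean())
      print("stdev =", outputs.std(ddof=1))
      print("95% interval =", np.percentile(outputs, [2.5, 97.5]))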

  7. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature we obtain highly-accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
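
    For orientation, the adjoint sensitivity idea referenced above can be summarized in generic (not Maxwell-specific) notation; the symbols below are illustrative and not taken from the paper. For a state equation R(u, p) = 0 and objective J(u, p),

      \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T}\,\frac{\partial R}{\partial p},
      \qquad \left(\frac{\partial R}{\partial u}\right)^{T}\lambda = \left(\frac{\partial J}{\partial u}\right)^{T},

    so a single extra (adjoint) solve yields the sensitivity of J with respect to every design parameter p at once, which is what makes the approach attractive for shape and topology sensitivities.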

  8. One-Dimensional, Multigroup Cross Section and Design Sensitivity and Uncertainty Analysis Code System - Generalized Perturbation Theory.

    Energy Science and Technology Software Center (ESTSC)

    1981-02-02

    Version: 00 SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections (of standard multigroup cross-section sets) and for secondary energy distributions (SED's) of multigroup scattering matrices.

  9. Sensitivity Analysis of Neutron Cross-Sections Considered for Design and Safety Studies of Lfr and SFR Generation IV Systems

    NASA Astrophysics Data System (ADS)

    Tucek, Kamil; Carlsson, Johan; Wider, Hartmut

    2006-04-01

    We evaluated the sensitivity of several design and safety parameters with regard to five different nuclear data libraries, JEF2.2, JEFF3.0, ENDF/B-VI.8, JENDL3.2, and JENDL3.3. More specifically, the effective multiplication factor, burn-up reactivity swing and decay heat generation in available LFR and SFR designs were estimated. Monte Carlo codes MCNP and MCB were used in the analyses of the neutronic and burn-up performance of the systems. Thermo-hydraulic safety calculations were performed by the STAR-CD CFD code. For the LFR, ENDF/B-VI.8 and JEF2.2 were found to give a harder neutron spectrum than JEFF3.0, JENDL3.2, and JENDL3.3 data due to the lower inelastic scattering cross-section of lead in these libraries. Hence, the neutron economy of the system becomes more favourable and keff is higher when calculated with ENDF/B-VI.8 and JEF2.2 data. As for actinide cross-section data, the uncertainties in the keff values appeared to be mainly due to 239Pu, 240Pu and 241Am. Differences in the estimated burn-up reactivity swings proved to be significant; for an SFR they were as large as a factor of three (when comparing ENDF/B-VI.8 results to those of JENDL3.2). Uncertainties in the evaluation of short-term decay heat generation were found to be of the order of several per cent. Significant differences were, understandably, observed between decay heat generation data quoted in the literature for LWR-UOX spent fuel and those calculated for LFR (U,TRU)O2 spent fuel. A corresponding difference in calculated core parameters (outlet coolant temperature) during protected total Loss-of-Power was evaluated.

  10. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, D.S.; Voss, C.I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined. A sensitivity is a change in solute concentration resulting from a change in a model parameter. The minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values deviated substantially from the correct parameters. -from Authors
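
    As a reminder of how such sensitivities enter parameter estimation, a standard textbook Gauss-Newton statement (not a reproduction of the authors' equations) is

      J_{ij} = \frac{\partial c(x_i, t_i; p)}{\partial p_j}, \qquad
      p^{(k+1)} = p^{(k)} + \left(J^{T} J\right)^{-1} J^{T}\left[c^{\mathrm{obs}} - c\!\left(p^{(k)}\right)\right],

    where c is the simulated solute concentration and p the transport parameters; parameters are estimable only when the columns of the sensitivity matrix J are sufficiently large and linearly independent over the sampling design.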

  11. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  12. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  13. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  14. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  15. A measurement system analysis with design of experiments: Investigation of the adhesion performance of a pressure sensitive adhesive with the probe tack test.

    PubMed

    Michaelis, Marc; Leopold, Claudia S

    2015-12-30

    The tack of a pressure sensitive adhesive (PSA) is not an inherent material property and strongly depends on the measurement conditions. Following the concept of a measurement system analysis (MSA), influencing factors of the probe tack test were investigated by a design of experiments (DoE) approach. A response surface design with 38 runs was built to evaluate the influence of detachment speed, dwell time, contact force, adhesive film thickness and API content on tack, determined as the maximum of the stress-strain curve (σmax). It could be shown that all investigated factors have a significant effect on the response and that the DoE approach made it possible to detect two-factor interactions between the dwell time, the contact force, the adhesive film thickness and the API content. Surprisingly, it was found that tack increases with decreasing and not with increasing adhesive film thickness. PMID:26428630

  16. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.

  17. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism

    PubMed Central

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-01-01

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor. PMID:27049385

  18. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism.

    PubMed

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-01-01

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor. PMID:27049385

  19. An analysis of sensitivity tests

    SciTech Connect

    Neyer, B.T.

    1992-03-06

    A new method of analyzing sensitivity tests is proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for the parameters of the distribution (e.g., the mean, {mu}, and the standard deviation, {sigma}) as well as various percentiles. Unlike presently used methods, such as those based on asymptotic analysis, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The main disadvantage of this method is that it requires much more computation to calculate the confidence regions. However, these calculations can be easily and quickly performed on most computers.
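
    For context, a common form of the likelihood underlying such go/no-go (sensitivity) test analyses, assuming a normal response-threshold model, is

      L(\mu, \sigma) = \prod_{i=1}^{n} \Phi\!\left(\frac{x_i - \mu}{\sigma}\right)^{y_i}
                       \left[1 - \Phi\!\left(\frac{x_i - \mu}{\sigma}\right)\right]^{1 - y_i},

    where x_i is the stimulus level and y_i = 1 if the item responded. A likelihood-ratio confidence region of level 1 - \alpha is then the set of (\mu, \sigma) satisfying 2[\ln L(\hat{\mu}, \hat{\sigma}) - \ln L(\mu, \sigma)] \le \chi^2_{2, 1-\alpha}; this is the standard textbook construction, and the specifics of the report's computation may differ.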

  20. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  1. Sensitivity analysis and application in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Tang, R.

    2013-12-01

    In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that are unavoidably contaminated by various noises and are sampled at a limited number of observation sites. Furthermore, because of the inherent non-uniqueness of the inverse geophysical problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. The interpretation of the result is therefore difficult. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity, also called the Jacobian matrix or the sensitivity matrix, comprises the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for a given model perturbation, or by application of the perturbation approach and reciprocity theory. By calculating the sensitivity matrix we obtain visualized sensitivity plots, so that the poorly resolved parts of the model are indicated and should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic, and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme that provides maximum information content within a restricted budget is quite difficult to devise. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of each survey's sensitivity with respect to the
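
    The sensitivity (Jacobian) matrix described above can be illustrated with a brute-force perturbation sketch in Python; forward_model below is a hypothetical toy response, not the authors' MT code, and practical applications would use the reciprocity-based calculation the abstract mentions.

      # Perturbation estimate of the sensitivity/Jacobian matrix
      # J[i, j] = d(data_i)/d(model_j); forward_model is a toy stand-in.
      import numpy as np

      def forward_model(m):
          # Toy nonlinear forward response mapping model parameters to data.
          return np.array([m[0] * np.exp(-m[1]),
                           m[0] + m[1] ** 2,
                           np.sin(m[0] * m[1])])

      def jacobian_fd(model, m, rel_step=1e-6):
          m = np.asarray(m, dtype=float)
          d0 = model(m)
          J = np.zeros((d0.size, m.size))
          for j in range(m.size):
              dm = np.zeros_like(m)
              dm[j] = rel_step * max(abs(m[j]), 1.0)
              J[:, j] = (model(m + dm) - d0) / dm[j]
          return J

      J = jacobian_fd(forward_model, [2.0, 0.5])
      # Column norms give a crude picture of which parameters the data resolve.
      print("parameter resolution proxy:", np.linalg.norm(J, axis=0))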

  2. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  3. Involute composite design evaluation using global design sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Hart, J. K.; Stanton, E. L.

    1989-01-01

    An optimization capability for involute structures has been developed. Its key feature is the use of global material geometry variables which are so chosen that all combinations of design variables within a set of lower and upper bounds correspond to manufacturable designs. A further advantage of global variables is that their number does not increase with increasing mesh density. The accuracy of the sensitivity derivatives has been verified both through finite difference tests and through the successful use of the derivatives by an optimizer. The state of the art in composite design today is still marked by point design algorithms linked together using ad hoc methods not directly related to a manufacturing procedure. The global design sensitivity approach presented here for involutes can be applied to filament wound shells and other composite constructions using material form features peculiar to each construction. The present involute optimization technology is being applied to the Space Shuttle SRM nozzle boot ring redesigns by PDA Engineering.

  4. Stellarator Coil Design and Plasma Sensitivity

    SciTech Connect

    Long-Poe Ku and Allen H. Boozer

    2010-11-03

    The rich information contained in the plasma response to external magnetic perturbations can be used to help design stellarator coils more effectively. We demonstrate the feasibility by first developing a simple, direct method to study perturbations in stellarators that do not break stellarator symmetry and periodicity. The method applies a small perturbation to the plasma boundary and evaluates the resulting perturbed free-boundary equilibrium to build up a sensitivity matrix for the important physics attributes of the underlying configuration. Using this sensitivity information, design methods for better stellarator coils are then developed. The procedure and a proof-of-principle application are given that (1) determine the spatial distributions of external normal magnetic field at the location of the unperturbed plasma boundary to which the plasma properties are most sensitive, (2) determine the distributions of external normal magnetic field that can be produced most efficiently by distant coils, (3) choose the ratios of the magnitudes of the efficiently produced magnetic distributions so the sensitive plasma properties can be controlled. Using these methods, sets of modular coils are found for the National Compact Stellarator Experiment (NCSX) that are either smoother or can be located much farther from the plasma boundary than those of the present design.

  5. Sensitivity Analysis in the Model Web

    NASA Astrophysics Data System (ADS)

    Jones, R.; Cornford, D.; Boukouvalas, A.

    2012-04-01

    The Model Web, and in particular the Uncertainty enabled Model Web being developed in the UncertWeb project aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web it is likely that the users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In
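
    The emulation workflow sketched above can be illustrated in Python; the sketch below is a generic surrogate-plus-sampling example under stated assumptions (slow_model and the uniform input ranges are invented, and the estimator is a crude first-order index, not the UncertWeb tool's implementation).

      # Generic emulator-based, variance-style sensitivity sketch: fit a cheap
      # surrogate to a slow model, then estimate main-effect indices by
      # sampling the surrogate instead of the model.
      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(42)

      def slow_model(x):                      # stand-in for a Web-exposed model
          return np.sin(x[0]) + 0.3 * x[1] ** 2 + 0.05 * x[2]

      # 1. Small training design for the emulator.
      X_train = rng.uniform(-1.0, 1.0, size=(60, 3))
      y_train = np.array([slow_model(x) for x in X_train])
      emulator = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

      # 2. Crude main-effect (first-order) indices from the fast emulator.
      base = rng.uniform(-1.0, 1.0, size=(500, 3))
      total_var = emulator.predict(base).var()
      for i in range(3):
          cond_means = []
          for xi in np.linspace(-1.0, 1.0, 25):
              pts = base.copy()
              pts[:, i] = xi                  # fix input i, average over others
              cond_means.append(emulator.predict(pts).mean())
          print(f"input {i}: first-order index ~ {np.var(cond_means) / total_var:.2f}")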

  6. A numerical comparison of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analysis methodologies, but none as comprehensive as the current work.
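
    To make the comparison concrete, a toy illustration of two of the simplest techniques compared in such studies (one-at-a-time perturbation versus correlation on Monte Carlo output) might look like the following; dose_model is a made-up placeholder, not the tritium dosimetry model used in the report.

      # Toy comparison of two sensitivity-ranking techniques: one-at-a-time
      # (OAT) perturbation and |correlation| on Monte Carlo output.
      import numpy as np

      rng = np.random.default_rng(0)

      def dose_model(x):
          release, dispersion, uptake = x
          return release * uptake / dispersion

      nominal = np.array([1.0, 2.0, 0.5])

      # One-at-a-time: relative change in output per +10% change in each input.
      base = dose_model(nominal)
      oat = []
      for j in range(nominal.size):
          x = nominal.copy()
          x[j] *= 1.10
          oat.append((dose_model(x) - base) / base)

      # Monte Carlo + correlation: rank inputs by |Pearson r| with the output.
      X = nominal * rng.lognormal(0.0, 0.2, size=(2000, 3))
      y = np.array([dose_model(row) for row in X])
      corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)]

      print("OAT relative sensitivities:", np.round(oat, 3))
      print("|correlation| ranking     :", np.round(corr, 3))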

  7. Rotary absorption heat pump sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Bamberger, J. A.; Zalondek, F. R.

    1990-03-01

    Conserve Resources, Incorporated is currently developing an innovative, patented absorption heat pump. The heat pump uses rotation and thin film technology to enhance the absorption process and to provide a more efficient, compact system. The results of a sensitivity analysis of rotary absorption heat pump (RAHP) performance, conducted to further the development of a 1-ton RAHP, are presented. The objective of the uncertainty analysis was to determine the sensitivity of RAHP steady-state performance to uncertainties in design parameters. Prior to conducting the uncertainty analysis, a computer model was developed to describe the performance of the RAHP thermodynamic cycle. The RAHP performance is based on many interrelating factors, not all of which could be investigated during the sensitivity analysis. Confirmatory measurements of LiBr/H2O properties during absorber/generator operation will provide experimental verification that the system is operating as it was designed to operate. Quantities to be measured include: flow rate in the absorber and generator, film thickness, recirculation rate, and the effects of rotational speed on these parameters.

  8. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    SciTech Connect

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-05

    In the past decade, sheet metal forming simulation became a well-established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for the virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process -- in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback can be compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.

  9. Orbit determination error analysis and comparison of station-keeping costs for Lissajous and halo-type libration point orbits and sensitivity analysis using experimental design techniques

    NASA Technical Reports Server (NTRS)

    Gordon, Steven C.

    1993-01-01

    Spacecraft in orbit near libration point L1 in the Sun-Earth system are excellent platforms for research concerning solar effects on the terrestrial environment. One spacecraft mission launched in 1978 used an L1 orbit for nearly 4 years, and future L1 orbital missions are also being planned. Orbit determination and station-keeping are, however, required for these orbits. In particular, orbit determination error analysis may be used to compute the state uncertainty after a predetermined tracking period; the predicted state uncertainty levels then will impact the control costs computed in station-keeping simulations. Error sources, such as solar radiation pressure and planetary mass uncertainties, are also incorporated. For future missions, there may be some flexibility in the type and size of the spacecraft's nominal trajectory, but different orbits may produce varying error analysis and station-keeping results. The nominal path, for instance, can be (nearly) periodic or distinctly quasi-periodic. A periodic 'halo' orbit may be constructed to be significantly larger than a quasi-periodic 'Lissajous' path; both may meet mission requirements, but the required control costs for these orbits probably differ. Also for this spacecraft tracking and control simulation problem, experimental design methods can be used to determine the most significant uncertainties. That is, these methods can determine the error sources in the tracking and control problem that most impact the control cost (output); they also produce an equation that gives the approximate functional relationship between the error inputs and the output.

  10. Design and performance of a combined secondary ion mass spectrometry-scanning probe microscopy instrument for high sensitivity and high-resolution elemental three-dimensional analysis

    SciTech Connect

    Wirtz, Tom; Fleming, Yves; Gerard, Mathieu; Gysin, Urs; Glatzel, Thilo; Meyer, Ernst; Wegmann, Urs; Maier, Urs; Odriozola, Aitziber Herrero; Uehli, Daniel

    2012-06-15

    State-of-the-art secondary ion mass spectrometry (SIMS) instruments allow producing 3D chemical mappings with excellent sensitivity and spatial resolution. Several important artifacts, however, arise from the fact that SIMS 3D mapping does not take into account the surface topography of the sample. In order to correct these artifacts, we have integrated a specially developed scanning probe microscopy (SPM) system into a commercial Cameca NanoSIMS 50 instrument. This new SPM module, which was designed as a DN200CF flange-mounted bolt-on accessory, includes a new high-precision sample stage, a scanner with a range of 100 μm in the x and y directions, and a dedicated SPM head which can be operated in the atomic force microscopy (AFM) and Kelvin probe force microscopy modes. Topographical information gained from AFM measurements taken before, during, and after SIMS analysis as well as the SIMS data are automatically compiled into an accurate 3D reconstruction using the software program 'SARINA,' which was developed for this first combined SIMS-SPM instrument. The achievable lateral resolutions are 6 nm in the SPM mode and 45 nm in the SIMS mode. Elemental 3D images obtained with our integrated SIMS-SPM instrument on Al/Cu and polystyrene/poly(methyl methacrylate) samples demonstrate the advantages of the combined SIMS-SPM approach.

  11. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
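
    A rough, self-contained illustration of the idea (not the authors' algorithm) is to estimate, over random inputs, how often perturbing the program state at one location changes the final output; locations where this fraction is near zero are the "insensitive" ones that random testing is unlikely to expose.

      # Rough illustration of location sensitivity: over random inputs,
      # estimate how often perturbing the intermediate state at a chosen
      # location changes the program's final output.
      import random

      def program(x, perturb_location=False):
          a = x * 2                 # location of interest
          if perturb_location:
              a += 1                # simulated fault / state perturbation
          return 1 if a % 3 == 0 else 0

      random.seed(0)
      trials = 10000
      changed = sum(
          program(x) != program(x, perturb_location=True)
          for x in (random.randint(0, 1000) for _ in range(trials))
      )
      print("estimated sensitivity of location:", changed / trials)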

  12. Stiff DAE integrator with sensitivity analysis capabilities

    Energy Science and Technology Software Center (ESTSC)

    2007-11-26

    IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  13. Point Source Location Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Cox, J. Allen

    1986-11-01

    This paper presents the results of an analysis of point source location accuracy and sensitivity as a function of focal plane geometry, optical blur spot, and location algorithm. Five specific blur spots are treated: gaussian, diffraction-limited circular aperture with and without central obscuration (obscured and clear bessinc, respectively), diffraction-limited rectangular aperture, and a pill box distribution. For each blur spot, location accuracies are calculated for square, rectangular, and hexagonal detector shapes of equal area. The rectangular detectors are arranged on a hexagonal lattice. The two location algorithms consist of standard and generalized centroid techniques. Hexagonal detector arrays are shown to give the best performance under a wide range of conditions.
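
    The standard centroid technique mentioned above can be sketched in a few lines for a square detector grid; the Gaussian blur spot, grid size, and pitch below are simplified placeholders rather than the paper's focal plane geometries.

      # Standard centroid estimate of a point-source location on a square
      # detector grid (simplified placeholder geometry and blur spot).
      import numpy as np

      def centroid(counts, x_centers, y_centers):
          total = counts.sum()
          x_hat = (counts.sum(axis=0) * x_centers).sum() / total
          y_hat = (counts.sum(axis=1) * y_centers).sum() / total
          return x_hat, y_hat

      # Toy Gaussian blur spot sampled on a 5x5 grid of unit-pitch detectors.
      x_centers = np.arange(-2, 3, dtype=float)
      y_centers = np.arange(-2, 3, dtype=float)
      X, Y = np.meshgrid(x_centers, y_centers)
      true_xy = (0.3, -0.2)
      counts = np.exp(-((X - true_xy[0]) ** 2 + (Y - true_xy[1]) ** 2) / (2 * 0.8 ** 2))

      print("true location     :", true_xy)
      print("centroid estimate :", centroid(counts, x_centers, y_centers))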

  14. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby contaminating the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  15. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  16. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  17. Sensitivity Analysis of Impacts of Natural Internal Fault Zones and Well Design on Fluid Flow and Heat Transfer in a Deep Geothermal Reservoir

    NASA Astrophysics Data System (ADS)

    Wong, Li Wah; Watanabe, Norihiro; Fuchs, Sven; Bauer, Klaus; Cacace, Mauro; Blöcher, Guido; Kastner, Oliver; Zimmermann, Günter

    2013-04-01

    In order to show the impacts of natural internal fault zones and well design on geothermal energy production, two deep geothermal reservoir sites in Germany, Groß Schönebeck (GrSk) and Berlin Tempelhof, which are part of the North German Basin (NGB), are investigated. Groß Schönebeck is located about 40 km from the centre of Berlin, whereas Berlin Tempelhof is situated in south-central Berlin. A hydrothermal power plant exhibits complex coupling between four major components: the deep geothermal reservoir, the boreholes, the heat exchangers of the primary thermal water cycle, and the power plant unit. In order to study the lifetime behavior of the overall Enhanced Geothermal System (EGS), it is mandatory to develop a combined transient model representing all relevant components as a whole, together with their interrelations. In this regard, the Groß Schönebeck (GrSk) project provides the first scenario. The hydrothermal power plant is subdivided logically into components that are modeled separately; a standalone 3D transient hydro-thermal FEM (finite element method) reservoir model, consisting of the reservoir, fractures, wells, and fault zones, is considered first, and its hydro-thermal processes are simulated for a period of 35 years. Using COMSOL Multiphysics, two significant objectives are achieved: deviated geometries such as the production well and dipping geometries such as natural internal fault zones are successfully implemented into the 3D transient reservoir model, which is constructed with the integration of hydraulically induced fractures and the reservoir rock layers that are conducive to geothermal power production. Using OpenGeoSys (OGS), a sensitivity analysis of the effect on fluid flow and heat transfer patterns of the varied conductivity of natural internal fault zones, due to different permeabilities and apertures, is carried out. The study shows that natural internal fault zones play a significant role in the generation of production

  18. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.

  19. Geothermal power, policy, and design: Using levelized cost of energy and sensitivity analysis to target improved policy incentives for the U.S. geothermal market

    NASA Astrophysics Data System (ADS)

    Richard, Christopher L.

    At the core of the geothermal industry is a need to identify how policy incentives can be better applied for optimal return. Literature from Bloomquist (1999), Doris et al. (2009), and McIlveen (2011) suggests that a more tailored approach to crafting geothermal policy is warranted. In this research the guiding theory is based on those suggestions and is structured to represent a policy analysis approach using analytical methods. The methods used focus on both qualitative and quantitative results. To address the qualitative portion of this research, an extensive review of contemporary literature is used to identify the frequency of use of specific barriers, followed by an industry survey to determine existing gaps. As a result, there is support for certain barriers and justification for expanding those barriers found within the literature. This method of inquiry is the initial point for structuring modeling tools that further quantify the research results as part of the theoretical framework. The analytical modeling uses the levelized cost of energy as a foundation for comparative assessment of policy incentives. Model parameters use assumptions drawn from the literature and survey results to reflect unique attributes of geothermal power technologies. Further testing by policy option provides an opportunity to assess the sensitivity of each variable with respect to the applied policy. Master limited partnerships, feed-in tariffs, RD&D, and categorical exclusions all emerge as viable options for mitigating specific barriers associated with developing geothermal power. The results show reductions in levelized cost based upon the model's exclusive parameters. These results are also compared to contemporary policy options, highlighting the need for tailored policy, as discussed by Bloomquist (1999), Doris et al. (2009), and McIlveen (2011). It is the intent of this research to provide the reader with a descriptive understanding of the role of
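
    The levelized cost of energy that anchors such comparisons is commonly computed as discounted lifetime costs divided by discounted lifetime generation; the Python sketch below uses that textbook definition with entirely illustrative figures, not the study's assumptions or its treatment of specific incentives.

      # Illustrative levelized cost of energy (LCOE): discounted lifetime
      # costs divided by discounted lifetime generation (placeholder figures).
      def lcoe(capex, annual_opex, annual_mwh, years, discount_rate,
               credit_per_mwh=0.0):
          disc_costs = capex
          disc_energy = 0.0
          for t in range(1, years + 1):
              df = 1.0 / (1.0 + discount_rate) ** t
              disc_costs += (annual_opex - credit_per_mwh * annual_mwh) * df
              disc_energy += annual_mwh * df
          return disc_costs / disc_energy          # $/MWh

      base = lcoe(capex=120e6, annual_opex=3e6, annual_mwh=200_000,
                  years=30, discount_rate=0.08)
      with_credit = lcoe(120e6, 3e6, 200_000, 30, 0.08, credit_per_mwh=10.0)
      print(f"baseline LCOE        : {base:.1f} $/MWh")
      print(f"with $10/MWh credit  : {with_credit:.1f} $/MWh")

    Sweeping the discount rate, capital cost, or incentive level through such a function is one simple way to expose which policy levers the levelized cost is most sensitive to.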

  20. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, flutter characteristic of a typical section in subsonic compressible flow is examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins could be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  1. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert the output of one analysis to the input of the other, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.

  2. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.
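
    As a concrete instance of the "discretize then differentiate" route for static response that the review discusses, the familiar finite element relation (standard textbook form, not specific to this paper) is

      K(x)\,u = f(x) \quad\Longrightarrow\quad
      \frac{\partial u}{\partial x_j} = K^{-1}\!\left(\frac{\partial f}{\partial x_j} - \frac{\partial K}{\partial x_j}\,u\right),

    where x_j is a design variable; the factored stiffness matrix K from the original analysis can be reused for every design variable, which is why such sensitivities are relatively inexpensive.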

  3. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  4. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    SciTech Connect

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
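
    For reference, the forward sensitivity equations for a DAE system take the standard form below (textbook notation, consistent with but not copied from the paper): for F(t, y, \dot{y}, p) = 0 and s_i = \partial y / \partial p_i,

      \frac{\partial F}{\partial y}\, s_i + \frac{\partial F}{\partial \dot{y}}\, \dot{s}_i + \frac{\partial F}{\partial p_i} = 0,
      \qquad s_i(t_0) = \frac{\partial y_0}{\partial p_i},

    i.e. one additional linear DAE per parameter, which is efficient for few parameters and many outputs, while the adjoint method is generally preferred when there are many parameters and few output functionals.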

  5. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • Identification of unknown parameters, and • Identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  6. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2008-09-01

    This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification is based on a “black box” approach. The simulation tool is treated as an unknown signal generator: a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a few runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty analysis. By knowing the relative sensitivity of time and space steps with other

  7. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents the extended forward sensitivity analysis as a method to support uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis can also replace the traditional time step and grid convergence study at much lower computational cost. Two well-defined benchmark problems with manufactured solutions are used to demonstrate the method.

  8. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  9. A strategy to design novel structure photochromic sensitizers for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Wu, Wenjun; Wang, Jiaxing; Zheng, Zhiwei; Hu, Yue; Jin, Jiayu; Zhang, Qiong; Hua, Jianli

    2015-02-01

    Two sensitizers with a novel structure were designed and synthesized by introducing a photochromic bisthienylethene (BTE) group into the conjugated system. Owing to the photochromic response of the sensitizers under ultraviolet and visible light, the conjugated bridge can be restructured, and the resulting two photoisomers showed different behaviors in photovoltaic devices. This opens up a new research direction for dye-sensitized solar cells (DSSCs).

  10. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing more important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate between verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these types of approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents the forward sensitivity analysis as a method to support uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended into a method for quantifying numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation can be run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results.

  11. Imaging system sensitivity analysis with NV-IPM

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan; Teaney, Brian

    2014-05-01

    This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to achieve the best system possible within size, weight, and cost constraints. In general, the performance of a well-designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM adds the ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs so that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by performing sensitivity analysis on outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.
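
    The record above describes varying system parameters to find which ones limit an output such as SNR. The sketch below shows a generic one-at-a-time, normalized finite-difference sensitivity sweep on a made-up signal-to-noise model; the model, parameter names, and values are assumptions for illustration and are not NV-IPM components.

```python
# A generic one-at-a-time sensitivity sweep of the sort described above,
# applied to a toy signal-to-noise model.  The model and parameter values are
# stand-ins; they are not the NV-IPM component models.
import numpy as np

def snr(aperture_mm, focal_mm, read_noise_e, qe):
    """Toy SNR: signal scales with collecting area and QE, fixed read noise."""
    signal = qe * 50.0 * (aperture_mm / focal_mm) ** 2 * aperture_mm**2
    return signal / np.sqrt(signal + read_noise_e**2)

nominal = dict(aperture_mm=50.0, focal_mm=100.0, read_noise_e=10.0, qe=0.6)
base = snr(**nominal)

# Normalized (logarithmic) sensitivities: percent change in SNR per percent
# change in each parameter, from central differences.
for name, value in nominal.items():
    h = 0.01 * value
    hi = dict(nominal, **{name: value + h})
    lo = dict(nominal, **{name: value - h})
    s = (snr(**hi) - snr(**lo)) / (2.0 * h) * (value / base)
    print(f"{name:>13s}: normalized sensitivity ≈ {s:+.2f}")
```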

  12. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size-independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex-variable modifications already included in the software, sensitivity derivatives can be computed automatically. Beyond design purposes, sensitivity derivatives can be used to predict the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
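
    The complex-variable approach referred to above is commonly realized as the complex-step derivative approximation. The sketch below illustrates it on a stand-in analytic function rather than the aero-structural solver; the function `lift_like`, the step sizes, and the comparison with central differences are assumptions added for illustration.

```python
# Illustrative complex-step derivative, the step-size-independent idea the
# abstract refers to; the function below is a stand-in, not the ARW-2 solver.
import numpy as np

def lift_like(x):
    """Any real-analytic function coded with complex-safe operations."""
    return np.sin(3.0 * x) * np.exp(-0.5 * x**2)

def complex_step(f, x, h=1e-30):
    # Im[f(x + i h)] / h has no subtractive cancellation, so h can be tiny.
    return np.imag(f(x + 1j * h)) / h

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

x0 = 0.8
exact = 3.0 * np.cos(3.0 * x0) * np.exp(-0.5 * x0**2) - x0 * lift_like(x0)
print("complex-step error :", complex_step(lift_like, x0) - exact)
print("central-diff error :", central_diff(lift_like, x0, 1e-5) - exact)
```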

  13. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  14. Simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.

  15. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure and slip. These source bodies are described as aggregations of elemental point sources for pressure, density and slip, and they fit the whole data set (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this exploratory work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focus on the ground deformation network topology and the noise in the measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970), doi:10.1038/srep

  16. Tilt-Sensitivity Analysis for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Papalexandris, Miltiadis; Waluschka, Eugene

    2003-01-01

    A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
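
    A minimal sketch of the Monte Carlo tolerance methodology described above: random distortions are drawn, propagated through a performance model, and the statistics of the resulting performance loss are computed. The quadratic loss model and the tilt statistics below are invented placeholders, not LISA values.

```python
# Minimal Monte Carlo tolerance sketch in the spirit of the methodology
# described above.  The quadratic "performance loss" model and the tilt
# statistics are placeholders, not LISA values.
import numpy as np

rng = np.random.default_rng(0)

def phase_loss(tilt_x, tilt_y):
    """Toy model: loss grows quadratically with tilt (arbitrary units)."""
    return 1.0e3 * (tilt_x**2 + tilt_y**2)

n_samples = 100_000
sigma_tilt = 2.0e-6            # assumed rms tilt per axis
tilts = rng.normal(0.0, sigma_tilt, size=(n_samples, 2))
loss = phase_loss(tilts[:, 0], tilts[:, 1])

print(f"mean loss     = {loss.mean():.3e}")
print(f"loss variance = {loss.var():.3e}")
print(f"99th pct loss = {np.quantile(loss, 0.99):.3e}")
```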

  17. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology developed since 1976 are presented. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  18. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology developed since 1976 are presented. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and the use of high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  19. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, the Champoux-Allard model employs five parameters. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for the sensitivity analysis. A strong frequency-dependent parametric hierarchy is shown. The sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with rigid frames are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam in order to illustrate the impact of reducing the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.
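
    For readers unfamiliar with the variance-based indices mentioned above, the sketch below estimates first-order Sobol indices with a pick-freeze (Saltelli-type) estimator on the standard Ishigami test function. It is a generic illustration under those assumptions, not the Champoux-Allard or Biot-Allard sensitivity study itself.

```python
# A pick-freeze (Saltelli-type) estimator of first-order Sobol indices,
# illustrating the variance-based analysis mentioned above on the standard
# Ishigami test function rather than on a poroacoustic model.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 50_000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, d))
B = rng.uniform(-np.pi, np.pi, size=(n, d))

fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # swap in the i-th column, "freeze" the rest
    fABi = ishigami(ABi)
    S_i = np.mean(fB * (fABi - fA)) / var_y
    print(f"S_{i + 1} ≈ {S_i:.3f}")
```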

  20. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  1. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
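
    In the same spirit as the smoothing-based procedures above, the sketch below smooths a model output against each input with LOWESS and uses the fraction of output variance explained by the smooth as a simple nonparametric sensitivity measure. The test model is invented, and this single-predictor measure is a simplified stand-in for the stepwise multiple-predictor procedures described in the abstract.

```python
# A minimal nonparametric-smoothing sensitivity sketch in the spirit of the
# procedures above: smooth the output against each input with LOWESS and use
# the fraction of output variance explained as a sensitivity measure.  The
# test model is made up; it is not the WIPP performance-assessment model.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(2)
n = 2_000
X = rng.uniform(-1.0, 1.0, size=(n, 3))
y = np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1]**2 + 0.05 * rng.normal(size=n)

var_y = np.var(y)
for i in range(X.shape[1]):
    y_hat = lowess(y, X[:, i], frac=0.3, return_sorted=False)
    explained = 1.0 - np.var(y - y_hat) / var_y
    print(f"x{i + 1}: variance explained by smooth ≈ {explained:.3f}")
```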

  2. Ceramic tubesheet design analysis

    SciTech Connect

    Mallett, R.H.; Swindeman, R.W.

    1996-06-01

    A transport combustor is being commissioned at the Southern Services facility in Wilsonville, Alabama, to provide a gaseous product for the assessment of hot-gas filtering systems. One of the barrier filters incorporates a ceramic tubesheet to support candle filters. The ceramic tubesheet, designed and manufactured by the Industrial Filter and Pump Manufacturing Company (IF&PM), is unique and offers distinct advantages over metallic systems in terms of density, resistance to corrosion, and resistance to creep at operating temperatures above 815 °C (1500 °F). Nevertheless, the operational requirements of the ceramic tubesheet are severe. The tubesheet is almost 1.5 m (55 in.) in diameter, has many penetrations, and must support the weight of the ceramic filters, coal ash accumulation, and a pressure drop of one atmosphere. Further, thermal stresses related to steady-state and transient conditions will occur. To gain a better understanding of the structural performance limitations, a contract was placed with Mallett Technology, Inc. to perform a thermal and structural analysis of the tubesheet design. The design analysis specification and a preliminary design analysis were completed in the early part of 1995. The analyses indicated that modifications to the design were necessary to reduce thermal stress, and it was necessary to complete the redesign before the final thermal/mechanical analysis could be undertaken. The preliminary analysis identified the need to confirm that the physical and mechanical properties data used in the design were representative of the material in the tubesheet. Subsequently, a few exploratory tests were performed at ORNL to evaluate the ceramic structural material.

  3. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.

  4. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.

  5. Methodology for a stormwater sensitive urban watershed design

    NASA Astrophysics Data System (ADS)

    Romnée, Ambroise; Evrard, Arnaud; Trachte, Sophie

    2015-11-01

    In urban stormwater management, decentralized systems, including stormwater best management practices (BMPs), are nowadays being experimented with worldwide. However, a watershed-scale approach, relevant for urban hydrology, is almost always neglected when designing a stormwater management plan with best management practices. As a consequence, urban designers fail to convince public authorities of the actual hydrologic effectiveness of such an approach to urban watershed stormwater management. In this paper, we develop a design-oriented methodology for studying the morphology of an urban watershed in terms of sustainable stormwater management. The methodology is a five-step method, firstly based on the cartographic analysis of many stormwater-relevant indicators regarding the landscape, the urban fabric and the governance. The second step focuses on the identification of territorial stakes and their corresponding strategies for decentralized stormwater management. Based on the indicators, the stakes and the strategies, the third step defines spatial typologies for the roadway system and the urban fabric system. The fourth step determines stormwater management scenarios to be applied to both spatial typology systems. The fifth step is the design of decentralized stormwater management projects integrating BMPs into each spatial typology. The methodology aims to advise urban designers and engineering offices on the appropriate location and selection of BMPs without giving them a hypothetical unique solution. Since every location and every watershed is different due to local guidelines and stakeholders, this paper provides a methodology for a stormwater-sensitive urban watershed design that could be reproduced anywhere. As an example, the methodology is applied as a case study to an urban watershed in Belgium, confirming that the method is applicable to any urban watershed. This paper should be helpful for engineering and design offices in urban hydrology to define a

  6. Liquid Acquisition Device Design Sensitivity Study

    NASA Technical Reports Server (NTRS)

    VanDyke, M. K.; Hastings, L. J.

    2012-01-01

    In-space propulsion often necessitates the use of a capillary liquid acquisition device (LAD) to assure that gas-free liquid propellant is available to support engine restarts in microgravity. If a capillary screen-channel device is chosen, then the designer must determine the appropriate combination of screen mesh and channel geometry. A screen mesh selection which results in the smallest LAD width when compared to any other screen candidate (for a constant length) is desirable; however, no best screen exists for all LAD design requirements. Flow rate, percent fill, and acceleration are the most influential drivers for determining screen widths. Increased flow rates and reduced percent fills increase the through-the-screen flow pressure losses, which drive the LAD to increased widths regardless of screen choice. Similarly, increased acceleration levels and corresponding liquid head pressures drive the screen mesh selection toward a higher bubble point (liquid retention capability). After ruling out some screens on the basis of acceleration requirements alone, candidates can be identified by examining screens with small flow-loss-to-bubble-point ratios for a given condition (i.e., comparing screens at certain flow rates and fill levels). Within the same flow rate and fill level, the screen constants (inertia resistance coefficient, void fraction, screen pore or opening diameter, and bubble point) can become the driving forces in identifying the smaller flow-loss-to-bubble-point ratios.

  7. A strategy to design novel structure photochromic sensitizers for dye-sensitized solar cells.

    PubMed

    Wu, Wenjun; Wang, Jiaxing; Zheng, Zhiwei; Hu, Yue; Jin, Jiayu; Zhang, Qiong; Hua, Jianli

    2015-01-01

    Two sensitizers with a novel structure were designed and synthesized by introducing a photochromic bisthienylethene (BTE) group into the conjugated system. Owing to the photochromic response of the sensitizers under ultraviolet and visible light, the conjugated bridge can be restructured, and the resulting two photoisomers showed different behaviors in photovoltaic devices. This opens up a new research direction for dye-sensitized solar cells (DSSCs). PMID:25716204

  8. A strategy to design novel structure photochromic sensitizers for dye-sensitized solar cells

    PubMed Central

    Wu, Wenjun; Wang, Jiaxing; Zheng, Zhiwei; Hu, Yue; Jin, Jiayu; Zhang, Qiong; Hua, Jianli

    2015-01-01

    Two sensitizers with a novel structure were designed and synthesized by introducing a photochromic bisthienylethene (BTE) group into the conjugated system. Owing to the photochromic response of the sensitizers under ultraviolet and visible light, the conjugated bridge can be restructured, and the resulting two photoisomers showed different behaviors in photovoltaic devices. This opens up a new research direction for dye-sensitized solar cells (DSSCs). PMID:25716204

  9. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  10. Probabilistic sensitivity analysis in health economics.

    PubMed

    Baio, Gianluca; Dawid, A Philip

    2015-12-01

    Health economic evaluations have recently become an important part of the clinical and medical research process and have built upon more advanced statistical decision-theoretic foundations. In some contexts, it is officially required that uncertainty about both parameters and observable variables be properly taken into account, increasingly often by means of Bayesian methods. Among these, probabilistic sensitivity analysis has assumed a predominant role. The objective of this article is to review the problem of health economic assessment from the standpoint of Bayesian statistical decision theory with particular attention to the philosophy underlying the procedures for sensitivity analysis. PMID:21930515
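
    A toy probabilistic sensitivity analysis in the sense discussed above: parameter uncertainty is propagated by Monte Carlo sampling and summarized as the probability that a treatment is cost-effective at several willingness-to-pay thresholds. All distributions and numbers below are invented for illustration.

```python
# A toy probabilistic sensitivity analysis of the kind discussed above:
# parameters are drawn from assumed distributions and the probability that a
# new treatment is cost-effective is traced over willingness-to-pay values.
# All distributions and numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Incremental cost and incremental effectiveness (QALYs) of the new treatment
d_cost = rng.normal(2_000.0, 500.0, n)
d_qaly = rng.beta(2.0, 8.0, n) * 0.5

for wtp in (10_000.0, 20_000.0, 30_000.0):
    inb = wtp * d_qaly - d_cost                 # incremental net benefit
    prob_ce = np.mean(inb > 0.0)
    print(f"WTP {wtp:>8,.0f} per QALY: P(cost-effective) ≈ {prob_ce:.2f}")
```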

  11. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  12. Comparative Sensitivity Analysis of Muscle Activation Dynamics.

    PubMed

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varying muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  13. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varying muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  14. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational method uses the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solution of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if the computational domain of the costate equations is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite-difference approach.

  15. Pediatric Pain, Predictive Inference, and Sensitivity Analysis.

    ERIC Educational Resources Information Center

    Weiss, Robert

    1994-01-01

    Coping style and the effects of a counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…

  16. Design sensitivity derivatives for isoparametric elements by analytical and semi-analytical approaches

    NASA Technical Reports Server (NTRS)

    Zumwalt, Kenneth W.; El-Sayed, Mohamed E. M.

    1990-01-01

    This paper presents an analytical approach for incorporating design sensitivity calculations directly into the finite element analysis. The formulation depends on the implicit differentiation approach and requires few additional calculations to obtain the design sensitivity derivatives. In order to evaluate this approach, it is compared with the semi-analytical approach which is based on commonly used finite difference formulations. Both approaches are implemented to calculate the design sensitivities for continuum and structural isoparametric elements. To demonstrate the accuracy and robustness of the developed analytical approach compared to the semi-analytical approach, some test cases using different structural and continuum element types are presented.
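
    A minimal sketch of the analytical (direct differentiation) idea described above for a linear system K(p) u = f, compared against a finite-difference estimate of the kind underlying semi-analytical approaches. The two-spring example, parameter, and load are illustrative assumptions, not taken from the paper.

```python
# Sketch of the analytical (direct differentiation) sensitivity idea for a
# linear system K(p) u = f, compared with a finite-difference (semi-analytical
# style) estimate.  The two-spring example is illustrative, not from the paper.
import numpy as np

def stiffness(p):
    # Two springs in series, fixed at one end; p is the first spring stiffness.
    k2 = 3.0
    return np.array([[p + k2, -k2],
                     [-k2,     k2]])

f = np.array([0.0, 1.0])
p0 = 5.0

# Analytical: differentiate K u = f  =>  du/dp = K^{-1}(df/dp - dK/dp u), df/dp = 0
K = stiffness(p0)
u = np.linalg.solve(K, f)
dK_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])
du_dp = np.linalg.solve(K, -dK_dp @ u)

# Finite-difference check (the perturbation underlying semi-analytical schemes)
h = 1e-6
u_pert = np.linalg.solve(stiffness(p0 + h), f)
print("analytical du/dp :", du_dp)
print("finite diff du/dp:", (u_pert - u) / h)
```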

  17. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models of the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera that was used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  18. Geothermal well cost sensitivity analysis: current status

    SciTech Connect

    Carson, C.C.; Lin, Y.T.

    1980-01-01

    The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent, but that in specific wells the advances considered can result in significant cost reductions.

  19. Sensitivity analysis for magnetic induction tomography.

    PubMed

    Soleimani, Manuchehr; Jersey-Willuhn, Karen

    2004-01-01

    This work focuses on sensitivity analysis of magnetic induction tomography in terms of theoretical modelling and numerical implementation. We will explain a new and efficient method to determine the Jacobian matrix, directly from the results of the forward solution. The results presented are for the eddy current approximation, and are given in terms of magnetic vector potential, which is computationally convenient, and which may be extracted directly from the FE solution of the forward problem. Examples of sensitivity maps for an opposite sensor geometry are also shown. PMID:17271947

  20. Fault Tree Reliability Analysis and Design-for-reliability

    Energy Science and Technology Software Center (ESTSC)

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.

  1. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency, and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the tedious phases of developing such a methodology, namely, the automatic differentiation of the computer code for the flow analysis in order to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for a shape optimization of a practical configuration with the higher-fidelity simulations (TLNS and dense-grid based simulations) required substantial computational resources. Therefore, the final improvement reported herein responded to this point by including the alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate-gradient (PbCG) and other direct solvers.

  2. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two aforementioned problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every desired shape change. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  3. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  4. Microelectromechanical Resonant Accelerometer Designed with a High Sensitivity.

    PubMed

    Zhang, Jing; Su, Yan; Shi, Qin; Qiu, An-Ping

    2015-01-01

    This paper describes the design and experimental evaluation of a silicon micro-machined resonant accelerometer (SMRA). This type of accelerometer works on the principle that a proof mass under acceleration applies force to two double-ended tuning fork (DETF) resonators, and the frequency output of the two DETFs exhibits a differential shift. The dies of an SMRA are fabricated using silicon-on-insulator (SOI) processing and wafer-level vacuum packaging. This research aims to design a high-sensitivity SMRA because a high sensitivity allows the acceleration signal to be easily demodulated by frequency counting techniques and decreases the noise level. This study applies the energy-consumed concept and the Nelder-Mead algorithm to the SMRA to address the design issues and further increase its sensitivity. Using this novel method, the sensitivity of the SMRA has been increased by 66.1%, which is attributed to both the redesigned DETF and the reduced energy loss on the micro-lever. The results of both the closed-form and finite-element analyses are described and are in agreement with one another. A resonant frequency of approximately 22 kHz, a frequency sensitivity of over 250 Hz per g, a one-hour bias stability of 55 μg, a bias repeatability (1σ) of 48 μg and a bias instability of 4.8 μg have been achieved. PMID:26633425

  5. Microelectromechanical Resonant Accelerometer Designed with a High Sensitivity

    PubMed Central

    Zhang, Jing; Su, Yan; Shi, Qin; Qiu, An-Ping

    2015-01-01

    This paper describes the design and experimental evaluation of a silicon micro-machined resonant accelerometer (SMRA). This type of accelerometer works on the principle that a proof mass under acceleration applies force to two double-ended tuning fork (DETF) resonators, and the frequency output of the two DETFs exhibits a differential shift. The dies of an SMRA are fabricated using silicon-on-insulator (SOI) processing and wafer-level vacuum packaging. This research aims to design a high-sensitivity SMRA because a high sensitivity allows the acceleration signal to be easily demodulated by frequency counting techniques and decreases the noise level. This study applies the energy-consumed concept and the Nelder-Mead algorithm to the SMRA to address the design issues and further increase its sensitivity. Using this novel method, the sensitivity of the SMRA has been increased by 66.1%, which is attributed to both the redesigned DETF and the reduced energy loss on the micro-lever. The results of both the closed-form and finite-element analyses are described and are in agreement with one another. A resonant frequency of approximately 22 kHz, a frequency sensitivity of over 250 Hz per g, a one-hour bias stability of 55 μg, a bias repeatability (1σ) of 48 μg and a bias instability of 4.8 μg have been achieved. PMID:26633425

  6. Passive solar design handbook. Volume 3: Passive solar design analysis

    NASA Astrophysics Data System (ADS)

    Jones, R. W.; Bascomb, J. D.; Kosiewicz, C. E.; Lazarus, G. S.; McFarland, R. D.; Wray, W. O.

    1982-07-01

    Simple analytical methods concerning the design of passive solar heating systems are presented with an emphasis on the average annual heating energy consumption. Key terminology and methods are reviewed. The solar load ratio (SLR) is defined, and its relationship to analysis methods is reviewed. The annual calculation, or Load Collector Ratio (LCR) method, is outlined. Sensitivity data are discussed. Information is presented on balancing conservation and passive solar strategies in building design. Detailed analysis data are presented for direct gain and sunspace systems, and details of the systems are described. Key design parameters are discussed in terms of their impact on annual heating performance of the building. These are the sensitivity data. The SLR correlations for the respective system types are described. The monthly calculation, or SLR method, based on the SLR correlations, is reviewed. Performance data are given for 9 direct gain systems and 15 water wall and 42 Trombe wall systems.

  7. Estimating the upper limit of gas production from Class 2 hydrate accumulations in the permafrost: 2. Alternative well designs and sensitivity analysis

    SciTech Connect

    Moridis, G.; Reagan, M.T.

    2011-01-15

    In the second paper of this series, we evaluate two additional well designs for production from permafrost-associated (PA) hydrate deposits. Both designs are within the capabilities of conventional technology. We determine that large volumes of gas can be produced at high rates (several MMSCFD) for long times using either well design. The production approach involves initial fluid withdrawal from the water zone underneath the hydrate-bearing layer (HBL). The production process follows a cyclical pattern, with each cycle composed of two stages: a long stage (months to years) of increasing gas production and decreasing water production, and a short stage (days to weeks) that involves destruction of the secondary hydrate (mainly through warm water injection) that evolves during the first stage, and is followed by a reduction in the fluid withdrawal rate. A well configuration with completion throughout the HBL leads to high production rates, but also the creation of a secondary hydrate barrier around the well that needs to be destroyed regularly by water injection. However, a configuration that initially involves heating of the outer surface of the wellbore and later continuous injection of warm water at low rates (Case C) appears to deliver optimum performance over the period it takes for the exhaustion of the hydrate deposit. Using Case C as the standard, we determine that gas production from PA hydrate deposits increases with the fluid withdrawal rate, the initial hydrate saturation and temperature, and with the formation permeability.

  8. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds some ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programing problem with linear constraints. This talk is concluded with demonstrations of these new methods on some example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.

  9. Spacecraft design sensitivity for a disaster warning satellite system

    NASA Technical Reports Server (NTRS)

    Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.

    1977-01-01

    A disaster warning satellite (DWS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establishing the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.

  10. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
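
    The idea behind importance-sampling-based reliability estimation can be sketched in a few lines; the example below is a simplified, non-adaptive version (a fixed sampling density centered on the design point of a made-up linear limit state), not the paper's adaptive AIS algorithm.

    ```python
    # Simplified sketch of importance-sampling reliability estimation (not the
    # paper's adaptive AIS code): two standard-normal variables, limit state
    # g(u) = beta - u1, so the exact failure probability is Phi(-beta).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    beta = 3.0                                  # reliability index
    n = 20_000

    # Crude Monte Carlo
    u = rng.standard_normal((n, 2))
    p_mc = np.mean(beta - u[:, 0] < 0.0)

    # Importance sampling: shift the sampling density to the design point (beta, 0)
    v = rng.standard_normal((n, 2)) + np.array([beta, 0.0])
    fail = beta - v[:, 0] < 0.0
    w = norm.pdf(v[:, 0]) * norm.pdf(v[:, 1]) / (
        norm.pdf(v[:, 0] - beta) * norm.pdf(v[:, 1]))   # density ratio f/h
    p_is = np.mean(fail * w)

    print(f"exact    {norm.cdf(-beta):.2e}")
    print(f"crude MC {p_mc:.2e}")
    print(f"IS       {p_is:.2e}")
    ```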

  11. Sensitivity analysis of the critical speed in railway vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Bigoni, D.; True, H.; Engsig-Karup, A. P.

    2014-05-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
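
    Variance-based total sensitivity indices of the kind used here can be estimated with the standard pick-freeze (Saltelli/Jansen) estimators; the sketch below applies them to the Ishigami test function as a stand-in, since the actual critical-speed computation requires a multibody railway-vehicle solver.

    ```python
    # Sketch: first-order and total Sobol' indices via pick-freeze estimators,
    # applied to the Ishigami test function (a stand-in for the vehicle model).
    import numpy as np

    def ishigami(X, a=7.0, b=0.1):
        return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
                + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

    rng = np.random.default_rng(5)
    N, d = 50_000, 3
    A = rng.uniform(-np.pi, np.pi, (N, d))
    B = rng.uniform(-np.pi, np.pi, (N, d))
    fA, fB = ishigami(A), ishigami(B)
    V = np.var(np.concatenate([fA, fB]))

    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                          # "pick-freeze": column i from B
        fABi = ishigami(ABi)
        S_i = np.mean(fB * (fABi - fA)) / V          # first-order index (Saltelli)
        ST_i = 0.5 * np.mean((fA - fABi) ** 2) / V   # total-effect index (Jansen)
        print(f"x{i+1}: S = {S_i:.2f}  ST = {ST_i:.2f}")
    ```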

  12. Sensitivity analysis of TOPSIS method in water quality assessment: I. Sensitivity to the parameter weights.

    PubMed

    Li, Peiyue; Qian, Hui; Wu, Jianhua; Chen, Jie

    2013-03-01

    Sensitivity analysis is becoming increasingly widespread in many fields of engineering and science and has become a necessary step to verify the feasibility and reliability of a model or a method. The sensitivity of the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method in water quality assessment mainly includes sensitivity to the parameter weights and sensitivity to the index input data. In the present study, the sensitivity of TOPSIS to the parameter weights was discussed in detail. The present study assumed the original parameter weights to be equal to each other, and then each weight was changed separately to see how the assessment results would be affected. Fourteen schemes were designed to investigate the sensitivity to the variation of each weight. The variation ranges that leave the assessment results unchanged were also derived theoretically. The results show that the final assessment results change when the weights are increased or decreased by 20 to 50 %. Different samples respond differently to the variation of a given weight, and a given sample responds differently to the variation of different weights. The final assessment results remain relatively stable when a given weight is perturbed, as long as the initial variation ratios meet one of the eight derived requirements. PMID:22752962
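
    A minimal TOPSIS implementation makes the weight-perturbation experiment easy to reproduce in spirit; the decision matrix, criteria directions and perturbation sizes below are illustrative assumptions, not the water-quality data or the fourteen schemes of the paper.

    ```python
    # Minimal TOPSIS sketch with a weight-perturbation check (illustrative data;
    # not the paper's water-quality dataset or weighting scheme).
    import numpy as np

    def topsis(X, w, benefit):
        """Closeness coefficients for decision matrix X (samples x criteria)."""
        R = X / np.linalg.norm(X, axis=0)            # vector-normalise columns
        V = R * w                                    # weighted normalised matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus  = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti,  axis=1)
        return d_minus / (d_plus + d_minus)

    X = np.array([[0.8, 45.0, 1.2],                  # three water samples, three indices
                  [0.3, 60.0, 0.7],
                  [0.5, 30.0, 2.0]])
    benefit = np.array([False, False, False])        # lower concentrations are better
    w0 = np.array([1 / 3, 1 / 3, 1 / 3])

    base_rank = np.argsort(-topsis(X, w0, benefit))
    for pct in (0.2, 0.5):                           # perturb the first weight
        w = w0.copy()
        w[0] *= 1 + pct
        w /= w.sum()
        rank = np.argsort(-topsis(X, w, benefit))
        same = np.array_equal(rank, base_rank)
        print(f"+{pct:.0%} on w1: ranking {'unchanged' if same else 'changed'}")
    ```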

  13. [Biomechanical analysis of different ProDisc-C arthroplasty design parameters after implanted: a numerical sensitivity study based on finite element method].

    PubMed

    Tang, Qiaohong; Mo, Zhongjun; Yao, Jie; Li, Qi; Du, Chenfei; Wang, Lizhen; Fan, Yubo

    2014-12-01

    This study aimed to estimate the effect of different ProDisc-C arthroplasty designs after implantation into the C5-C6 cervical spine. A finite element (FE) model of the intact C5-C6 segments, including the vertebrae and disc, was developed and validated. A ball-and-socket artificial disc prosthesis model (ProDisc-C, Synthes) was implanted into the validated FE model, and the curvature of the ProDisc-C prosthesis was varied. All models were loaded with a compressive force of 74 N and a pure moment of 1.8 Nm in flexion-extension, bilateral bending, and axial torsion separately. The results indicate that variation in the curvature of the ball-and-socket configuration influences the range of motion in flexion/extension, while there were no apparent differences under the other loading conditions. Increasing the curvature alleviates stress concentration in the polyethylene, but it also brings adverse outcomes, such as increased facet joint force and ligament tension. Therefore, the design of artificial discs should be considered comprehensively, to preserve the range of motion as well as to avoid these adverse effects, so as not to compromise long-term clinical results. PMID:25868242

  14. Diagnostic Analysis of Middle Atmosphere Climate Sensitivity

    NASA Astrophysics Data System (ADS)

    Zhu, X.; Cai, M.; Swartz, W. H.; Coy, L.; Yee, J.; Talaat, E. R.

    2013-12-01

    Both the middle atmosphere climate sensitivity associated with the cooling trend and its uncertainty due to a complex system of drivers increase with altitude. Furthermore, the combined effect of middle atmosphere cooling due to long-lived greenhouse gases and ozone is also associated with natural climate variations due to solar activity. To understand and predict climate change from a global perspective, we use the recently developed climate feedback-response analysis method (CFRAM) to identify and isolate the signals from the external forcing and from different feedback processes in the middle atmosphere climate system. By use of the JHU/APL middle atmosphere radiation algorithm, the CFRAM is applied to the model output fields of the high-altitude GEOS-5 climate model in the middle atmosphere to delineate the individual contributions of radiative forcing to middle atmosphere climate sensitivity.

  15. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed, based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which approximation is sought.
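
    For non-hermitian matrices, the standard first-order eigenvalue derivative combines left and right eigenvectors; the sketch below uses a made-up parameter-dependent 3x3 matrix (not from the paper) to check d(lambda)/dp = y^H (dA/dp) x / (y^H x) against a finite difference.

    ```python
    # Sketch of first-order eigenvalue sensitivity for a non-Hermitian matrix:
    # d(lambda)/dp = y^H (dA/dp) x / (y^H x), with x, y right and left eigenvectors.
    # The 3x3 parameter-dependent matrix is made up for illustration.
    import numpy as np
    from scipy.linalg import eig, eigvals

    def A(p):
        return np.array([[2.0 + p, 1.0,      0.0],
                         [0.3,     1.0,      2.0 * p],
                         [0.0,     0.5 * p, -1.0]])

    def dA_dp(p):
        # derivative of A with respect to p (constant here)
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, 2.0],
                         [0.0, 0.5, 0.0]])

    p0, k = 0.7, 0                       # track the first eigenvalue returned
    w, vl, vr = eig(A(p0), left=True, right=True)
    x, y = vr[:, k], vl[:, k]
    dlam = (y.conj() @ dA_dp(p0) @ x) / (y.conj() @ x)

    h = 1e-6                             # finite-difference check
    w_p = eigvals(A(p0 + h))
    fd = (w_p[np.argmin(abs(w_p - w[k]))] - w[k]) / h
    print("analytic", dlam, " finite difference", fd)
    ```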

  16. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear-quadratic-Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state-space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing-bending natural frequency.

  17. The Theoretical Foundation of Sensitivity Analysis for GPS

    NASA Astrophysics Data System (ADS)

    Shikoska, U.; Davchev, D.; Shikoski, J.

    2008-10-01

    In this paper the equations of sensitivity analysis are derived and the theoretical underpinnings for the analyses are established. The paper presents land-vehicle navigation concepts and a definition of sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis to the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
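
    To make the linear Kalman filter setting concrete, the sketch below runs a textbook constant-velocity filter and evaluates, by finite differences, the sensitivity of the final position estimate to the assumed measurement-noise variance. The model and numbers are illustrative assumptions rather than the paper's GPS/land-vehicle formulation.

    ```python
    # Minimal sketch: a linear Kalman filter (constant-velocity model) and a
    # finite-difference sensitivity of the final position estimate with respect
    # to the assumed measurement-noise variance R. Illustrative assumptions only.
    import numpy as np

    def run_filter(z, R, q=0.01, dt=1.0):
        F = np.array([[1.0, dt], [0.0, 1.0]])        # state transition
        H = np.array([[1.0, 0.0]])                   # position-only measurement
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = np.zeros(2)
        P = np.eye(2)
        for zk in z:
            x = F @ x                                # predict
            P = F @ P @ F.T + Q
            S = H @ P @ H.T + R                      # innovation covariance
            K = P @ H.T / S                          # Kalman gain
            x = x + (K * (zk - H @ x)).ravel()       # update
            P = (np.eye(2) - K @ H) @ P
        return x[0]                                  # final position estimate

    rng = np.random.default_rng(1)
    truth = 0.5 * np.arange(50)                      # true positions
    z = truth + rng.normal(scale=1.0, size=50)       # noisy measurements

    R0, dR = 1.0, 1e-4
    sens = (run_filter(z, R0 + dR) - run_filter(z, R0 - dR)) / (2 * dR)
    print("d(position estimate)/dR ~", sens)
    ```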

  18. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. To reduce helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and the aeroelastic stability constraints. For this, the derivatives of the steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, the design sensitivity analysis and the constrained optimization code CONMIN.

  19. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this has enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094

  20. High Sensitivity MEMS Strain Sensor: Design and Simulation

    PubMed Central

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2008-01-01

    In this article, we report on the new design of a miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single crystal silicon. Employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs different levels of signal amplification: geometric, material and electronic. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to a resistance change, which can be transformed to a bridge imbalance voltage. An analog output that demonstrates high sensitivity (0.03 mV/με), high absolute resolution (1 με) and low power consumption (100 μA) with a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50°C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing applications under harsh environmental conditions. Moreover, this sensor has been designed and verified, and can be easily modified to measure other quantities such as force and torque. In this work, the sensor design is achieved using the Finite Element Method (FEM) with the application of piezoresistivity theory. The design process and the microfabrication process flow to prototype the design are presented.

  1. A comparison of two sampling methods for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tarantola, Stefano; Becker, William; Zeitz, Dirk

    2012-05-01

    We compare the convergence properties of two different quasi-random sampling designs - Sobol' quasi-Monte Carlo and Latin supercube sampling - in variance-based global sensitivity analysis. We use the non-monotonic V-function of Sobol' as the base case study and compare the performance of both sampling strategies at increasing sample size and dimensionality against analytical values. The results indicate that in almost all cases investigated here, the Sobol' design performs better. This, coupled with the fact that effective Latin supercube sampling requires a priori knowledge of the interaction properties of the function, leads us to recommend Sobol' sampling in most practical cases.
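
    A comparison in this spirit can be reproduced in a few lines of SciPy; the sketch below contrasts scrambled Sobol' quasi-Monte Carlo with pseudo-random sampling for estimating the mean of Sobol's g-function (exact value 1). The coefficients and sample sizes are illustrative assumptions, and Latin supercube sampling is not included.

    ```python
    # Sketch: convergence of the estimated mean of Sobol's g-function
    # (exact mean = 1) under Sobol' quasi-Monte Carlo vs. pseudo-random sampling.
    # Coefficients a_i are illustrative; uses scipy.stats.qmc.
    import numpy as np
    from scipy.stats import qmc

    a = np.array([0.0, 1.0, 4.5, 9.0, 99.0, 99.0])   # importance coefficients

    def g(X):
        return np.prod((np.abs(4.0 * X - 2.0) + a) / (1.0 + a), axis=1)

    rng = np.random.default_rng(0)
    for m in (8, 10, 12):                            # 2**m points
        n = 2 ** m
        X_qmc = qmc.Sobol(d=len(a), scramble=True, seed=0).random_base2(m)
        X_prs = rng.random((n, len(a)))
        print(f"n = {n:5d}  |QMC error| = {abs(g(X_qmc).mean() - 1):.1e}"
              f"  |PRS error| = {abs(g(X_prs).mean() - 1):.1e}")
    ```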

  2. Design and Synthesis of a Calcium-Sensitive Photocage.

    PubMed

    Heckman, Laurel M; Grimm, Jonathan B; Schreiter, Eric R; Kim, Charles; Verdecia, Mark A; Shields, Brenda C; Lavis, Luke D

    2016-07-11

    Photolabile protecting groups (or "photocages") enable precise spatiotemporal control of chemical functionality and facilitate advanced biological experiments. Extant photocages exhibit a simple input-output relationship, however, where application of light elicits a photochemical reaction irrespective of the environment. Herein, we refine and extend the concept of photolabile groups, synthesizing the first Ca(2+)-sensitive photocage. This system functions as a chemical coincidence detector, releasing small molecules only in the presence of both light and elevated [Ca(2+)]. Caging a fluorophore with this ion-sensitive moiety yields an "ion integrator" that permanently marks cells undergoing high Ca(2+) flux during an illumination-defined time period. Our general design concept demonstrates a new class of light-sensitive material for cellular imaging, sensing, and targeted molecular delivery. PMID:27218487

  3. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, along with a summary and physics discussion of the EFT-1 driving factors that the tool identified.

  4. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be dealt with by using a single GP, although it remains an open problem how to manage bifurcation boundaries that are not parallel to coordinate axes.
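
    The partition-and-fit idea can be illustrated with scikit-learn: a single stationary GP smooths across a jump in the response, while separate GPs on either side of the split capture it. The sketch below fixes the split location by hand (a real treed GP learns the partition from the data) and uses a made-up discontinuous test function rather than the Duffing or heart-valve models of the paper.

    ```python
    # Simplified illustration of the partition-and-fit idea behind treed GPs:
    # compare one stationary GP over the whole domain with two GPs fitted on
    # either side of a known, axis-aligned split. Test function is made up.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(6)
    X = np.sort(rng.uniform(0.0, 1.0, 80))[:, None]
    y = np.where(X[:, 0] < 0.5, 0.2 * np.sin(6 * X[:, 0]),
                 1.5 + 0.2 * np.sin(6 * X[:, 0]))
    y += rng.normal(scale=0.02, size=y.shape)

    Xt = np.linspace(0.0, 1.0, 400)[:, None]
    truth = np.where(Xt[:, 0] < 0.5, 0.2 * np.sin(6 * Xt[:, 0]),
                     1.5 + 0.2 * np.sin(6 * Xt[:, 0]))

    # One stationary GP over the whole domain
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(X, y)
    err_single = np.max(np.abs(gp.predict(Xt) - truth))

    # "Treed" variant with a fixed split at x = 0.5
    mask, maskt = X[:, 0] < 0.5, Xt[:, 0] < 0.5
    pred = np.empty(len(Xt))
    for m, mt in ((mask, maskt), (~mask, ~maskt)):
        g = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(X[m], y[m])
        pred[mt] = g.predict(Xt[mt])
    err_treed = np.max(np.abs(pred - truth))

    print(f"max error, single GP: {err_single:.3f}   partitioned GPs: {err_treed:.3f}")
    ```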

  5. A Post-Monte-Carlo Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance, and also quantifies the relative importance among the sensitive variables.

  6. Multiplexed analysis of chromosome conformation at vastly improved sensitivity

    PubMed Central

    Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.

    2015-01-01

    Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput, and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method, producing a new approach called next generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209

  7. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect,'' the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory,'' a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.

  8. Sensitivity Analysis Of Technological And Material Parameters In Roll Forming

    NASA Astrophysics Data System (ADS)

    Gehring, Albrecht; Saal, Helmut

    2007-05-01

    Roll forming has been applied for several decades to manufacture thin-gauge profiles. However, knowledge about this technology is still based on empirical approaches. Due to the complexity of the forming process, the main effects on profile properties are difficult to identify. This is especially true for the interaction of technological parameters and material parameters. General considerations for building a finite-element model of the roll forming process are given in this paper. A sensitivity analysis is performed on the basis of a statistical design approach in order to identify the effects and interactions of different parameters on profile properties. The parameters included in the analysis are the roll diameter, the rolling speed, the sheet thickness, friction between the tools and the sheet, and the strain hardening behavior of the sheet material. The analysis includes an isotropic hardening model and a nonlinear kinematic hardening model. All jobs are executed in parallel to reduce the overall time, as the sensitivity analysis requires considerable CPU time. The results of the sensitivity analysis demonstrate the opportunities to improve the properties of roll formed profiles by adjusting technological and material parameters to their optimum interacting performance.

  9. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  10. Stochastic Simulations and Sensitivity Analysis of Plasma Flow

    SciTech Connect

    Lin, Guang; Karniadakis, George E.

    2008-08-01

    For complex physical systems with a large number of random inputs, it is very expensive to perform stochastic simulations for all of the random inputs. Stochastic sensitivity analysis is introduced in this paper to rank the significance of random inputs, providing information on which random inputs have more influence on the system outputs and on the coupling or interaction effects among different random inputs. There are two types of numerical methods in stochastic sensitivity analysis: local and global methods. The local approach, which relies on a partial derivative of output with respect to parameters, is used to measure the sensitivity around a local operating point. When the system has strong nonlinearities and parameters fluctuate within a wide range from their nominal values, the local sensitivity does not provide full information to the system operators. On the other hand, the global approach examines the sensitivity over the entire range of the parameter variations. Global screening methods, based on One-At-a-Time (OAT) perturbation of parameters, rank the significant parameters and identify their interactions among a large number of parameters. Several screening methods have been proposed in the literature, e.g., the Morris method, Cotter's method, factorial experimentation, and iterated fractional factorial design. In this paper, the Morris method, the Monte Carlo sampling method, the Quasi-Monte Carlo method and the collocation method based on sparse grids are studied. Additionally, two MHD examples are presented to demonstrate the capability and efficiency of the stochastic sensitivity analysis, which can be used as a pre-screening technique for reducing the dimensionality, and hence the cost, of stochastic simulations.
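
    A minimal, plain-NumPy version of Morris (OAT) screening, computing elementary effects and the usual mu*/sigma summaries for a toy function, is sketched below; it is illustrative only, since the paper applies such screening to MHD simulations.

    ```python
    # Minimal Morris (one-at-a-time) screening sketch: elementary effects,
    # mu* (mean absolute effect) and sigma, for a toy test function.
    import numpy as np

    def model(x):                         # toy function: x2 matters most, x3 not at all
        return x[0] + 5.0 * x[1] ** 2 + 0.0 * x[2] + x[0] * x[1]

    k, r, delta = 3, 50, 0.25             # inputs, trajectories, step size
    rng = np.random.default_rng(2)
    effects = np.zeros((r, k))

    for t in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point of the trajectory
        y0 = model(x)
        for i in rng.permutation(k):                # perturb one input at a time
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            effects[t, i] = (y1 - y0) / delta       # elementary effect of input i
            x, y0 = x_new, y1                       # walk along the trajectory

    mu_star = np.abs(effects).mean(axis=0)
    sigma = effects.std(axis=0)
    for i in range(k):
        print(f"x{i+1}: mu* = {mu_star[i]:.2f}  sigma = {sigma[i]:.2f}")
    ```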

  11. Helicopter Design Analysis

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The design of military and civil helicopters produced by Bell Helicopter Textron is aided by the use of COSMIC's computer program VASP, enabling more accurate analyses to ensure product safety and improve production efficiency.

  12. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can be easily adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.

  13. Multicomponent dynamical nucleation theory and sensitivity analysis.

    PubMed

    Kathmann, Shawn M; Schenter, Gregory K; Garrett, Bruce C

    2004-05-15

    Vapor to liquid multicomponent nucleation is a dynamical process governed by a delicate interplay between condensation and evaporation. Since the population of the vapor phase is dominated by monomers at reasonable supersaturations, the formation of clusters is governed by monomer association and dissociation reactions. Although there is no intrinsic barrier in the interaction potential along the minimum energy path for the association process, the formation of a cluster is impeded by a free energy barrier. Dynamical nucleation theory provides a framework in which equilibrium evaporation rate constants can be calculated and the corresponding condensation rate constants determined from detailed balance. The nucleation rate can then be obtained by solving the kinetic equations. The rate constants governing the multistep kinetics of multicomponent nucleation including sensitivity analysis and the potential influence of contaminants will be presented and discussed. PMID:15267849

  14. Sensitivity analysis of periodic matrix population models.

    PubMed

    Caswell, Hal; Shyu, Esther

    2012-12-01

    Periodic matrix models are frequently used to describe cyclic temporal variation (seasonal or interannual) and to account for the operation of multiple processes (e.g., demography and dispersal) within a single projection interval. In either case, the models take the form of periodic matrix products. The perturbation analysis of periodic models must trace the effects of parameter changes, at each phase of the cycle, on output variables that are calculated over the entire cycle. Here, we apply matrix calculus to obtain the sensitivity and elasticity of scalar-, vector-, or matrix-valued output variables. We apply the method to linear models for periodic environments (including seasonal harvest models), to vec-permutation models in which individuals are classified by multiple criteria, and to nonlinear models including both immediate and delayed density dependence. The results can be used to evaluate management strategies and to study selection gradients in periodic environments. PMID:23316494
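
    The simplest scalar case of this machinery, the sensitivity of the dominant eigenvalue of a periodic matrix product to the entries of one seasonal matrix, can be sketched directly from the eigenvector formula and the chain rule. The two seasonal matrices below are made up, and the paper's full treatment (vector- and matrix-valued outputs, density dependence) is not attempted.

    ```python
    # Sketch: sensitivity of the annual growth rate (dominant eigenvalue of the
    # periodic product A = B2 @ B1) to the entries of the seasonal matrix B1,
    # using S_A[i,j] = v_i w_j / <v, w> and the chain rule S_B1 = B2^T S_A.
    # The seasonal matrices are illustrative.
    import numpy as np

    B1 = np.array([[0.0, 1.5], [0.6, 0.8]])   # season 1 (made-up stage matrix)
    B2 = np.array([[0.2, 2.0], [0.4, 0.9]])   # season 2
    A = B2 @ B1                               # annual projection matrix

    lam, W = np.linalg.eig(A)
    k = np.argmax(lam.real)
    w = W[:, k].real                          # right eigenvector (stable structure)
    lamT, V = np.linalg.eig(A.T)
    v = V[:, np.argmax(lamT.real)].real       # left eigenvector (reproductive value)

    S_A = np.outer(v, w) / (v @ w)            # d(lambda)/dA
    S_B1 = B2.T @ S_A                         # chain rule for A = B2 @ B1

    h, (i, j) = 1e-6, (1, 0)                  # finite-difference check on one entry
    B1p = B1.copy()
    B1p[i, j] += h
    fd = (np.max(np.linalg.eig(B2 @ B1p)[0].real) - lam[k].real) / h
    print("analytic", S_B1[i, j], " finite difference", fd)
    ```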

  15. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.

  16. Design and performance of a positron-sensitive surgical probe

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    We report the design and performance of a portable positron-sensitive surgical imaging probe. The probe is designed to be sensitive to positrons and capable of rejecting background gammas, including 511 keV. The probe consists of a multi-anode PMT and an 8 x 8 array of thin 2 mm x 2 mm plastic scintillators coupled 1:1 to GSO crystals. The probe uses three selection criteria to identify positrons. An energy threshold on the plastic signals reduces the false positron signals in the plastic due to background gammas; a second energy threshold on the PMT sum signal greatly reduces background gammas in the GSO. Finally, a timing window accepts only 511 keV gammas from the GSO that arrive within 15 ns of the plastic signals, reducing accidental coincidences to a negligible level. The first application being investigated is sentinel lymph node (SLN) surgery, to identify in real time the location of SLNs in the axilla with high 18F-FDG uptake, which may indicate metastasis. Our simulations and measurements show that the probe's pixel separation ability in terms of peak-to-valley ratio is ~3.5. The performance measurements also show that the 64-pixel probe has a sensitivity of 4.7 kcps/μCi using optimal signal selection criteria. For example, it is able to detect in 10 seconds a ~4 mm lesion with a true-to-background ratio of ~3 at a tumor uptake ratio of ~8:1. The signal selection criteria can be fine-tuned either for higher sensitivity or for higher image contrast.

  17. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are a part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS) and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total effect sensitivity indices. The results of the present

  18. Multitarget global sensitivity analysis of n-butanol combustion.

    PubMed

    Zhou, Dingyu D Y; Davis, Michael J; Skodje, Rex T

    2013-05-01

    A model for the combustion of butanol is studied using a recently developed theoretical method for the systematic improvement of the kinetic mechanism. The butanol mechanism includes 1446 reactions, and we demonstrate that it is straightforward and computationally feasible to implement a full global sensitivity analysis incorporating all the reactions. In addition, we extend our previous analysis of ignition-delay targets to include species targets. The combination of species and ignition targets leads to multitarget global sensitivity analysis, which allows for a more complete mechanism validation procedure than we previously implemented. The inclusion of species sensitivity analysis allows for a direct comparison between reaction pathway analysis and global sensitivity analysis. PMID:23530815

  19. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs which control the blood flow to a significant extent. The paper is part of work on the identification of parameters controlling the clotting process.

  20. Sensitivity analysis of discrete structural systems: A survey

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1984-01-01

    Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.

  1. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, are also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
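
    As a small, self-contained piece of this chain, the sensitivity of the NACA four-digit thickness distribution to the maximum-thickness parameter is analytic and easy to verify; the sketch below performs only this surface-parameterization step, not the grid or aerodynamic sensitivity propagation described in the paper.

    ```python
    # Sketch: analytic sensitivity of the NACA four-digit thickness distribution
    # y_t(x) = 5 t (0.2969 sqrt(x) - 0.1260 x - 0.3516 x^2 + 0.2843 x^3 - 0.1015 x^4)
    # with respect to the maximum-thickness parameter t, checked against finite
    # differences. Surface-parameterization step only.
    import numpy as np

    def y_t(x, t):
        return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                          + 0.2843 * x**3 - 0.1015 * x**4)

    def dy_dt(x, t):
        return y_t(x, t) / t            # y_t is linear in t

    x = np.linspace(0.0, 1.0, 11)
    t0, h = 0.12, 1e-6                  # NACA 0012 thickness ratio
    fd = (y_t(x, t0 + h) - y_t(x, t0 - h)) / (2 * h)
    print("max |analytic - finite difference| =", np.max(np.abs(dy_dt(x, t0) - fd)))
    ```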

  2. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819

  3. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. To do this, my duties were to take historical data on operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From here, my final duty was to vary the wear-out characteristic from its intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
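
    The kind of Monte Carlo exercise described can be sketched as follows: sample component lifetimes from a Weibull distribution, count failures across a population over a mission window, and watch how the probability that a fixed spare count suffices shifts as the wear-out (shape) parameter grows. All numbers are made up, and the sketch ignores repair and renewal.

    ```python
    # Sketch of the described exercise: Weibull lifetimes, failures counted over
    # a mission window, and probability of spares sufficiency as a function of
    # the wear-out (shape) parameter. Illustrative numbers; no repair/renewal.
    import numpy as np

    rng = np.random.default_rng(3)
    n_units, mission_years, spares = 20, 10.0, 6
    scale = 8.0                                     # characteristic life (years)

    def prob_sufficient(shape, trials=20_000):
        # lifetimes ~ Weibull(shape), scaled to the characteristic life
        lifetimes = scale * rng.weibull(shape, size=(trials, n_units))
        failures = (lifetimes < mission_years).sum(axis=1)
        return np.mean(failures <= spares)

    for shape in (1.0, 2.0, 4.0):                   # 1.0 = no wear-out (exponential)
        print(f"shape {shape:.1f}: P(spares sufficient) = {prob_sufficient(shape):.3f}")
    ```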

  4. Designing and Building to ``Impossible'' Tolerances for Vibration Sensitive Equipment

    NASA Astrophysics Data System (ADS)

    Hertlein, Bernard H.

    2003-03-01

    As the precision and production capabilities of modern machines and factories increase, our expectations of them rise commensurately. Facility designers and engineers find themselves increasingly involved with measurement needs and design tolerances that were almost unthinkable a few years ago. An area of expertise that demonstrates this very clearly is the field of vibration measurement and control. Magnetic resonance imaging, semiconductor manufacturing, micro-machining, surgical microscopes - these are just a few examples of equipment or techniques that need an extremely stable vibration environment. The challenge to architects, engineers and contractors is to provide that level of stability without undue cost or sacrificing the aesthetics and practicality of a structure. In addition, many facilities have run out of expansion room, so the design is often hampered by the need to reuse all or part of an existing structure, or to site vibration-sensitive equipment close to an existing vibration source. High-resolution measurements and nondestructive testing techniques have proven to be invaluable additions to the engineer's toolbox in meeting these challenges. The author summarizes developments in this field over the last fifteen years or so, and lists some common errors of design and construction that can cost a lot of money to retrofit if missed, but can easily be avoided with a little foresight, an appropriate testing program and a carefully thought-out checklist.

  5. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity for hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette, trailing line vortex flow and compressible Blasius boundary layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs and may be associated with large transient growth of the corresponding initial value problem.
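
    The epsilon-pseudospectrum concept mentioned above can be probed numerically through the smallest singular value of zI - A over a grid of complex points z; the sketch below does this for a small, made-up non-normal matrix rather than the discretized stability operators analysed in the paper.

    ```python
    # Sketch: epsilon-pseudospectrum indicator s_min(zI - A) for a small
    # non-normal matrix. Points z with s_min < eps lie in the eps-pseudospectrum,
    # even if they are far from any eigenvalue. The matrix is illustrative.
    import numpy as np

    A = np.array([[-1.0, 50.0,  0.0],
                  [ 0.0, -2.0, 50.0],
                  [ 0.0,  0.0, -3.0]])      # strongly non-normal (large off-diagonals)

    eigs = np.linalg.eigvals(A)
    xs = np.linspace(-6.0, 4.0, 60)
    ys = np.linspace(-5.0, 5.0, 60)
    smin = np.array([[np.linalg.svd((x + 1j * y) * np.eye(3) - A,
                                    compute_uv=False)[-1]
                      for x in xs] for y in ys])

    eps = 1e-2
    inside = smin < eps                     # grid points inside the eps-pseudospectrum
    print("eigenvalues:", np.round(eigs, 2))
    print(f"fraction of grid inside the {eps:g}-pseudospectrum: {inside.mean():.2f}")
    ```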

  6. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well- suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  7. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.

    1999-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid CFD in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  8. Integrating ethics in design through the value-sensitive design approach.

    PubMed

    Cummings, Mary L

    2006-10-01

    The Accreditation Board of Engineering and Technology (ABET) has declared that to achieve accredited status, 'engineering programs must demonstrate that their graduates have an understanding of professional and ethical responsibility.' Many engineering professors struggle to integrate this required ethics instruction in technical classes and projects because of the lack of a formalized ethics-in-design approach. However, one methodology developed in human-computer interaction research, the Value-Sensitive Design approach, can serve as an engineering education tool which bridges the gap between design and ethics for many engineering disciplines. The three major components of Value-Sensitive Design - conceptual, technical, and empirical - are exemplified through a case study which focuses on the development of a command and control supervisory interface for a military cruise missile. PMID:17199145

  9. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080
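
    A minimal sample-entropy (SampEn) implementation shows the quantity being tuned: a regular signal scores lower than a noisy one. The template length m, tolerance r, and test signals below are illustrative choices, not those used for the DASim schedules.

    ```python
    # Minimal sample-entropy (SampEn) sketch: count template matches of length m
    # and m+1 within a Chebyshev tolerance r, excluding self-matches, and take
    # SampEn = -ln(A/B). Parameters and signals are illustrative.
    import numpy as np

    def sampen(x, m=2, r=0.2):
        x = np.asarray(x, dtype=float)
        r *= x.std()                                   # tolerance relative to signal scale
        N = len(x)
        def count(mm):
            # templates of length mm, one starting at each of the first N - m points
            T = np.array([x[i:i + mm] for i in range(N - m)])
            d = np.max(np.abs(T[:, None, :] - T[None, :, :]), axis=2)
            return ((d <= r).sum() - len(T)) / 2       # matched pairs, no self-matches
        B, A = count(m), count(m + 1)
        return -np.log(A / B)

    rng = np.random.default_rng(4)
    t = np.arange(500)
    regular = np.sin(2 * np.pi * t / 25)               # highly regular signal
    noisy = regular + rng.normal(scale=0.5, size=t.size)
    print("SampEn regular:", round(sampen(regular), 3))
    print("SampEn noisy:  ", round(sampen(noisy), 3))
    ```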

  10. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  11. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGESBeta

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  12. Sensitive detection of human insulin using a designed combined pore approach.

    PubMed

    Lei, Chang; Noonan, Owen; Jambhrunkar, Siddharth; Qian, Kun; Xu, Chun; Zhang, Jun; Nouwens, Amanda; Yu, Chengzhong

    2014-06-25

    A unique combined pore approach to the sensitive detection of human insulin is developed. Through a systematic study to understand the impact of pore size and surface chemistry of nanoporous materials on their enrichment and purification performance, the advantages of selected porous materials are integrated to enhance detection sensitivity in a unified two-step process. In the first purification step, a rationally designed large pore material (ca. 100 nm in diameter) is chosen to repel the interferences from nontarget molecules. In the second enrichment step, a hydrophobically modified mesoporous material with a pore size of 5 nm is selected to enrich insulin molecules. A low detection limit of 0.05 ng mL(-1) in artificial urine is achieved by this advanced approach, similar to most antibody-based analysis protocols. This designer approach is efficient and low cost, and thus has great potential in the sensitive detection of biomolecules in complex biological systems. PMID:24599559

  13. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, model-based applications, and project results have been published, based on a variety of approaches, parametrizations, and calibrations. Digital Elevation Models (DEM) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question is derived from simply demonstrating the differences in release risk areas and intensities by applying identical models to DEMs with different properties, and then extending this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges, probabilities, as well as fuzzy expressions and fractal metrics. As a specific approach, work on DEM resolution-dependent 'slope spectra' is considered and linked with the application of geomorphometry-based risk assessment. For the purpose of this study focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of
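
    A minimal numpy sketch of the kind of DEM-resolution sensitivity experiment described above; the synthetic terrain and the 30-degree slope criterion are stand-ins for a real DEM tile and an avalanche-release model.

      import numpy as np

      def slope_deg(dem, cell):
          """Slope in degrees for a DEM array with square cells of size `cell` metres."""
          dzdy, dzdx = np.gradient(dem, cell)
          return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

      def coarsen(dem, factor):
          """Block-average a DEM to a coarser grid (simple stand-in for resampling)."""
          r = (dem.shape[0] // factor) * factor
          c = (dem.shape[1] // factor) * factor
          return dem[:r, :c].reshape(r // factor, factor, c // factor, factor).mean(axis=(1, 3))

      rng = np.random.default_rng(1)
      x, y = np.meshgrid(np.linspace(0, 5000, 500), np.linspace(0, 5000, 500))
      dem = 1500 * np.exp(-((x - 2500) ** 2 + (y - 2500) ** 2) / 2e6) + rng.normal(0, 2, x.shape)

      for factor, cell in [(1, 10.0), (5, 50.0), (10, 100.0)]:
          grid = dem if factor == 1 else coarsen(dem, factor)
          s = slope_deg(grid, cell)
          print(f"cell {cell:5.0f} m: mean slope {s.mean():5.2f} deg, "
                f"share of cells steeper than 30 deg {np.mean(s > 30):.3f}")

    Coarsening smooths out noise-driven steep cells, so the share of "release-prone" slopes changes with resolution even though the underlying terrain is identical.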

  14. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures while providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of the complications. Control of bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run length analysis was used to extract textural information of the trabeculae. At the initial stage of the project, four indices were used to represent the textural information: mean width of intertrabecular space, mean width of trabeculae, area of intertrabecular space, and area of trabeculae. Finally, the area of intertrabecular space was selected as a parameter to estimate an optimal bone cement volume and it was found that there was a strong linear relationship between these 2 variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting overall procedures. The threshold level, the radius of the rolling ball and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2
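
    The core numerical step, fitting a line between intertrabecular-space area and cement volume and reporting the correlation coefficient, can be sketched as follows; the data and coefficients here are invented and are not the regression reported above.

      import numpy as np

      # Invented data standing in for (intertrabecular-space area X, injected cement volume Y);
      # the coefficients below are arbitrary, NOT the study's regression.
      rng = np.random.default_rng(0)
      X = rng.uniform(2000, 6000, 30)
      Y = 0.002 * X - 1.5 + rng.normal(0.0, 0.2, X.size)

      slope, intercept = np.polyfit(X, Y, 1)      # least-squares line Y = slope*X + intercept
      r = np.corrcoef(X, Y)[0, 1]                 # Pearson correlation coefficient
      print(f"Y = {slope:.6f} * X + {intercept:.3f}   (r = {r:.4f})")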

  15. SSTO vs TSTO design considerations—an assessment of the overall performance, design considerations, technologies, costs, and sensitivities of SSTO and TSTO designs using modern technologies

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.

    1996-03-01

    It is generally believed by those skilled in launch system design that Single-Stage-To-Orbit (SSTO) designs are more technically challenging, more performance sensitive, and yield larger lift-off weights than do Two-Stage-To-Orbit designs (TSTO's) offering similar payload delivery capability. Without additional insight into the other considerations which drive the development, recurring costs, operability, and reliability of a launch fleet, an analyst may easily conclude that the higher-performing, less sensitive TSTO designs yield a better solution for achieving low-cost payload delivery. This limited insight could justify an argument to eliminate the X-33 SSTO technology/demonstration development effort, and thus proceed directly to less risky TSTO designs. Insight into real world design considerations of launch vehicles makes the choice of SSTO vs TSTO much less clear. The presentation addresses a more comprehensive evaluation of the general class of SSTO and TSTO concepts. These include pure SSTO's, augmented SSTO's, Siamese Twin, and Pure TSTO designs. The assessment considers vehicle performance and scaling relationships which characterize real vehicle designs. The assessment also addresses technology requirements, operations and supportability, cost implications, and sensitivities. Results of the assessment indicate that the trade space between various SSTO and TSTO design approaches is complex and not yet fully understood. The results of the X-33 technology demonstrators, as well as additional parametric analysis, are required to better define the relative performance and costs of the various design approaches. The results also indicate that with modern technologies and today's better understanding of vehicle design considerations, the perception that SSTO's are dramatically heavier and more sensitive than TSTO designs is more of a myth than reality.

  16. [Ecological sensitivity of Shanghai City based on GIS spatial analysis].

    PubMed

    Cao, Jian-jun; Liu, Yong-juan

    2010-07-01

    In this paper, five sensitivity factors affecting the eco-environment of Shanghai City, i.e., rivers and lakes, historical relics and forest parks, geological disasters, soil pollution, and land use, were selected, and their weights were determined by analytic hierarchy process. Combining with GIS spatial analysis technique, the sensitivities of these factors were classified into four grades, i.e., highly sensitive, moderately sensitive, low sensitive, and insensitive, and the spatial distribution of the ecological sensitivity of Shanghai City was figured out. There existed a significant spatial differentiation in the ecological sensitivity of the City, and the insensitive, low sensitive, moderately sensitive, and highly sensitive areas occupied 37.07%, 5.94%, 38.16%, and 18.83%, respectively. Some suggestions on the City's zoning protection and construction were proposed. This study could provide scientific references for the City's environmental protection and economic development. PMID:20879541
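
    A hedged sketch of the weighted-overlay step described above: factor rasters scored 1-4 are combined with AHP-style weights and reclassified into four grades. The weights, raster values and class breaks are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(42)
      factors = rng.integers(1, 5, size=(5, 200, 200)).astype(float)  # 5 factor layers, scores 1..4
      weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])              # assumed AHP weights (sum to 1)

      composite = np.tensordot(weights, factors, axes=1)              # weighted sum, still in 1..4
      grades = np.digitize(composite, bins=[1.75, 2.5, 3.25])         # reclassify into four grades
      labels = ["insensitive", "low", "moderately", "highly"]
      for g, name in enumerate(labels):
          print(f"{name:>11} sensitive: {np.mean(grades == g) * 100:.1f} % of the study area")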

  17. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  18. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  19. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  20. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  1. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  2. Evaluation of measurement sensitivity and design improvement for time domain reflectometry penetrometers

    NASA Astrophysics Data System (ADS)

    Zhan, Tony Liang-tong; Mu, Qing-yi; Chen, Yun-min; Ke, Han

    2015-04-01

    The time domain reflectometry (TDR) penetrometer, which can measure both the apparent dielectric permittivity and the bulk electrical conductivity of soils, is an important tool for the site investigation of contaminated land. This paper presents a theoretical method for evaluating the measurement sensitivity and an improved design of the TDR penetrometer. The sensitivity evaluation method is based on a spatial weighting analysis of the electromagnetic field using seepage analysis software. This method is used to quantify the measurement sensitivity for the three types of TDR penetrometers reported in the literature, as well as to guide the design improvement of the TDR penetrometer. The improved design includes the use of semicircle-shaped conductors and the optimization of the conductor diameter. The measurement sensitivity to the targeted medium for the improved TDR penetrometer is evaluated to be greater than that of the three types of TDR penetrometers reported in the literature. The performance of the improved TDR penetrometer was demonstrated by conducting an experimental calibration of the probe and penetration tests in a chamber containing a silty soil column. The experimental results demonstrate that the measurements from the improved TDR penetrometer are able to capture the variation in the water content profiles as well as the leachate-contaminated soil.

  3. Design and characterisation of bodipy sensitizers for dye-sensitized NiO solar cells.

    PubMed

    Summers, Gareth H; Lefebvre, Jean-François; Black, Fiona A; Davies, E Stephen; Gibson, Elizabeth A; Pullerits, Tönu; Wood, Christopher J; Zidek, Karel

    2016-01-14

    A series of photosensitizers for NiO-based dye-sensitized solar cells is presented. Three model compounds containing a triphenylamine donor appended to a boron dipyrromethene (bodipy) chromophore have been successfully prepared and characterised using emission spectroscopy, electrochemistry and spectroelectrochemistry, to ultimately direct the design of dyes with more complex structures. Carboxylic acid anchoring groups and thiophene spacers were appended to the model compounds to provide five dyes which were adsorbed onto NiO and integrated into dye-sensitized solar cells. Solar cells incorporating the simple Bodipy-CO₂H dye were surprisingly promising relative to the more complex dye 4. Cell performances were improved with dyes which had increased electronic communication between the donor and acceptor, achieved by incorporating a less hindered bodipy moiety. Further increases in performances were obtained from dyes which contained a thiophene spacer. Thus, the best performance was obtained for 7 which generated a very promising photocurrent density of 5.87 mA cm(-2) and an IPCE of 53%. Spectroelectrochemistry combined with time-resolved transient absorption spectroscopy were used to determine the identity and lifetime of excited state species. Short-lived (ps) transients were recorded for 4, 5 and 7 which are consistent with previous studies. Despite a longer lived (25 ns) charge-separated state for 6/NiO, there was no increase in the photocurrent generated by the corresponding solar cell. PMID:26660278

  4. Cross Section Sensitivity and Uncertainty Analysis Including Secondary Neutron Energy and Angular Distributions.

    Energy Science and Technology Software Center (ESTSC)

    1991-03-12

    Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).
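
    The variance propagation such a tool performs follows the standard "sandwich rule" var(R) = s^T C s, where s holds the sensitivity coefficients and C is the cross-section covariance matrix; a minimal numeric sketch with invented numbers:

      import numpy as np

      s = np.array([0.8, -0.3, 0.1])                  # sensitivities of response R to 3 nuclear data
      C = np.array([[0.04, 0.01, 0.00],               # covariance matrix of those data
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.01]])

      var_R = s @ C @ s                               # sandwich rule: var(R) = s^T C s
      print(f"standard deviation of the response: {np.sqrt(var_R):.4f}")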

  5. Extended forward sensitivity analysis of one-dimensional isothermal flow

    SciTech Connect

    Johnson, M.; Zhao, H.

    2013-07-01

    Sensitivity analysis and uncertainty quantification are an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of the time step compared with other physical parameters, the simulation can be run at optimized time steps without affecting the confidence in the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace, at much lower computational cost, the traditional time step convergence studies that are a key part of code verification. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
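
    A minimal sketch of the forward sensitivity idea on a toy ODE (not the 1-D isothermal flow equations of the record): the state is augmented with its parameter sensitivity, both are integrated together, and the result is checked against the exact derivative.

      import numpy as np
      from scipy.integrate import solve_ivp

      p = 0.7

      def rhs(t, z):
          y, s = z
          dydt = -p * y                 # model equation dy/dt = f(y, p) = -p*y
          dsdt = -p * s - y             # sensitivity equation ds/dt = (df/dy)*s + df/dp
          return [dydt, dsdt]

      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], rtol=1e-8, atol=1e-10)
      t_end, s_end = sol.t[-1], sol.y[1, -1]
      print(f"computed dy/dp at t={t_end}: {s_end:.6f}")
      print(f"exact     dy/dp at t={t_end}: {-t_end * np.exp(-p * t_end):.6f}")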

  6. Support systems design and analysis

    NASA Technical Reports Server (NTRS)

    Ferguson, R. M.

    1985-01-01

    The integration of Kennedy Space Center (KSC) ground support systems with the new launch processing system and new launch vehicle provided KSC with a unique challenge in system design and analysis for the Space Transportation System. Approximately 70 support systems are controlled and monitored by the launch processing system. Typical systems are main propulsion oxygen and hydrogen loading systems, environmental control life support system, hydraulics, etc. An End-to-End concept of documentation and analysis was chosen and applied to these systems. Unique problems were resolved in the areas of software analysis, safing under emergency conditions, sampling rates, and control loop analysis. New methods of performing End-to-End reliability analyses were implemented. The systems design approach selected and the resolution of major problem areas are discussed.

  7. Design optimization of structural parameters for highly sensitive photonic crystal label-free biosensors.

    PubMed

    Ju, Jonghyun; Han, Yun-ah; Kim, Seok-min

    2013-01-01

    The effects of structural design parameters on the performance of nano-replicated photonic crystal (PC) label-free biosensors were examined by the analysis of simulated reflection spectra of PC structures. The grating pitch, duty, scaled grating height and scaled TiO2 layer thickness were selected as the design factors to optimize the PC structure. The peak wavelength value (PWV), full width at half maximum of the peak, figure of merit for the bulk and surface sensitivities, and surface/bulk sensitivity ratio were also selected as the responses to optimize the PC label-free biosensor performance. A parametric study showed that the grating pitch was the dominant factor for PWV, and that it had low interaction effects with other scaled design factors. Therefore, we can isolate the effect of grating pitch using scaled design factors. For the design of a PC label-free biosensor, one should consider that: (1) the PWV can be measured by the reflection peak measurement instruments, (2) the grating pitch and duty can be manufactured using conventional lithography systems, and (3) the optimum design is less sensitive to the grating height and TiO2 layer thickness variations in the fabrication process. In this paper, we suggest a design guide for a highly sensitive PC biosensor in which one selects the grating pitch and duty based on the limitations of the lithography and measurement systems, and then conducts a multi-objective optimization of the grating height and TiO2 layer thickness to maximize performance and minimize the influence of parameter variation. Through multi-objective optimization of a PC structure with a fixed grating height of 550 nm and a duty of 50%, we obtained a surface FOM of 66.18 RIU-1 and an S/B ratio of 34.8%, with a grating height of 117 nm and TiO2 height of 210 nm. PMID:23470487

  8. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and user defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models. Then the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost effective, it can be highly unreliable if the uncertainty associated with the system (material properties, loading, etc.) is not represented in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order reliability analysis and second-order reliability analysis, followed by simulation technique that
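
    A hedged illustration of the MCS step: sample an uncertain strength and load, evaluate a (here trivial) limit state for each sample, and estimate the failure probability. The distributions and numbers are assumptions, not Kevlar 49 test data or the LS-DYNA model.

      import numpy as np

      rng = np.random.default_rng(7)
      n = 100_000
      strength = rng.normal(3.0, 0.25, n)                 # capacity samples (e.g., GPa)
      load = rng.lognormal(np.log(2.2), 0.15, n)          # demand samples

      g = strength - load                                 # limit state: failure when g < 0
      pf = np.mean(g < 0.0)                               # Monte Carlo failure probability
      beta = np.mean(g) / np.std(g)                       # crude reliability index
      print(f"P(failure) ~ {pf:.4f},  beta ~ {beta:.2f}")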

  9. Multivariate Sensitivity Analysis of Time-of-Flight Sensor Fusion

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-09-01

    Obtaining three-dimensional scenery data is an essential task in computer vision, with diverse applications in various areas such as manufacturing and quality control, security and surveillance, or user interaction and entertainment. Dedicated Time-of-Flight sensors can provide detailed scenery depth in real-time and overcome shortcomings of traditional stereo analysis. Nonetheless, they do not provide texture information and have limited spatial resolution. Therefore such sensors are typically combined with high resolution video sensors. Time-of-Flight Sensor Fusion is a highly active field of research. Over recent years, there have been multiple proposals addressing important topics such as texture-guided depth upsampling and depth data denoising. In this article we take a step back and look at the underlying principles of ToF sensor fusion. We derive the ToF sensor fusion error model and evaluate its sensitivity to inaccuracies in camera calibration and depth measurements. In accordance with our findings, we propose certain courses of action to ensure high quality fusion results. With this multivariate sensitivity analysis of the ToF sensor fusion model, we provide an important guideline for designing, calibrating and running sophisticated Time-of-Flight sensor fusion capture systems.

  10. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J.; Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.

  11. Design of a High Sensitivity GNSS receiver for Lunar missions

    NASA Astrophysics Data System (ADS)

    Musumeci, Luciano; Dovis, Fabio; Silva, João S.; da Silva, Pedro F.; Lopes, Hugo D.

    2016-06-01

    This paper presents the design of a satellite navigation receiver architecture tailored for future Lunar exploration missions, demonstrating the feasibility of using Global Navigation Satellite Systems signals integrated with an orbital filter to achieve this goal. It analyzes the performance of a navigation solution based on pseudorange and pseudorange rate measurements, generated through the processing of very weak signals of the Global Positioning System (GPS) L1/L5 and Galileo E1/E5 frequency bands. In critical scenarios (e.g., during manoeuvres), acceleration and attitude measurements from additional sensors are integrated with the GNSS measurements to meet the positioning requirement. A review of environment characteristics (dynamics, geometry and signal power) for the different phases of a reference Lunar mission is provided, focusing on the stringent requirements of the Descent, Approach and Hazard Detection and Avoidance phase. The design of High Sensitivity acquisition and tracking schemes is supported by an extensive simulation test campaign using a software receiver implementation, and navigation results are validated by means of an end-to-end software simulator. Acquisition and tracking of GPS and Galileo signals of the L1/E1 and L5/E5a bands was successfully demonstrated for Carrier-to-Noise density ratios as low as 5-8 dB-Hz. The proposed navigation architecture provides acceptable performance during the considered critical phases, keeping position and velocity errors below 61.4 m and 3.2 m/s, respectively, for 99.7% of the mission time.

  12. Automated sensitivity analysis using the GRESS language

    SciTech Connect

    Pin, F.G.; Oblow, E.M.; Wright, R.Q.

    1986-04-01

    An automated procedure for performing large-scale sensitivity studies based on the use of computer calculus is presented. The procedure is embodied in a FORTRAN precompiler called GRESS, which automatically processes computer models and adds derivative-taking capabilities to the normal calculated results. In this report, the GRESS code is described, tested against analytic and numerical test problems, and then applied to a major geohydrological modeling problem. The SWENT nuclear waste repository modeling code is used as the basis for these studies. Results for all problems are discussed in detail. Conclusions are drawn as to the applicability of GRESS in the problems at hand and for more general large-scale modeling sensitivity studies.
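
    GRESS adds derivative propagation to existing FORTRAN at precompile time; the same idea can be sketched in Python with a tiny forward-mode "dual number" class. This is an illustration of the concept only, not of GRESS itself, and the decay model is invented.

      import math

      class Dual:
          """Value/derivative pair; arithmetic on Duals propagates d/dk automatically."""
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.der + other.der)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.der * other.val + self.val * other.der)
          __rmul__ = __mul__
          def exp(self):
              e = math.exp(self.val)
              return Dual(e, e * self.der)

      def decay(k, t):
          """Toy 'model code': amount remaining after first-order decay, 100 * exp(-k*t)."""
          return 100.0 * (Dual(-t) * k).exp()

      k = Dual(0.05, 1.0)                 # seed the derivative d/dk = 1
      out = decay(k, 10.0)
      print(f"value = {out.val:.3f}, d(value)/dk = {out.der:.3f}")   # exact: -1000*exp(-0.5)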

  13. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  14. Towards More Efficient and Effective Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2014-05-01

    Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus and identify different properties of the model response and may lead to different, sometimes even conflicting conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.

  15. Fuzzy sensitivity analysis for reliability assessment of building structures

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2016-06-01

    The mathematical concept of fuzzy sensitivity analysis, which studies the effects of the fuzziness of input fuzzy numbers on the fuzziness of the output fuzzy number, is described in the article. The output fuzzy number is evaluated using Zadeh's general extension principle. The contribution of stochastic and fuzzy uncertainty in reliability analysis tasks of building structures is discussed. The algorithm of fuzzy sensitivity analysis is an alternative to stochastic sensitivity analysis in tasks in which input and output variables are considered as fuzzy numbers.
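
    In practice, Zadeh's extension principle is usually evaluated with alpha-cuts and interval arithmetic; a minimal sketch for a response that is monotone in two triangular fuzzy inputs (the beam formula and all numbers are invented for illustration):

      import numpy as np

      def tri_cut(a, b, c, alpha):
          """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
          return a + alpha * (b - a), c - alpha * (c - b)

      def deflection(F, E):
          L, I = 2.0, 8.0e-6              # deterministic span (m) and second moment of area (m^4)
          return F * L ** 3 / (3.0 * E * I)

      for alpha in (0.0, 0.5, 1.0):
          F_lo, F_hi = tri_cut(9e3, 10e3, 11e3, alpha)       # fuzzy tip load (N)
          E_lo, E_hi = tri_cut(190e9, 210e9, 230e9, alpha)   # fuzzy Young's modulus (Pa)
          # The response increases with F and decreases with E, so the cut endpoints are:
          lo, hi = deflection(F_lo, E_hi), deflection(F_hi, E_lo)
          print(f"alpha={alpha:.1f}: deflection in [{lo * 1e3:.2f}, {hi * 1e3:.2f}] mm")

    The spread of the output cuts as alpha varies is exactly the fuzziness whose dependence on the input fuzziness the article's sensitivity analysis studies.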

  16. Structural Analysis and Design Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Collier Research and Development Corporation received a one-of-a-kind computer code for designing exotic hypersonic aircraft called ST-SIZE in the first ever Langley Research Center software copyright license agreement. Collier transformed the NASA computer code into a commercial software package called HyperSizer, which integrates with other private-sector Finite Element Modeling and Finite Element Analysis structural analysis programs. ST-SIZE was chiefly conceived as a means to improve and speed the structural design of a future aerospace plane for the Langley Hypersonic Vehicles Office. Incorporating the NASA computer code into HyperSizer has enabled the company to also apply the software to applications other than aerospace, including improved design and construction for offices, marine structures, cargo containers, commercial and military aircraft, rail cars, and a host of everyday consumer products.

  17. Global sensitivity analysis of the Indian monsoon during the Pleistocene

    NASA Astrophysics Data System (ADS)

    Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.

    2015-01-01

    The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation
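
    A hedged sketch of emulator-based global sensitivity analysis in the same three-step spirit, but on a toy three-input function, with a scikit-learn Gaussian process standing in for the Oakley and O'Hagan emulator of HadCM3; first-order indices are estimated by brute-force conditional averaging on the cheap emulator.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(3)

      def simulator(X):                   # stand-in "climate model" with 3 inputs in [0, 1]
          return np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * X[:, 2]

      X_design = rng.random((60, 3))      # step 1: small experiment plan
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3] * 3),
                                    alpha=1e-6, normalize_y=True)
      gp.fit(X_design, simulator(X_design))          # step 2: calibrate the emulator

      n_outer, n_inner = 200, 200                    # step 3: sensitivity diagnostics
      var_total = np.var(gp.predict(rng.random((n_outer * n_inner, 3))))
      for i in range(3):
          cond_means = []
          for v in rng.random(n_outer):              # E[Y | X_i = v], averaged over other inputs
              Xc = rng.random((n_inner, 3))
              Xc[:, i] = v
              cond_means.append(gp.predict(Xc).mean())
          print(f"first-order index S_{i + 1} ~ {np.var(cond_means) / var_total:.2f}")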

  18. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eight-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wider variety of sensitivity analysis techniques are used and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  19. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
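
    The genetic-algorithm machinery can be sketched compactly; the layer-thickness encoding, the fitness trade-off and all numbers below are invented, and the real tool's multi-objective structural, thermal, radiation and meteoroid models are replaced by a toy scalar objective.

      import numpy as np

      rng = np.random.default_rng(11)
      n_pop, n_gen, n_layers = 40, 60, 4

      def fitness(x):                                      # lower is better
          mass = np.sum(x)                                 # proportional to total wall thickness
          protection = 1.0 / (1.0 + np.sum(np.sqrt(x)))    # thicker layers protect more
          return mass + 25.0 * protection

      pop = rng.uniform(0.5, 5.0, size=(n_pop, n_layers))  # layer thicknesses (cm)
      for _ in range(n_gen):
          scores = np.array([fitness(ind) for ind in pop])
          # Tournament selection: the fitter of two random individuals survives.
          idx = rng.integers(0, n_pop, size=(n_pop, 2))
          parents = pop[np.where(scores[idx[:, 0]] < scores[idx[:, 1]], idx[:, 0], idx[:, 1])]
          # Uniform crossover followed by Gaussian mutation.
          mates = parents[rng.permutation(n_pop)]
          mask = rng.random((n_pop, n_layers)) < 0.5
          children = np.where(mask, parents, mates) + rng.normal(0, 0.1, (n_pop, n_layers))
          pop = np.clip(children, 0.5, 5.0)

      best = pop[np.argmin([fitness(ind) for ind in pop])]
      print("best layer thicknesses (cm):", np.round(best, 2), " fitness:", round(float(fitness(best)), 3))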

  20. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero but in a sampling-based framework they regularly take non-zero values. There is, however, little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
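
    A small sketch of the bootstrap-based convergence check: compute a (here, simple regression-based) sensitivity index at several sample sizes and look at its spread across bootstrap resamples. The toy model, the index and the sample sizes are placeholders, not those of the study.

      import numpy as np

      rng = np.random.default_rng(5)

      def model(X):                                   # toy 3-parameter model
          return 4.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

      def indices(X, y):
          """Squared standardized regression coefficients, normalised to sum to one."""
          A = np.column_stack([np.ones(len(y)), X])
          b = np.linalg.lstsq(A, y, rcond=None)[0][1:]
          s = (b * X.std(axis=0) / y.std()) ** 2
          return s / s.sum()

      for n in (100, 500, 2000):
          X = rng.random((n, 3))
          y = model(X)
          boot = []
          for _ in range(200):                        # bootstrap resamples
              i = rng.integers(0, n, n)
              boot.append(indices(X[i], y[i]))
          spread = np.ptp(np.array(boot), axis=0)     # spread of each index across resamples
          print(f"n={n:5d}  indices={np.round(indices(X, y), 2)}  bootstrap spread={np.round(spread, 2)}")

    Shrinking bootstrap spread with growing n is the kind of convergence evidence the criteria above formalize for values, ranking and screening.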

  1. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.

  2. Context-sensitive design and human interaction principles for usable, useful, and adoptable radars

    NASA Astrophysics Data System (ADS)

    McNamara, Laura A.; Klein, Laura M.

    2016-05-01

    The evolution of exquisitely sensitive Synthetic Aperture Radar (SAR) systems is positioning this technology for use in time-critical environments, such as search-and-rescue missions and improvised explosive device (IED) detection. SAR systems should be playing a keystone role in the United States' Intelligence, Surveillance, and Reconnaissance activities. Yet many in the SAR community see missed opportunities for incorporating SAR into existing remote sensing data collection and analysis challenges. Drawing on several years of field research with SAR engineering and operational teams, this paper examines the human and organizational factors that militate against the adoption and use of SAR for tactical ISR and operational support. We suggest that SAR has a design problem, and that context-sensitive, human and organizational design frameworks are required if the community is to realize SAR's tactical potential.

  3. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to handle efficiently stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step of the proposed strategy, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and in the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over

  4. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is the transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known. Decision making tools are therefore needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations. The FAST method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
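
    A bare-bones illustration of the FAST idea: each input is driven along a search curve at its own frequency, and the share of output variance at that frequency and its harmonics estimates the first-order sensitivity. The test function below is a stand-in, not a transmission-loss model, and the frequency choices are arbitrary.

      import numpy as np

      omega = np.array([11, 21, 29])                 # distinct driver frequencies, one per input
      n = 2 ** 12
      s = np.linspace(-np.pi, np.pi, n, endpoint=False)
      X = 0.5 + np.arcsin(np.sin(np.outer(omega, s))) / np.pi   # inputs in [0, 1], shape (3, n)

      y = 5.0 * X[0] + 2.0 * X[1] ** 2 + 0.2 * X[2] + 0.5 * X[0] * X[1]   # toy model

      spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
      total = spectrum[1:].sum()
      for i, w in enumerate(omega):
          harmonics = [w * k for k in range(1, 5) if w * k < n // 2]
          print(f"first-order index S_{i + 1} ~ {spectrum[harmonics].sum() / total:.2f}")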

  5. Estimate design sensitivity to process variation for the 14nm node

    NASA Astrophysics Data System (ADS)

    Landié, Guillaume; Farys, Vincent

    2016-03-01

    Looking for the highest density and best performance, the 14nm technological node saw the development of aggressive designs, with design rules as close as possible to the limit of the process. The edge placement error (EPE) budget is now tighter, and Reticle Enhancement Techniques (RET) must take into account the highest number of parameters to be able to get the best printability and guarantee yield requirements. Overlay is a parameter that must be taken into account earlier during the design library development to avoid design structures presenting a high risk of performance failure. This paper presents a method taking into account the overlay variation and the Resist Image simulation across the process window variation to estimate the design sensitivity to overlay. Areas in the design are classified with specific metrics, from the highest to the lowest overlay sensitivity. This classification can be used to evaluate the robustness of a full chip product to process variability or to work with designers during the design library development. The ultimate goal is to evaluate critical structures in different contexts and report the most critical ones. In this paper, we study layers interacting together, such as Contact/Poly area overlap or Contact/Active distance. ASML-Brion tooling allowed simulating the different resist contours and applying the overlay value to one of the layers. Lithography Manufacturability Check (LMC) detectors are then set to extract the desired values for analysis. Two different approaches have been investigated. The first one is a systematic overlay where we apply the same overlay everywhere on the design. The second one uses a real overlay map which has been measured and applied to the LMC tools. The data are then post-processed and compared to the design target to create a classification and show the error distribution.

  6. Design sensitivity and Hessian matrix of generalized eigenproblems

    NASA Astrophysics Data System (ADS)

    Li, Li; Hu, Yujin; Wang, Xuelin

    2014-02-01

    A generalized eigenproblem is formed and its normalizations are presented and discussed. Then a unified consideration of the computation of the sensitivity and Hessian matrix is studied for both the self-adjoint and non-self-adjoint cases. In the self-adjoint case, a direct algebraic method is presented to determine the eigensolution derivatives simultaneously by solving a linear system with a symmetric coefficient matrix. In the non-self-adjoint case, an algebraic method is presented to determine the eigensolution derivatives directly and simultaneously without having to use the left eigenvectors. In this sense, the method has advantages in computational cost and storage capacity. It is shown that the second order derivatives of eigensolutions can also be obtained by solving a linear system and the computational effort of obtaining Hessian matrix is reduced remarkably since only the recalculation of the right-hand vector of the linear system is required. The presented methods are accurate, compact, numerically stable and easy to implement. Finally, two transcendental eigenproblem examples are used to demonstrate the validity of the presented methods. The first example is considered as an example of the case of non-self-adjoint systems, which can result from feedback control systems. The other example is used to illustrate the case of self-adjoint systems by considering the three bar truss structure which is a viscoelastic composite structure and consists of two aluminum truss components and one viscoelastic truss. In addition, the capacity of predicting the changes of eigenvalues and eigenvectors with respect to the changes of design parameters is studied.
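
    For the self-adjoint case discussed above, the first-order eigenvalue derivative for K(p) x = lambda M x with M-normalized eigenvectors is dlambda/dp = x^T (dK/dp - lambda dM/dp) x; a small numeric check with invented 3-DOF matrices:

      import numpy as np
      from scipy.linalg import eigh

      def K(p):
          return np.array([[2 + p, -1, 0], [-1, 2, -1], [0, -1, 1 + 0.5 * p]], float)

      M = np.diag([2.0, 1.0, 1.0])
      dK = np.array([[1.0, 0, 0], [0, 0, 0], [0, 0, 0.5]])   # dK/dp
      dM = np.zeros((3, 3))                                  # M does not depend on p

      p0 = 0.3
      lam, X = eigh(K(p0), M)          # eigh returns M-orthonormal eigenvectors
      analytic = np.array([X[:, i] @ (dK - lam[i] * dM) @ X[:, i] for i in range(3)])

      h = 1e-6                         # finite-difference check of the formula
      fd = (eigh(K(p0 + h), M, eigvals_only=True) - eigh(K(p0 - h), M, eigvals_only=True)) / (2 * h)
      print("analytic d(lambda)/dp:", np.round(analytic, 6))
      print("finite difference:    ", np.round(fd, 6))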

  7. BEHAVIOR OF SENSITIVITIES IN THE ONE-DIMENSIONAL ADVECTION-DISPERSION EQUATION: IMPLICATIONS FOR PARAMETER ESTIMATION AND SAMPLING DESIGN.

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at points in space and time. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases. (3) The frequency of sampling must be 'in phase' with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters.
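
    The behaviour described above can be explored numerically by differencing an analytical solution; the sketch below uses the common Ogata-Banks solution for a constant-concentration inlet (an assumption here, not necessarily the boundary conditions analysed in the paper), with central finite differences for the sensitivities to velocity and dispersion at a fixed observation point.

      import numpy as np
      from scipy.special import erfc

      def conc(x, t, v, D, C0=1.0):
          """Ogata-Banks solution of the 1-D advection-dispersion equation."""
          a = 2.0 * np.sqrt(D * t)
          return 0.5 * C0 * (erfc((x - v * t) / a) + np.exp(v * x / D) * erfc((x + v * t) / a))

      x, v, D = 10.0, 1.0, 0.1          # observation distance (m), velocity (m/d), dispersion (m^2/d)
      for t in (5.0, 10.0, 15.0):       # before, near and after the solute front arrives
          dv, dD = 1e-6, 1e-8
          s_v = (conc(x, t, v + dv, D) - conc(x, t, v - dv, D)) / (2 * dv)   # dC/dv
          s_D = (conc(x, t, v, D + dD) - conc(x, t, v, D - dD)) / (2 * dD)   # dC/dD
          print(f"t={t:4.1f} d:  dC/dv = {s_v:10.4f}   dC/dD = {s_D:10.4f}")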

  8. Partial Differential Algebraic Sensitivity Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    1995-05-15

    PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed with members of order 0 or 1 in t, and 0, 1 or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.

  9. Automating sensitivity analysis of computer models using computer calculus

    SciTech Connect

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with ''direct'' and ''adjoint'' sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs.

  10. Automated procedure for sensitivity analysis using computer calculus

    SciTech Connect

    Oblow, E.M.

    1983-05-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency consideration and techniques for adjoint sensitivity analysis are emphasized. The new approach was found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.