Science.gov

Sample records for design sensitivity analysis

  1. Shape design sensitivity analysis using domain information

    NASA Technical Reports Server (NTRS)

    Seong, Hwal-Gyeong; Choi, Kyung K.

    1985-01-01

    A numerical method for obtaining accurate shape design sensitivity information for built-up structures is developed and demonstrated through analysis of examples. The basic character of the finite element method, which gives more accurate domain information than boundary information, is utilized for shape design sensitivity improvement. A domain approach for shape design sensitivity analysis of built-up structures is derived using the material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis. Velocity elements and B-spline curves are introduced to alleviate difficulties in generating domain velocity fields. The regularity requirements of the design velocity field are studied.
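
    The adjoint variable method named in this abstract can be illustrated in a few lines. The sketch below uses a hypothetical two-spring model and a tip-displacement functional (not the paper's built-up structures): one extra linear solve with the transposed stiffness matrix yields the design derivative, which is then checked against central differences.

      import numpy as np

      # Adjoint design sensitivity on a hypothetical 2-DOF spring model.
      # Design variable b is the stiffness of spring 1; psi = tip displacement.
      def assemble_K(b):
          k1, k2 = b, 3.0
          return np.array([[k1 + k2, -k2],
                           [-k2,      k2]])

      f = np.array([0.0, 1.0])              # applied load
      b = 2.0
      K = assemble_K(b)
      u = np.linalg.solve(K, f)             # state equation K(b) u = f

      dpsi_du = np.array([0.0, 1.0])        # psi = u[1]
      lam = np.linalg.solve(K.T, dpsi_du)   # adjoint equation K^T lam = dpsi/du

      dK_db = np.array([[1.0, 0.0],         # only the (0,0) entry holds k1 = b
                        [0.0, 0.0]])
      dpsi_db = -lam @ (dK_db @ u)          # dpsi/db = -lam^T (dK/db) u

      h = 1e-6                              # central-difference check
      u_p = np.linalg.solve(assemble_K(b + h), f)
      u_m = np.linalg.solve(assemble_K(b - h), f)
      print(dpsi_db, (u_p[1] - u_m[1]) / (2 * h))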

  2. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  3. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  4. Sensitivity analysis of Stirling engine design parameters

    SciTech Connect

    Naso, V.; Dong, W.; Lucentini, M.; Capata, R.

    1998-07-01

    In the preliminary Stirling engine design process, the values of some design parameters (temperature ratio, swept volume ratio, phase angle and dead volume ratio) have to be assumed; in practice it can be difficult to determine the best values of these parameters for a particular engine design. In this paper, a mathematical model is developed to analyze the sensitivity of the engine's performance to variations in these parameters.
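
    As an illustration of this kind of parameter study, the sketch below computes dimensionless sensitivity coefficients S_i = (x_i/P)(dP/dx_i) by central differences for the four parameters named in the abstract; the performance function is a deliberately simple stand-in, not the paper's Stirling model.

      import math

      # Stand-in performance model P(tau, kappa, phase, chi); the functional
      # form is illustrative only, chosen to involve all four parameters.
      def performance(tau, kappa, phase_deg, chi):
          phase = math.radians(phase_deg)
          return (1.0 - tau) * kappa * math.sin(phase) / (1.0 + chi + kappa)

      nominal = {"tau": 0.5, "kappa": 1.0, "phase_deg": 90.0, "chi": 0.3}
      P0 = performance(**nominal)
      for name, x0 in nominal.items():
          h = 1e-6 * x0                     # relative step for central differences
          dP = (performance(**dict(nominal, **{name: x0 + h}))
                - performance(**dict(nominal, **{name: x0 - h}))) / (2 * h)
          print(f"S_{name} = {dP * x0 / P0:+.3f}")   # dimensionless sensitivity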

  5. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  6. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment to perform design optimization easily for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including database, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  7. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area Array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. Reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°/75°C, -55°/100°C, and -55°/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA design has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in the CCGA assembly under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  8. Design sensitivity analysis of mechanical systems in frequency domain

    NASA Astrophysics Data System (ADS)

    Nalecz, A. G.; Wicher, J.

    1988-02-01

    A procedure for determining the sensitivity functions of mechanical systems in the frequency domain by use of a vector-matrix approach is presented. Two examples, one for a ground vehicle passive front suspension, and the second for a vehicle active suspension, illustrate the practical applications of parametric sensitivity analysis for redesign and modification of mechanical systems. The sensitivity functions depend on the frequency of the system's oscillations. They can be easily related to the system's frequency characteristics which describe the dynamic properties of the system.
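
    A minimal version of such a frequency-domain sensitivity computation is sketched below for an assumed two-degree-of-freedom quarter-car model (parameter values are illustrative): with the dynamic stiffness A(w) = -w^2 M + iwC + K, the response sensitivity to a parameter p follows from dx/dp = -A^(-1) (dA/dp) x.

      import numpy as np

      # Quarter-car frequency response and its sensitivity to the suspension
      # stiffness k_s. All numbers below are illustrative assumptions.
      m_s, m_u = 250.0, 40.0                      # sprung / unsprung mass [kg]
      c_s, k_s, k_t = 1.5e3, 16e3, 160e3          # damper, suspension, tire

      M = np.diag([m_s, m_u])
      C = np.array([[ c_s, -c_s], [-c_s, c_s]])
      K = np.array([[ k_s, -k_s], [-k_s, k_s + k_t]])
      dK = np.array([[ 1.0, -1.0], [-1.0, 1.0]])  # dK/dk_s; M, C independent of k_s
      f = np.array([0.0, k_t])                    # unit road input via the tire

      for hz in (1.0, 1.5, 10.0):
          w = 2 * np.pi * hz
          A = -w**2 * M + 1j * w * C + K
          x = np.linalg.solve(A, f)               # frequency response x(w)
          dx = -np.linalg.solve(A, dK @ x)        # dx/dk_s = -A^-1 (dK/dk_s) x
          dmag = (x[0].conjugate() * dx[0]).real / abs(x[0])   # d|x_s|/dk_s
          print(f"{hz:5.1f} Hz  |x_s|={abs(x[0]):.4f}  d|x_s|/dk_s={dmag:+.2e}")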

  9. Aerodynamic design optimization with sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    An investigation was conducted from October 1, 1990 to May 31, 1994 on the development of methodologies to improve the designs (more specifically, the shape) of aerodynamic surfaces by coupling optimization algorithms (OA) with Computational Fluid Dynamics (CFD) algorithms via sensitivity analyses (SA). The study produced several promising methodologies and their proof-of-concept cases, which have been reported in the open literature.

  10. SENSIT: a cross-section and design sensitivity and uncertainty analysis code [In FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
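
    For a single response, the variance propagation such codes perform reduces to the "sandwich rule" var(R) = S^T C S, where S holds relative sensitivities and C is the relative covariance matrix. The sketch below applies it to made-up three-group numbers, not SENSIT data.

      import numpy as np

      # Sandwich-rule uncertainty propagation with illustrative 3-group data.
      S = np.array([0.8, -0.3, 0.1])          # relative sensitivity profile
      corr = np.array([[1.0, 0.5, 0.2],
                       [0.5, 1.0, 0.5],
                       [0.2, 0.5, 1.0]])      # assumed correlation structure
      std = np.array([0.10, 0.05, 0.15])      # relative 1-sigma uncertainties
      C = corr * np.outer(std, std)           # relative covariance matrix

      var_R = S @ C @ S                       # var(R)/R^2 = S^T C S
      print(f"relative std dev of response: {np.sqrt(var_R):.3%}")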

  11. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  12. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  13. Design sensitivity analysis for nonlinear magnetostatic problems by continuum approach

    NASA Astrophysics Data System (ADS)

    Park, Il-Han; Coulomb, J. L.; Hahn, Song-Yop

    1992-11-01

    Using the material derivative concept of continuum mechanics and an adjoint variable method, the sensitivity formula for a two-dimensional nonlinear magnetostatic system is derived in a line integral form along the shape modification interface. The sensitivity coefficients are numerically evaluated from the solutions of the state and adjoint variables calculated by an existing standard finite element code. To verify this method, the pole shape design problem of a quadrupole is presented.

  14. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability, and two different fan exit guide vanes are redesigned for improved aeroacoustic behavior.

  15. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
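
    The validation step described here, recomputing the optimal control law for discrete parameter variations, can be mimicked in a few lines. The sketch below does this for the LQR regulator gain, one ingredient of an LQG design, on an assumed two-state plant; it illustrates the finite-difference baseline, not the paper's analytical sensitivity equations.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      # Optimal gain for an assumed two-state plant as a function of a
      # physical parameter 'a' (e.g., a stiffness term in the A matrix).
      def lqr_gain(a):
          A = np.array([[0.0, 1.0], [-a, -0.5]])
          B = np.array([[0.0], [1.0]])
          Q, R = np.eye(2), np.array([[1.0]])
          X = solve_continuous_are(A, B, Q, R)   # Riccati solution
          return np.linalg.solve(R, B.T @ X)     # K = R^-1 B^T X

      a0, h = 4.0, 1e-6
      K0 = lqr_gain(a0)
      dK_da = (lqr_gain(a0 + h) - lqr_gain(a0 - h)) / (2 * h)
      print("nominal gain:", K0.ravel(), " dK/da:", dK_da.ravel())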

  16. Design tradeoff studies and sensitivity analysis, appendix B

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. The hybrid is much less sensitive than a conventional vehicle, in terms of the reduction in total fuel consumption and resultant decrease in operating expense, to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road load conditions. It is concluded that modifications to package the propulsion system and battery pack can be easily accommodated within the confines of a modified carryover body such as the Ford LTD.

  17. On 3-D modeling and automatic regridding in shape design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Yao, Tse-Min

    1987-01-01

    The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.

  18. Sensitivity analysis and multidisciplinary optimization for aircraft design: Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and a rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. Review of the salient points of these techniques is given and illustrated by examples from aircraft design as a process that combines the best of human intellect and computer power to manipulate data.

  19. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
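
    The MPP search itself is a small constrained optimization: minimize ||u|| in standard normal space subject to the limit state g(u) = 0, with beta = ||u*|| and Pf approximated by Phi(-beta) (FORM). The sketch below uses an assumed limit-state function; the direction cosines u*/beta are the raw material of FORM sensitivities.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # FORM sketch with an assumed limit state in standard normal space.
      def g(u):
          return 5.0 - u[0] - 0.5 * u[1]**2     # failure surface g(u) = 0

      res = minimize(lambda u: u @ u,           # minimize squared distance
                     x0=np.array([1.0, 1.0]),
                     constraints={"type": "eq", "fun": g})
      u_star = res.x                            # most probable point (MPP)
      beta = np.linalg.norm(u_star)             # reliability index
      alpha = u_star / beta                     # direction cosines (sensitivities)
      print(f"MPP={u_star}, beta={beta:.3f}, Pf~{norm.cdf(-beta):.2e}, alpha={alpha}")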

  1. Parallel-vector design sensitivity analysis in structural dynamics

    NASA Technical Reports Server (NTRS)

    Zhang, Y.; Nguyen, D. T.

    1992-01-01

    This paper presents a parallel-vector algorithm for sensitivity calculations in linear structural dynamics. The proposed alternative formulation works efficiently with the reduced system of dynamic equations, since it eliminates the need for expensive and complicated base-vector derivatives, which are required in the conventional reduced-system formulation. The relationship between the alternative formulation and the conventional reduced-system formulation has been established, and it has been proven analytically that the two approaches are identical when all the mode shapes are included. This paper validates the proposed alternative algorithm through numerical experiments, where only a small number of mode shapes are used. In addition, a modified mode acceleration method is presented, with which not only the displacements but also the velocities and accelerations are shown to be improved.

  2. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
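
    The forward mode of automatic differentiation that ADIFOR implements by source transformation can be imitated at run time with dual numbers. The toy class below is an illustration of the idea, not ADIFOR: a derivative is carried through otherwise unchanged analysis code.

      # A tiny forward-mode AD class; an illustration, not ADIFOR.
      class Dual:
          def __init__(self, val, dot=0.0):
              self.val, self.dot = val, dot      # value and derivative together
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val + o.val, self.dot + o.dot)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.val * o.val,
                          self.val * o.dot + self.dot * o.val)  # product rule
          __rmul__ = __mul__

      def strain_energy(k, u):
          return 0.5 * k * u * u                 # unchanged "analysis code"

      k = Dual(2.0, 1.0)                         # seed dk/dk = 1
      E = strain_energy(k, 0.3)
      print(E.val, E.dot)                        # 0.09 and d(E)/dk = 0.045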

  3. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational method (VM) sensitivity analysis is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  4. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  5. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.

  6. Geometrically nonlinear design sensitivity analysis on parallel-vector high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, Majdi A.; Nguyen, Duc T.

    1993-01-01

    Parallel-vector solution strategies for generation and assembly of element matrices, solution of the resulting system of linear equations, and calculation of the unbalanced loads, displacements, stresses, and design sensitivity analysis (DSA) are all incorporated into the Newton-Raphson (NR) procedure for nonlinear finite element analysis and DSA. Numerical results are included to show the performance of the proposed method for structural analysis and DSA in a parallel-vector computer environment.

  7. Stratospheric Airship Design Sensitivity

    NASA Astrophysics Data System (ADS)

    Smith, Ira Steve; Fortenberry, Michael; Noll, James; Perry, William

    2012-07-01

    The concept of a stratospheric or high altitude powered platform has been around almost as long as stratospheric free balloons. Airships are defined as Lighter-Than-Air (LTA) vehicles with propulsion and steering systems. Over the past five years there has been an increased interest in airships at all altitudes by the U.S. Department of Defense as well as commercial enterprises. One of these interests is in the area of stratospheric airships. Whereas the DoD is primarily interested in platforms that look down, stratospheric airships also offer a platform for science applications, both downward and outward looking. Designing airships to operate in the stratosphere is very challenging due to the extreme high altitude environment; it is significantly different from low altitude airship design, as seen in the familiar advertising or tourism airships and blimps. The stratospheric airship design is very dependent on the specific application and the particular requirements levied on the vehicle, within mass and power limits. The design is a complex iterative process and is sensitive to many factors. In an effort to identify the key factors that have the greatest impacts on the design, a parametric analysis of a simplified airship design has been performed. The results of these studies will be presented.

  8. Generalized Timoshenko modelling of composite beam structures: sensitivity analysis and optimal design

    NASA Astrophysics Data System (ADS)

    Augusta Neto, Maria; Yu, Wenbin; Pereira Leal, Rogerio

    2008-10-01

    This article describes a new approach to designing the cross-section layer orientations of composite laminated beam structures. The beams are modelled with realistic cross-sectional geometry and material properties instead of a simplified model. The VABS (variational asymptotic beam section analysis) methodology is used to compute the cross-sectional model for a generalized Timoshenko model, which is embedded in the finite element solver FEAP. Optimal design is performed with respect to the layers' orientation. The design sensitivity analysis is analytically formulated and implemented. The direct differentiation method is used to evaluate the response sensitivities with respect to the design variables. Thus, the design sensitivities of the Timoshenko stiffness computed by the VABS methodology are embedded into the modified VABS program and linked to the beam finite element solver. The modified method of feasible directions and sequential quadratic programming algorithms are used to seek the optimal continuous solution of a set of numerical examples. The buckling load associated with the twist-bend instability of cantilever composite beams, which may have several cross-section geometries, is improved in the optimization procedure.

  9. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  10. Reduced order techniques for sensitivity analysis and design optimization of aerospace systems

    NASA Astrophysics Data System (ADS)

    Parrish, Jefferson Carter

    This work proposes a new method for using reduced order models in lieu of high fidelity analysis during the sensitivity analysis step of gradient based design optimization. The method offers a reduction in the computational cost of finite difference based sensitivity analysis in that context. The method relies on interpolating reduced order models which are based on proper orthogonal decomposition. The interpolation process is performed using radial basis functions and Grassmann manifold projection. It does not require additional high fidelity analyses to interpolate a reduced order model for new points in the design space. The interpolated models are used specifically for points in the finite difference stencil during sensitivity analysis. The proposed method is applied to an airfoil shape optimization (ASO) problem and a transport wing optimization (TWO) problem. The errors associated with the reduced order models themselves as well as the gradients calculated from them are evaluated. The effects of the method on the overall optimization path, computation times, and function counts are also examined. The ASO results indicate that the proposed scheme is a viable method for reducing the computational cost of these optimizations. They also indicate that the adaptive step is an effective method of improving interpolated gradient accuracy. The TWO results indicate that the interpolation accuracy can have a strong impact on optimization search direction.
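
    The proper orthogonal decomposition at the heart of this approach is a truncated SVD of a snapshot matrix. The sketch below builds a POD basis from synthetic snapshots and measures the projection error; the Grassmann-manifold/RBF interpolation step of the dissertation is beyond this illustration.

      import numpy as np

      # POD basis from synthetic snapshots (stand-ins for flow solutions).
      rng = np.random.default_rng(0)
      n_dof, n_snap = 2000, 30
      modes_true = rng.standard_normal((n_dof, 3))
      snapshots = modes_true @ rng.standard_normal((3, n_snap))   # rank-3 data
      snapshots += 1e-3 * rng.standard_normal((n_dof, n_snap))    # small noise

      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 0.999)) + 1   # modes for 99.9% energy
      Phi = U[:, :k]                                # reduced-order basis

      # a full-order state is approximated by its projection onto the basis
      x = snapshots[:, 0]
      x_rom = Phi @ (Phi.T @ x)
      print(k, np.linalg.norm(x - x_rom) / np.linalg.norm(x))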

  11. A sensitivity analysis of hazardous waste disposal site climatic and soil design parameters using HELP3

    SciTech Connect

    Adelman, D.D.; Stansbury, J.

    1997-12-31

    The Resource Conservation and Recovery Act (RCRA) Subtitle C, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and subsequent amendments have formed a comprehensive framework to deal with hazardous wastes on the national level. Key to this waste management is guidance on the design (e.g., cover and bottom leachate control systems) of hazardous waste landfills. The objective of this research was to investigate the sensitivity of leachate volume at hazardous waste disposal sites to climatic, soil cover, and vegetative cover (Leaf Area Index) conditions. The computer model HELP3, which has the capability to simulate the double bottom liner systems called for in hazardous waste disposal sites, was used in the analysis. HELP3 was used to model 54 combinations of climatic conditions, disposal site soil surface curve numbers, and leaf area index values to investigate how sensitive disposal site leachate volume was to these three variables. Results showed that leachate volume from the bottom double liner system was not sensitive to these parameters. However, the cover liner system leachate volume was quite sensitive to climatic conditions and less sensitive to Leaf Area Index and curve number values. Since humid locations had considerably more cover liner system leachate volume than arid locations, different design standards may be appropriate for humid conditions than for arid conditions.

  12. Sensitivity Analysis of the Thermal Response of 9975 Packaging Using Factorial Design Methods

    SciTech Connect

    Gupta, Narendra K.

    2005-10-31

    A method is presented for using the statistical design of experiments (2^k factorial design) technique in the sensitivity analysis of the thermal response (temperature) of the 9975 radioactive material packaging, where multiple thermal properties of the impact-absorbing and fire-insulating material Celotex and certain boundary conditions are subject to uncertainty. The 2^k factorial design method is very efficient in the use of available data and is capable of analyzing the impact of the main variables (Factors) and their interactions on the component design. The 9975 design is based on detailed finite element (FE) analyses and extensive proof testing to meet the design requirements given in 10CFR71 [1]. However, the FE analyses use Celotex thermal properties that are based on published data and limited experiments. Celotex is an orthotropic material that is used in the home building industry. Its thermal properties are prone to variation due to manufacturing and fabrication processes, and due to long environmental exposure. This paper evaluates the sensitivity of the thermal response of the 9975 packaging under Normal Conditions of Transport (NCT) to variations in the thermal conductivity of the Celotex, the convection coefficient at the drum surface, and the drum emissivity (herein called Factors). Application of this methodology will ascertain the robustness of the 9975 design and can lead to a more specific and useful understanding of the effects of various Factors on 9975 performance.
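
    A 2^k factorial sensitivity study is compact enough to sketch directly: run every +/-1 combination of the coded factors and estimate main effects and interactions by contrasts. The response function below is a synthetic stand-in for the finite element thermal model, with factor names echoing the abstract.

      import itertools
      import numpy as np

      # Full 2^3 factorial in coded (+/-1) units; synthetic "peak temperature".
      def response(x):
          kc, hc, eps = x               # conductivity, convection, emissivity
          return 100 + 8*kc - 5*hc - 2*eps + 1.5*kc*hc

      runs = np.array(list(itertools.product([-1, 1], repeat=3)))
      y = np.array([response(x) for x in runs])

      # effect = (mean at +1) - (mean at -1) = 2 * mean(y * column)
      names = ["k", "h", "eps"]
      for j, name in enumerate(names):
          print(f"main effect {name}: {np.mean(y * runs[:, j]) * 2:+.2f}")
      for i, j in itertools.combinations(range(3), 2):
          eff = np.mean(y * runs[:, i] * runs[:, j]) * 2
          print(f"interaction {names[i]}x{names[j]}: {eff:+.2f}")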

  13. Design of a smart magnetic sensor by sensitivity based covariance analysis

    NASA Astrophysics Data System (ADS)

    Krishna Kumar, P. T.

    2001-08-01

    We use the technique of sensitivity-based covariance analysis to design a smart magnetic sensor for depth profile studies where an NMR flux meter is used as the sensor in a Van de Graaff accelerator (VGA). The minimum detection limit of any sensor tends to the systematic uncertainty, and, using this phenomenology, we estimated the upper and lower bounds for the correlated systematic uncertainties for the proton energy accelerated by the VGA by the technique of determinant inequalities. Knowledge of the bounds would help in the design of a smart magnetic sensor with reduced correlated systematic uncertainty.

  14. Design and analysis of a PZT-based micromachined acoustic sensor with increased sensitivity.

    PubMed

    Wang, Zheyao; Wang, Chao; Liu, Litian

    2005-10-01

    The ever-growing applications of lead zirconate titanate (PZT) thin films to sensing devices have given birth to a variety of microsensors. This paper presents the design and theoretical analysis of a PZT-based micro acoustic sensor that uses interdigital electrodes (IDE) and in-plane polarization (IPP) instead of the commonly used parallel-plate electrodes (PPE) and through-thickness polarization (TTP). The sensitivity of IDE-based sensors is increased due to the small capacitance of the interdigital capacitor and the large and adjustable electrode spacing. In addition, the sensitivity takes advantage of the large piezoelectric coefficient d33 rather than d31, which is used in PPE-based sensors, resulting in a further improvement in the sensitivity. Laminated beam theory is used to analyze the laminated piezoelectric sensors, and the capacitance of the IDE is deduced by using conformal mapping and partial capacitance techniques. Analytical formulations for predicting the sensitivity of both PPE- and IDE-based microsensors are presented, and factors that influence sensitivity are discussed in detail. Results show that the IDE and IPP can improve the sensitivity significantly.

  15. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  16. The sensitivity analysis of the economic and economic statistical designs of the synthetic X-bar chart

    NASA Astrophysics Data System (ADS)

    Yeong, Wai Chung; Khoo, Michael Boon Chong; Chong, Jia Kit; Lim, Shun Jinn; Teoh, Wei Lin

    2014-12-01

    The economic and economic statistical designs allow the practitioner to implement the control chart in an economically optimal manner. For the economic design, the optimal chart parameters are obtained to minimize the cost, while for the economic statistical design, additional constraints in terms of the average run lengths are imposed. However, these designs involve the estimation of quite a number of input parameters, some of which are difficult to estimate accurately. Thus, a sensitivity analysis is required in order to identify which parameters need to be estimated accurately and which require just a rough estimation. This study focuses on the significance of 11 input parameters toward the optimal cost and average run lengths of the synthetic X-bar chart. The significant input parameters are identified through a two-level fractional factorial design, which allows interaction effects to be identified. An analysis of variance is performed to obtain the P-values by using the Minitab software. The input parameters and interactions that are significant for the optimal cost and average run lengths are identified based on a 5% significance level. The results of this study show that the input parameters which are significant for the economic design may not be significant for the economic statistical design, and vice versa. This study also shows that there are quite a number of significant interaction effects, which may mask the significance of the main effects.

  17. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

    A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional geometry; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.
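
    The spring-analogy grid adaptation mentioned in this abstract can be illustrated on a one-dimensional chain of unit springs (real codes apply the same idea across the full unstructured mesh): move a boundary node and solve the spring equilibrium for the interior nodes.

      import numpy as np

      # Spring-analogy mesh deformation on a 1-D chain of unit springs.
      n = 6                                   # nodes 0..5; 0 and 5 are boundary
      K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # chain stiffness

      d_bound = {0: 0.0, 5: 0.25}             # prescribed boundary displacements
      free = [i for i in range(n) if i not in d_bound]

      # partition K d = 0 into free/fixed blocks and solve for the free nodes
      Kff = K[np.ix_(free, free)]
      rhs = -sum(K[np.ix_(free, [i])] * d for i, d in d_bound.items())
      d_free = np.linalg.solve(Kff, rhs)
      print(d_free.ravel())                   # interior nodes move smoothly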

  18. Design-oriented thermoelastic analysis, sensitivities, and approximations for shape optimization of aerospace vehicles

    NASA Astrophysics Data System (ADS)

    Bhatia, Manav

    Aerospace structures operate under extreme thermal environments. The hot external aerothermal environment of high-Mach-number flight leads to high structural temperatures. At the same time, cold internal cryogenic fuel tanks and thermal management concepts like the Thermal Protection System (TPS) and active cooling result in a high temperature gradient through the structure. Multidisciplinary Design Optimization (MDO) of such structures requires a design-oriented approach to this problem. The broad goal of this research effort is to advance the existing state of the art towards MDO of large-scale aerospace structures. The components required for this work are the sensitivity analysis formulation encompassing the scope of the physical phenomena being addressed, a set of efficient approximations to cut down the required CPU cost, and a general purpose design-oriented numerical analysis tool capable of handling problems of this scope. In this work finite element discretization has been used to solve the conduction partial differential equations, and the Poljak method has been used to discretize the integral equations for internal cavity radiation. A methodology has been established to couple the conduction finite element analysis to the internal radiation analysis. This formulation is then extended for sensitivity analysis of heat transfer and coupled thermal-structural problems. The most CPU-intensive operations in the overall analysis have been identified, and approximation methods have been proposed to reduce the associated CPU cost. Results establish the effectiveness of these approximation methods, which lead to very high savings in CPU cost without any deterioration in the results. The results presented in this dissertation include two cases: a hexahedral cavity with internal and external radiation with conducting walls, and a wing box which is geometrically similar to the orbiter wing.

  19. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    SciTech Connect

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.

  20. Sensitivity analysis of a dry-processed CANDU fuel pellet's design parameters

    SciTech Connect

    Choi, Hangbok; Ryu, Ho Jin

    2007-07-01

    Sensitivity analysis was carried out in order to investigate the effect of a fuel pellet's design parameters on the performance of a dry-processed Canada deuterium uranium (CANDU) fuel and to suggest optimum design modifications. Under normal operating conditions, a dry-processed fuel has a higher internal pressure and plastic strain due to a higher fuel centerline temperature when compared with a standard natural uranium CANDU fuel. Under the condition that the fuel bundle dimensions do not change, sensitivity calculations were performed on the fuel's design parameters such as the axial gap, dish depth, gap clearance and plenum volume. The results showed that the internal pressure and plastic strain of the cladding were most effectively reduced if a fuel element's plenum volume was increased. More specifically, the internal pressure and plastic strain of the dry-processed fuel satisfied the design limits of a standard CANDU fuel when the plenum volume was increased by one half a pellet, 0.5 mm^3/K.

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.

  2. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    PubMed

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optics components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and diffracted light at each knife edge is superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters, wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography, on sensor outputs was investigated to obtain a novel analytical method to predict linearity and sensitivity. From the model, all parameters except for the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used for the experiment. By utilizing a shorter wavelength, a smaller sensor distance, and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated, and the results showed good agreement with the theoretically estimated results. This sensor is expected to be easily implemented into nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for the design and performance estimation of knife edge-based sensors. PMID:25273778

  3. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
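
    The incremental (delta) form is easy to state concretely: rather than solving A x = b outright, one repeatedly solves P dx = b - A x with a cheap approximation P of A and updates x by the correction dx. The sketch below uses a diagonal P on an assumed diagonally dominant system; flow codes use their approximate-factorization operator instead.

      import numpy as np

      # Correction-form iteration on an assumed diagonally dominant system.
      rng = np.random.default_rng(1)
      n = 50
      A = 4.0 * np.eye(n) + 0.02 * rng.standard_normal((n, n))
      b = rng.standard_normal(n)

      P_inv = 1.0 / np.diag(A)          # cheap approximate operator (diagonal)
      x = np.zeros(n)
      for it in range(200):
          r = b - A @ x                 # residual drives the correction
          if np.linalg.norm(r) < 1e-10:
              break
          x += P_inv * r                # x <- x + dx, with P dx = r
      print(it, np.linalg.norm(b - A @ x))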

  4. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
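    A second-order response surface of the kind used in the global strategy can be sketched as an ordinary least-squares fit over a full quadratic basis; the response function, design points, and noise level below are placeholders, not the axial buckling model of the report.

      import numpy as np
      from itertools import combinations_with_replacement

      def true_response(X):
          # Stand-in for the expensive structural analysis.
          return 1.0 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 0] * X[:, 1] - 0.8 * X[:, 1] ** 2

      def quadratic_basis(X):
          # Columns: 1, x_i, and all products x_i*x_j (full second-order model).
          n, d = X.shape
          cols = [np.ones(n)] + [X[:, i] for i in range(d)]
          cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
          return np.column_stack(cols)

      rng = np.random.default_rng(1)
      X = rng.uniform(-1.0, 1.0, size=(30, 2))     # design points over the feasible space
      y = true_response(X) + rng.normal(scale=0.01, size=30)

      beta, *_ = np.linalg.lstsq(quadratic_basis(X), y, rcond=None)
      X_new = rng.uniform(-1.0, 1.0, size=(5, 2))
      print(quadratic_basis(X_new) @ beta - true_response(X_new))  # small RS errors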

  5. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings focus primarily on sensitivity analysis of structural response. However, the first session, entitled General and Multidisciplinary Sensitivity, covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  6. Case Definition and Design Sensitivity

    PubMed Central

    Small, Dylan S.; Cheng, Jing; Halloran, M. Elizabeth; Rosenbaum, Paul R.

    2013-01-01

    In a case-referent study, cases of disease are compared to non-cases with respect to their antecedent exposure to a treatment in an effort to determine whether exposure causes some cases of the disease. Because exposure is not randomly assigned in the population, as it would be if the population were a vast randomized trial, exposed and unexposed subjects may differ prior to exposure with respect to covariates that may or may not have been measured. After controlling for measured pre-exposure differences, for instance by matching, a sensitivity analysis asks about the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a study that presumed matching for observed covariates removes all bias. The definition of a case of disease affects sensitivity to unmeasured bias. We explore this issue using: (i) an asymptotic tool, the design sensitivity, (ii) a simulation for finite samples, and (iii) an example. Under favorable circumstances, a narrower case definition can yield an increase in the design sensitivity, and hence an increase in the power of a sensitivity analysis. Also, we discuss an adaptive method that seeks to discover the best case definition from the data at hand while controlling for multiple testing. An implementation in R is available as SensitivityCaseControl. PMID:24482549
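    The flavor of such an analysis can be sketched for matched case-referent pairs with binary exposure: for a bias parameter Γ, the probability that the exposed member of an exposure-discordant pair is the case is at most Γ/(1+Γ), which gives a worst-case binomial p-value. This is a generic Rosenbaum-style bound with invented counts, not the SensitivityCaseControl implementation.

      from scipy.stats import binom

      def worst_case_pvalue(n_discordant, n_case_exposed, gamma):
          # Upper-bound p-value under unmeasured bias of magnitude gamma.
          p_plus = gamma / (1.0 + gamma)
          return binom.sf(n_case_exposed - 1, n_discordant, p_plus)  # P(X >= observed)

      # Illustrative: 100 discordant pairs, the case was the exposed member in 70.
      for gamma in (1.0, 1.5, 2.0, 3.0):
          print(gamma, worst_case_pvalue(100, 70, gamma))

    The smallest Γ at which the bound fails to reject plays, in finite samples, the role that the design sensitivity plays asymptotically.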

  7. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
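    The economy of the adjoint approach is easy to see in a discrete sketch: for residual equations R(u, p) = 0 and objective J(u), a single adjoint solve yields the gradient with respect to all design variables at once. The linear "state" operator, dimensions, and values below are invented stand-ins, not the edge plasma model.

      import numpy as np

      rng = np.random.default_rng(2)
      n, m = 40, 5                                 # state size, number of design variables
      K = 3.0 * np.eye(n) + rng.normal(scale=0.05, size=(n, n))
      B = rng.normal(size=(n, m))

      def solve_state(p):
          return np.linalg.solve(K, B @ p)         # R(u, p) = K u - B p = 0

      def J(u):
          return 0.5 * float(u @ u)

      p = rng.normal(size=m)
      u = solve_state(p)

      lam = np.linalg.solve(K.T, u)                # adjoint solve: K^T lam = dJ/du
      dJdp = B.T @ lam                             # dJ/dp = -lam^T (dR/dp), with dR/dp = -B

      h, i = 1e-6, 3                               # finite-difference spot check
      e = np.zeros(m); e[i] = h
      fd = (J(solve_state(p + e)) - J(solve_state(p - e))) / (2 * h)
      print(dJdp[i], fd)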

  8. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.
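    A minimal sketch of variance-based GSA (first-order Sobol indices via the Saltelli estimator) on a toy stand-in for the hydrolysis model is shown below; the parameter names echo the abstract, but the model and values are invented.

      import numpy as np

      def model(X):
          half_life, exo_activity, composition = X.T    # hypothetical inputs in [0, 1]
          return 2.0 * half_life + exo_activity + 0.1 * composition ** 2

      rng = np.random.default_rng(3)
      N, d = 100_000, 3
      A = rng.uniform(size=(N, d))
      B = rng.uniform(size=(N, d))
      fA, fB = model(A), model(B)
      total_var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                           # swap in column i from B
          S_i = np.mean(fB * (model(ABi) - fA)) / total_var  # Saltelli (2010) estimator
          print(i, round(S_i, 3))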

  9. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  10. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  11. Sensitivity analysis of air gap motion with respect to wind load and mooring system for semi-submersible platform design

    NASA Astrophysics Data System (ADS)

    Huo, Fa-li; Nie, Yan; Yang, De-qing; Dong, Gang; Cui, Jin

    2016-07-01

    The design of a semi-submersible platform is based mainly on analysis of the extreme responses due to the forces experienced by the components during the platform's lifetime. External loads can induce extreme air gap responses and potential deck impact on a semi-submersible platform. It is important to predict the air gap response of platforms accurately in order to check the strength of the local structures that withstand wave slamming due to a negative air gap. The wind load cannot easily be simulated in a towing-tank model test, whereas it can be simulated accurately in a wind tunnel test. Furthermore, full-scale simulation of the mooring system in a model test remains a tough task, especially with regard to the stiffness of the mooring system. Owing to these problems, model test results alone are not accurate enough for air gap evaluation. The aim of this paper is to present sensitivity analysis results of the air gap motion with respect to the mooring system and wind load for the design of a semi-submersible platform. Though the model test results are not suitable for direct evaluation of the air gap, they can serve as a good basis for tuning the radiation damping and viscous drag in the numerical simulation. In the presented design example, a numerical model is tuned and validated in ANSYS AQWA against the model test results with a simple four-line symmetrical horizontal soft mooring system. Using the tuned numerical model, sensitivity studies of the air gap motion with respect to the mooring system and wind load are performed in the time domain. Three mooring systems and five simulation cases for the presented platform are simulated based on the results of wind tunnel tests and sea-keeping tests. The sensitivity analysis results are valuable for floating platform design.

  12. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  13. Sensitivity Test Analysis

    1992-02-20

    SENSIT, MUSIG, and COMSEN are a set of three related programs for sensitivity test analysis. SENSIT conducts sensitivity tests. These tests are also known as threshold tests, LD50 tests, gap tests, drop weight tests, etc. SENSIT interactively instructs the experimenter on the proper level at which to stress the next specimen, based on the results of previous responses. MUSIG analyzes the results of a sensitivity test to determine the mean and standard deviation of the underlying population by computing maximum likelihood estimates of these parameters. MUSIG also computes likelihood ratio joint confidence regions and individual confidence intervals. COMSEN compares the results of two sensitivity tests to see if the underlying populations are significantly different. COMSEN provides an unbiased method of distinguishing between statistical variation of the estimates of the parameters of the population and true population difference.
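    What a MUSIG-style analysis computes can be sketched as a probit maximum-likelihood fit of the mean and standard deviation of a latent normal threshold population to go/no-go data; the stress levels and responses below are invented.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      levels   = np.array([1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4])  # stress levels
      response = np.array([0,   0,   1,   0,   1,   1,   1,   1])    # 1 = responded

      def neg_log_likelihood(theta):
          mu, log_sigma = theta
          p = norm.cdf((levels - mu) / np.exp(log_sigma))  # P(respond | level)
          p = np.clip(p, 1e-12, 1.0 - 1e-12)
          return -np.sum(response * np.log(p) + (1 - response) * np.log(1.0 - p))

      fit = minimize(neg_log_likelihood, x0=[1.5, np.log(0.3)], method="Nelder-Mead")
      mu_hat, sigma_hat = fit.x[0], float(np.exp(fit.x[1]))
      print(mu_hat, sigma_hat)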

  14. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2001-01-01

    The results of a LISA telescope sensitivity analysis will be presented. The emphasis will be on the outgoing beam of the Dall-Kirkham telescope and its far-field phase patterns. The computed sensitivity analysis will include motions of the secondary with respect to the primary, changes in shape of the primary and secondary, the effect of aberrations of the input laser beam, and the effect of the telescope's thin-film coatings on polarization. An end-to-end optical model will also be discussed.

  15. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the Net Present Value calculated for a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced large NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is weighted as heavily as overall cost (NPV20) in the economic evaluation, the hybrid technology moves up next to biotrickling filtration as the most preferred technology.
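    The NPV20 metric itself is a plain discounted sum of cash flows; below is a sketch with invented capital costs, operating costs, and discount rate (for a cost-only evaluation, the least negative NPV20 identifies the most cost-efficient alternative):

      def npv20(capex, annual_opex, discount_rate=0.08, years=20):
          # Net present value of a technology that only incurs costs.
          return -capex - sum(annual_opex / (1.0 + discount_rate) ** t
                              for t in range(1, years + 1))

      # Hypothetical: a biological technique (low opex) vs a chemical one (high opex).
      print(npv20(capex=120_000, annual_opex=15_000))
      print(npv20(capex=80_000, annual_opex=40_000))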

  16. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO₂ with ²³⁵U enrichments ≥5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  17. Fusion-neutron-yield, activation measurements at the Z accelerator: Design, analysis, and sensitivity

    SciTech Connect

    Hahn, K. D.; Ruiz, C. L.; Fehl, D. L.; Chandler, G. A.; Knapp, P. F.; Smelser, R. M.; Torres, J. A.; Cooper, G. W.; Nelson, A. J.; Leeper, R. J.

    2014-04-15

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  18. [Structural sensitivity analysis].

    PubMed

    Carrera-Hueso, F J; Ramón-Barrios, A

    2011-05-01

    The aim of this study was to perform a structural sensitivity analysis of a decision model and to identify its advantages and limitations. A previously published model of dinoprostone was modified, taking two scenarios into account: eliminating postpartum hemorrhages, and including both hemorrhages and uterine hyperstimulation among the adverse effects. The results of the structural sensitivity analysis show the robustness of the underlying model and confirm the initial results: the intrauterine device is more cost-effective than intracervical dinoprostone gel. Structural sensitivity analyses should be congruent with the situation studied and clinically validated. Although uncertainty may be only slightly reduced, these analyses provide information and add greater validity and reliability to the model.

  19. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulak, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover, which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smoothed Particle Hydrodynamics (SPH) meshfree methods. The nominal benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early stage wheel design process.

  20. Design tradeoff studies and sensitivity analysis, appendices B1 - B4. [hybrid electric vehicles

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Documentation is presented for a program which separately computes fuel and energy consumption for the two modes of operation of a hybrid electric vehicle. The distribution of daily travel is specified as input data, as are the weights given to the component driving cycles in each of the composite cycles. The possibility of weight reduction through the substitution of various materials is considered, as well as the market potential for hybrid vehicles. Data relating to battery compartment weight distribution and vehicle handling analysis are tabulated.

  1. Optimizing the design and analysis of cryogenic semiconductor dark matter detectors for maximum sensitivity

    SciTech Connect

    Pyle, Matt Christopher

    2012-01-01

    In this thesis, we illustrate how the complex E-field geometry produced by interdigitated electrodes at alternating voltage biases naturally encodes 3D fiducial volume information into the charge and phonon signals and thus is a natural geometry for our next generation dark matter detectors. Secondly, we study in depth the physics of import to our devices, including transition edge sensor dynamics, quasiparticle dynamics in our Al collection fins, and phonon physics in the crystal itself, so that we can both understand the performance of our previous CDMS II device and optimize the design of our future devices. Of interest to the broader physics community is the derivation of the ideal athermal phonon detector resolution and its T_c^3 scaling behavior, which suggests that the athermal phonon detector technology developed by CDMS could also be used to discover coherent neutrino scattering and to search for non-standard neutrino interactions and sterile neutrinos. These proposed resolution-optimized devices can also be used in searches for exotic MeV-GeV dark matter as well as in novel background-free searches for 8 GeV light WIMPs.

  2. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds do not correspond exactly to actual damages. To reduce basis risk without additional data-collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature conditions, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship over the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results of a global sensitivity analysis.
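    A common contract structure for such indexes is a linear trigger/exit payoff; the thresholds, limit, and rainfall values below are invented illustrations, not the calibrated RIF contract from the abstract.

      def payout(index_value, trigger, exit_value, limit):
          # Zero payout at or above the trigger, full limit at or below the exit,
          # linear in between.
          if index_value >= trigger:
              return 0.0
          if index_value <= exit_value:
              return limit
          return limit * (trigger - index_value) / (trigger - exit_value)

      # Drought cover on a seasonal rainfall index (mm): payments start below 300 mm.
      for rain in (350, 280, 200, 90):
          print(rain, payout(rain, trigger=300, exit_value=100, limit=10_000))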

  3. RESRAD parameter sensitivity analysis

    SciTech Connect

    Cheng, J.J.; Yu, C.; Zielen, A.J.

    1991-08-01

    Three methods were used to perform a sensitivity analysis of RESRAD code input parameters -- enhancement of RESRAD by the Gradient Enhanced Software System (GRESS) package, direct parameter perturbation, and graphic comparison. Evaluation of these methods indicated that (1) the enhancement of RESRAD by GRESS has limitations and should be used cautiously, (2) direct parameter perturbation is tedious to implement, and (3) the graphics capability of RESRAD 4.0 is the most direct and convenient method for performing sensitivity analyses. This report describes procedures for implementing these methods and presents a comparison of results. 3 refs., 9 figs., 8 tabs.

  4. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small dimensional changes between the naval canister and the inner vessel; in these dimensions, the Naval Long waste package and Naval Short waste package are similar. Therefore, only the Naval Long waste package is used in this calculation, based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  5. CAD based design sensitivity analysis and shape optimization of scaffolds for bio-root regeneration in swine.

    PubMed

    Luo, Xiangyou; Yang, Bo; Sheng, Lei; Chen, Jinlong; Li, Hui; Xie, Li; Chen, Gang; Yu, Mei; Guo, Weihua; Tian, Weidong

    2015-07-01

    Tooth root supports the dental crown and bears occlusal force, and proper root shape and size allow that force to be evenly delivered and dispersed into the jawbone. Yet it remains unclear what shape and size of a biological tooth root (bio-root), which are mostly determined by the scaffold's geometric design, are suitable for distributing stress and performing mastication. This study therefore hypothesized that a scaffold fabricated with the proper shape and size is better for regenerating a tooth root with appropriate biomechanical features. In this study, we optimized the shape and size of scaffolds for bio-root regeneration using computer aided design (CAD) modeling and finite element analysis (FEA). Static structural analysis showed that the total deformation (TD) and equivalent von Mises stress (EQV) of the restored tooth model concentrated mainly on the scaffold and the post, in accordance with the condition in a natural post-restored tooth. Design sensitivity analysis showed that increasing the height and upper diameter of the scaffold can greatly reduce the TD and EQV of the model, while increasing the bottom diameter of the scaffold can, to some extent, reduce the EQV in the post. However, increasing the post height had little influence on the whole model, only slightly increasing the native EQV stress in the post. Through response surface based optimization, we successfully screened out the optimal shape of the scaffold for use in tissue engineering of the tooth root. The optimal scaffold adopted a slightly tapered shape with an upper diameter of 4.9 mm and a bottom diameter of 3.4 mm; the length of the optimized scaffold was 9.4 mm. The analysis also suggested that a height of about 9 mm for a metal post with a diameter of 1.4 mm is suitable for crown restoration in bio-root regeneration. In order to validate the physiological function of the shape-optimized scaffold in vivo, we transplanted the shape-optimized treated dentin matrix (TDM) scaffold, seeded with dental stem cells, into alveolar

  6. CAD based design sensitivity analysis and shape optimization of scaffolds for bio-root regeneration in swine.

    PubMed

    Luo, Xiangyou; Yang, Bo; Sheng, Lei; Chen, Jinlong; Li, Hui; Xie, Li; Chen, Gang; Yu, Mei; Guo, Weihua; Tian, Weidong

    2015-07-01

    Tooth root supports the dental crown and bears occlusal force, and proper root shape and size allow that force to be evenly delivered and dispersed into the jawbone. Yet it remains unclear what shape and size of a biological tooth root (bio-root), which are mostly determined by the scaffold's geometric design, are suitable for distributing stress and performing mastication. This study therefore hypothesized that a scaffold fabricated with the proper shape and size is better for regenerating a tooth root with appropriate biomechanical features. In this study, we optimized the shape and size of scaffolds for bio-root regeneration using computer aided design (CAD) modeling and finite element analysis (FEA). Static structural analysis showed that the total deformation (TD) and equivalent von Mises stress (EQV) of the restored tooth model concentrated mainly on the scaffold and the post, in accordance with the condition in a natural post-restored tooth. Design sensitivity analysis showed that increasing the height and upper diameter of the scaffold can greatly reduce the TD and EQV of the model, while increasing the bottom diameter of the scaffold can, to some extent, reduce the EQV in the post. However, increasing the post height had little influence on the whole model, only slightly increasing the native EQV stress in the post. Through response surface based optimization, we successfully screened out the optimal shape of the scaffold for use in tissue engineering of the tooth root. The optimal scaffold adopted a slightly tapered shape with an upper diameter of 4.9 mm and a bottom diameter of 3.4 mm; the length of the optimized scaffold was 9.4 mm. The analysis also suggested that a height of about 9 mm for a metal post with a diameter of 1.4 mm is suitable for crown restoration in bio-root regeneration. In order to validate the physiological function of the shape-optimized scaffold in vivo, we transplanted the shape-optimized treated dentin matrix (TDM) scaffold, seeded with dental stem cells, into alveolar

  7. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
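    The quantities under discussion are straightforward to compute: the sensitivity of λ to each matrix entry follows from the left and right eigenvectors, and elasticities rescale those sensitivities proportionally. The two-stage matrix below is an invented example, not the killer whale data.

      import numpy as np

      A = np.array([[0.0, 1.5],     # stage-specific fecundities
                    [0.5, 0.8]])    # survival/transition rates

      vals, W = np.linalg.eig(A)
      k = int(np.argmax(vals.real))
      lam = vals[k].real                         # finite rate of increase, lambda
      w = W[:, k].real                           # right eigenvector: stable stage structure

      valsT, V = np.linalg.eig(A.T)
      v = V[:, int(np.argmax(valsT.real))].real  # left eigenvector: reproductive values

      S = np.outer(v, w) / (v @ w)               # sensitivities d(lambda)/d(a_ij)
      E = (A / lam) * S                          # elasticities (proportional sensitivities)
      print(lam, E.sum())                        # elasticities sum to 1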

  8. LISA Telescope Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene; Krebs, Carolyn (Technical Monitor)

    2002-01-01

    The Laser Interferometer Space Antenna (LISA) for the detection of Gravitational Waves is a very long baseline interferometer which will measure the changes in the distance of a five-million-kilometer arm to picometer accuracies. As with any optical system, even one with such very large separations between the transmitting and receiving telescopes, a sensitivity analysis should be performed to see how, in this case, the far-field phase varies when the telescope parameters change as a result of small temperature changes.

  9. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes Maximum Likelihood Estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the Likelihood Ratio Test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.

  10. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications. PMID:22575873
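    The dispersivity-fitting step can be sketched with the standard steady-state profile for transverse mixing across a plume fringe, C/C0 = 0.5·erfc(y/(2·sqrt(αT·x))); the synthetic "measurements," noise level, and observation distance below are assumptions standing in for the tank data.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erfc

      x_obs = 0.5                               # distance from the inlet (m)

      def profile(y, alpha_T):
          # Normalized concentration across the mixing zone at distance x_obs.
          return 0.5 * erfc(y / (2.0 * np.sqrt(alpha_T * x_obs)))

      rng = np.random.default_rng(4)
      y = np.linspace(-0.02, 0.02, 41)          # transverse sampling positions (m)
      c_meas = profile(y, 1.48e-5) + rng.normal(scale=0.01, size=y.size)

      popt, pcov = curve_fit(profile, y, c_meas, p0=[1e-5])
      print(popt[0], np.sqrt(pcov[0, 0]))       # fitted alpha_T and its standard error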

  11. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory experimental set-up of a quasi two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for the estimation of transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivities of the measurements taken at the tank experiment on the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.

  12. WASTE PACKAGE DESIGN SENSITIVITY REPORT

    SciTech Connect

    P. Mecharet

    2001-03-09

    The purpose of this technical report is to present the current designs for waste packages, to determine which designs will be evaluated for the Site Recommendation (SR) or License Application (LA), and to demonstrate how the design will be shown to comply with the applicable design criteria. The evaluations to support SR or LA are based on system description document criteria. The objective is to determine those system description document criteria for which compliance is to be demonstrated for SR and, having identified the criteria, to refer to the documents that show compliance. In addition, those system description document criteria for which compliance will be addressed for LA are identified, with a distinction made between two steps of the LA process: the LA-Construction Authorization (LA-CA) phase on one hand, and the LA-Receive and Possess (LA-R&P) phase on the other. The scope of this work encompasses the Waste Package Project disciplines of criticality, shielding, structural, and thermal analysis.

  13. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  14. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  15. Sensitivity analysis in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bristow, D. R.

    1984-01-01

    Information on sensitivity analysis in computational aerodynamics is given in outline, graphical, and chart form. The prediction accuracy of the MCAERO program, a perturbation analysis method, is discussed. A procedure for calculating the perturbation matrix, baseline wing paneling for perturbation analysis test cases, and applications of an inviscid sensitivity matrix are among the topics covered.

  16. Design stress evaluation based on strain-rate sensitivity analysis for nickel alloys used in the very-high temperature nuclear system

    SciTech Connect

    Mo, K.; Tung, H. M.; Chen, X.; Zhao, Y.; Stubbins, J. F.

    2012-07-01

    Both Alloy 617 and Alloy 230 have been considered the most promising structural materials for the Very High Temperature Reactor (VHTR). In this study, the mechanical properties of both alloys were examined by performing tensile tests at three different strain rates and at temperatures up to 1000 °C. This range covers time-independent (plasticity) to time-dependent (creep) deformations. A strain-rate sensitivity analysis for each alloy was conducted in order to approximate the long-term flow stresses. The strain-rate sensitivities for the 0.2% flow stress were found to be temperature independent (m ≈ 0) at temperatures ranging from room temperature to 700 °C due to dynamic strain aging. At elevated temperatures (800-1000 °C), the strain-rate sensitivity increased significantly (m > 0.1). Compared to Alloy 617, Alloy 230 displayed higher strain-rate sensitivities at high temperatures, which leads to lower estimated long-term flow stresses. The results of this analysis were used to evaluate current American Society of Mechanical Engineers (ASME) allowable design limits. According to the comparison with the estimated flow stresses, the allowable design stresses in the ASME Boiler and Pressure Vessel Code for either alloy did not provide adequate degradation estimation for the possible long-term service life in the VHTR. However, rupture stresses for Alloy 617, developed in ASME code case N-47-28, can generally satisfy the safety margin estimated in this study from the strain-rate sensitivity analysis. Nevertheless, additional material development studies might be required, since the design parameters for rupture stresses are constrained such that current VHTR conceptual designs cannot satisfy the limits. (authors)
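    The strain-rate sensitivity exponent underlying this analysis is m = ∂(ln σ)/∂(ln ε̇); in the simplest case it is estimated from flow stresses measured at two strain rates. The numbers below are illustrative, not the measured alloy data.

      import numpy as np

      def strain_rate_sensitivity(sigma1, rate1, sigma2, rate2):
          # m = ln(sigma2/sigma1) / ln(rate2/rate1)
          return np.log(sigma2 / sigma1) / np.log(rate2 / rate1)

      # Rate-insensitive low-temperature behaviour (m ~ 0):
      print(strain_rate_sensitivity(300.0, 1e-4, 302.0, 1e-2))
      # Strongly rate-sensitive high-temperature behaviour (m > 0.1):
      print(strain_rate_sensitivity(120.0, 1e-4, 200.0, 1e-2))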

  17. Structural sensitivity analysis: Methods, applications and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.

  18. Structural sensitivity analysis: Methods, applications, and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.

  19. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483

  20. Sensitivity and Uncertainty Analysis Shell

    1999-04-20

    SUNS (Sensitivity and Uncertainty Analysis Shell) is a 32-bit application that runs under Windows 95/98 and Windows NT. It is designed to aid in statistical analyses for a broad range of applications. The class of problems for which SUNS is suitable is generally defined by two requirements: 1. A computer code is developed or acquired that models some processes for which input is uncertain and the user is interested in statistical analysis of the output of that code. 2. The statistical analysis of interest can be accomplished using the Monte Carlo analysis. The implementation then requires that the user identify which input to the process model is to be manipulated for statistical analysis. With this information, the changes required to loosely couple SUNS with the process model can be completed. SUNS is then used to generate the required statistical sample and the user-supplied process model analyses the sample. The SUNS post processor displays statistical results from any existing file that contains sampled input and output values.
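    The loose coupling described above can be sketched in a few lines: a driver draws the statistical sample of the uncertain inputs, a user-supplied process model (here a stand-in Python function rather than a separately compiled code) evaluates it, and the driver post-processes the outputs.

      import numpy as np

      def process_model(x):
          # Hypothetical deterministic model of some physical process.
          return x[0] ** 2 + 3.0 * x[1]

      rng = np.random.default_rng(5)
      n_samples = 10_000
      samples = np.column_stack([
          rng.normal(1.0, 0.1, n_samples),      # uncertain input 1
          rng.uniform(0.0, 2.0, n_samples),     # uncertain input 2
      ])

      outputs = np.array([process_model(x) for x in samples])
      print(outputs.mean(), outputs.std(), np.percentile(outputs, [5, 95]))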

  1. One-Dimensional, Multigroup Cross Section and Design Sensitivity and Uncertainty Analysis Code System - Generalized Perturbation Theory.

    1981-02-02

    Version: 00 SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections (of standard multigroup cross-section sets) and for secondary energy distributions (SEDs) of multigroup scattering matrices.

  2. Further comments on sensitivities, parameter estimation, and sampling design in one-dimensional analysis of solute transport in porous media

    USGS Publications Warehouse

    Knopman, D.S.; Voss, C.I.

    1988-01-01

    Sensitivities of solute concentration to parameters associated with first-order chemical decay, boundary conditions, initial conditions, and multilayer transport are examined. A sensitivity is a change in solute concentration resulting from a change in a model parameter. The minimum information required for the estimation of model parameters by regression on chemical data is expressed in terms of sensitivities. Nonlinear regression models were tested on sets of noiseless observations from known models that exceeded the minimum sensitivity information requirements. Results demonstrate that the regression models consistently converged to the correct parameters when the initial sets of parameter values deviated substantially from the correct parameters. -from Authors
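    The role of sensitivities in judging whether a sampling design can support regression can be sketched by assembling a scaled sensitivity (Jacobian) matrix for a simple transport model and inspecting its singular values; the model below uses the common first-term approximation of the Ogata-Banks 1D solution, and all parameter values and observation times are assumptions.

      import numpy as np
      from scipy.special import erfc

      def conc(t, v, D, x=1.0, c0=1.0):
          # First-term approximation of the 1D advection-dispersion breakthrough curve.
          return 0.5 * c0 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

      t = np.linspace(0.1, 3.0, 15)        # observation times
      theta = np.array([1.0, 0.05])        # parameters (v, D)

      # Central-difference sensitivities, scaled by parameter values so the
      # columns are dimensionless and comparable.
      Jac = np.empty((t.size, theta.size))
      for j, pval in enumerate(theta):
          dp = 1e-6 * pval
          up, dn = theta.copy(), theta.copy()
          up[j] += dp; dn[j] -= dp
          Jac[:, j] = pval * (conc(t, *up) - conc(t, *dn)) / (2.0 * dp)

      # A singular value near zero (relative to the largest) flags a parameter
      # combination the sampling design cannot resolve by regression.
      print(np.linalg.svd(Jac, compute_uv=False))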

  3. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Laboratory, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratory, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratory, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  4. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  5. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  6. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  7. Sensitivity analysis of thermodynamic calculations

    NASA Astrophysics Data System (ADS)

    Irwin, C. L.; Obrien, T. J.

    Iterative solution methods and sensitivity analysis for mathematical models of chemical equilibrium are formally similar. For models which use a Newton-type iterative solution scheme, such as the NASA-Lewis CEC code or the R-Gibbs unit of ASPEN, it is shown that extensive sensitivity information is available for approximately the cost of one additional Newton iteration. All matrices and vectors required for implementation of first and second order sensitivity analysis in the CEC code are given in an appendix. A simple problem for which an analytical solution is possible is presented to illustrate the calculations and verify the computer calculations.
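
    A minimal numerical sketch of this observation (a hypothetical two-equation system, not the CEC or R-Gibbs formulation): once F(x, p) = 0 has been solved by Newton iteration, the first-order sensitivities dx/dp come from one more solve with the already-converged Jacobian.

        import numpy as np

        # Hypothetical "equilibrium" model F(x, p) = 0 (illustration only):
        #   F1 = x0 + x1 - p0,   F2 = x0*x1 - p1
        def F(x, p):
            return np.array([x[0] + x[1] - p[0], x[0] * x[1] - p[1]])

        def J(x):                                 # Jacobian dF/dx
            return np.array([[1.0, 1.0], [x[1], x[0]]])

        dFdp = np.array([[-1.0, 0.0], [0.0, -1.0]])   # dF/dp for this model

        p = np.array([3.0, 2.0])
        x = np.array([0.5, 2.5])                  # initial guess
        for _ in range(20):                       # Newton iteration
            dx = np.linalg.solve(J(x), -F(x, p))
            x += dx
            if np.linalg.norm(dx) < 1e-12:
                break

        # Sensitivities reuse the converged Jacobian: J (dx/dp) = -dF/dp,
        # i.e., roughly the cost of one additional Newton iteration.
        S = np.linalg.solve(J(x), -dFdp)
        print("solution x =", x, "\nsensitivities dx/dp =\n", S)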

  8. Sensitivity Analysis Using Risk Measures.

    PubMed

    Tsanakas, Andreas; Millossovich, Pietro

    2016-01-01

    In a quantitative model with uncertain inputs, the uncertainty of the output can be summarized by a risk measure. We propose a sensitivity analysis method based on derivatives of the output risk measure, in the direction of model inputs. This produces a global sensitivity measure, explicitly linking sensitivity and uncertainty analyses. We focus on the case of distortion risk measures, defined as weighted averages of output percentiles, and prove a representation of the sensitivity measure that can be evaluated on a Monte Carlo sample, as a weighted average of gradients over the input space. When the analytical model is unknown or hard to work with, nonparametric techniques are used for gradient estimation. This process is demonstrated through the example of a nonlinear insurance loss model. Furthermore, the proposed framework is extended in order to measure sensitivity to constant model parameters, uncertain statistical parameters, and random factors driving dependence between model inputs.
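
    The following Monte Carlo sketch illustrates the representation, assuming a made-up nonlinear loss model and Expected Shortfall as the distortion risk measure (distortion weight 1/(1 - alpha) above the alpha-quantile); the sensitivity in each input direction is then the weighted average of analytic gradients over the sample.

        import numpy as np

        rng = np.random.default_rng(0)
        n, alpha = 100_000, 0.95

        # Made-up nonlinear loss model Y = g(X1, X2) (illustration only)
        X1 = rng.lognormal(0.0, 0.5, n)
        X2 = rng.gamma(2.0, 1.0, n)
        Y = X1 * X2 + 0.1 * X2 ** 2
        dY_dX1 = X2                    # analytic gradients of g
        dY_dX2 = X1 + 0.2 * X2

        # Distortion weights for Expected Shortfall:
        # gamma'(u) = 1/(1 - alpha) for u > alpha, zero otherwise
        u = (np.argsort(np.argsort(Y)) + 0.5) / n   # empirical percentiles of Y
        w = np.where(u > alpha, 1.0 / (1.0 - alpha), 0.0)

        # Sensitivity = weighted average of gradients over the Monte Carlo sample
        print("ES sensitivity, X1 direction:", (w * dY_dX1).mean())
        print("ES sensitivity, X2 direction:", (w * dY_dX2).mean())
        print("ES of Y itself             :", (w * Y).mean())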

  9. D2PC sensitivity analysis

    SciTech Connect

    Lombardi, D.P.

    1992-08-01

    The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most likely and worst case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide variety of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.

  10. A measurement system analysis with design of experiments: Investigation of the adhesion performance of a pressure sensitive adhesive with the probe tack test.

    PubMed

    Michaelis, Marc; Leopold, Claudia S

    2015-12-30

    The tack of a pressure sensitive adhesive (PSA) is not an inherent material property and strongly depends on the measurement conditions. Following the concept of a measurement system analysis (MSA), influencing factors of the probe tack test were investigated by a design of experiments (DoE) approach. A response surface design with 38 runs was built to evaluate the influence of detachment speed, dwell time, contact force, adhesive film thickness and API content on tack, determined as the maximum of the stress-strain curve (σmax). It could be shown that all investigated factors have a significant effect on the response and that the DoE approach enabled the detection of two-factor interactions between the dwell time, the contact force, the adhesive film thickness and the API content. Surprisingly, it was found that tack increases with decreasing and not with increasing adhesive film thickness. PMID:26428630
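
    As an illustration of the DoE machinery described above (on synthetic data, not the published measurements), the sketch below fits a response surface with main effects and all two-factor interactions to 38 simulated runs by ordinary least squares.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(1)

        # Coded factors (-1..+1): speed, dwell, force, thickness, API (synthetic)
        X = rng.uniform(-1.0, 1.0, size=(38, 5))
        y = (1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.6 * X[:, 3]
             + 0.3 * X[:, 1] * X[:, 2]                  # a planted interaction
             + 0.05 * rng.standard_normal(38))          # measurement noise

        # Design matrix: intercept, 5 main effects, all 10 two-factor interactions
        cols = [np.ones(len(X))] + [X[:, i] for i in range(5)]
        cols += [X[:, i] * X[:, j] for i, j in combinations(range(5), 2)]
        A = np.column_stack(cols)

        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        print("estimated main effects      :", np.round(beta[1:6], 2))
        print("largest interaction estimate:", round(max(beta[6:], key=abs), 2))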

  12. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible to not only predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation, and apply it to the problem of optimizing an ultrawideband antenna.
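
    A minimal sketch of the adjoint idea on a generic discretized system K(p)x = b with objective J = c^T x (a stand-in for an antenna figure of merit, not the H(curl) finite element formulation): one forward solve plus one adjoint solve give the gradient with respect to all parameters, checked here against finite differences whose cost grows with the parameter count.

        import numpy as np

        # Generic parameter-dependent system matrix (illustration only)
        def K(p):
            return np.array([[p[0] + p[1], -p[1]],
                             [-p[1], p[1] + 2.0]])

        b = np.array([1.0, 0.0])
        c = np.array([0.0, 1.0])
        p = np.array([2.0, 3.0])

        x = np.linalg.solve(K(p), b)          # one forward solve
        lam = np.linalg.solve(K(p).T, c)      # one adjoint solve

        # dJ/dp_k = -lam^T (dK/dp_k) x : all parameters from just two solves
        dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),
              np.array([[1.0, -1.0], [-1.0, 1.0]])]
        grad = np.array([-lam @ (dKk @ x) for dKk in dK])
        print("adjoint gradient         :", grad)

        # Finite-difference check (one extra solve per parameter)
        eps, J0 = 1e-6, c @ x
        fd = [(c @ np.linalg.solve(K(p + eps * np.eye(2)[k]), b) - J0) / eps
              for k in range(2)]
        print("finite-difference gradient:", np.array(fd))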

  13. Wideband sensitivity analysis of plasmonic structures

    NASA Astrophysics Data System (ADS)

    Ahmed, Osman S.; Bakr, Mohamed H.; Li, Xun; Nomura, Tsuyoshi

    2013-03-01

    We propose an adjoint variable method (AVM) for efficient wideband sensitivity analysis of dispersive plasmonic structures. Transmission Line Modeling (TLM) is exploited for calculation of the structure sensitivities. The theory is developed for general dispersive materials modeled by the Drude or Lorentz model. Utilizing the dispersive AVM, sensitivities are calculated with respect to all the designable parameters, regardless of their number, using at most one extra simulation. This is significantly more efficient than the regular finite difference approaches, whose computational overhead scales linearly with the number of design parameters. A Z-domain formulation is utilized to allow for the extension of the theory to a general material model. The theory has been successfully applied to a structure with a teeth-shaped plasmonic resonator. The design variables are the shape parameters (widths and thicknesses) of these teeth. The results are compared to the accurate yet expensive finite difference approach and good agreement is achieved.

  14. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism.

    PubMed

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-04-02

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor.

  16. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism

    PubMed Central

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-01-01

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor. PMID:27049385

  17. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  18. Sensitivity analysis and application in exploration geophysics

    NASA Astrophysics Data System (ADS)

    Tang, R.

    2013-12-01

    In exploration geophysics, the usual way of dealing with geophysical data is to form an Earth model describing the underground structure in the area of investigation. The resolved model, however, is based on the inversion of survey data that are unavoidably contaminated by various noises and are sampled at a limited number of observation sites. Furthermore, due to the inherent non-uniqueness of the geophysical inverse problem, the result is ambiguous, and it is not clear which parts of the model features are well resolved by the data. The interpretation of the result is therefore difficult. We applied a sensitivity analysis to address this problem in magnetotellurics (MT). The sensitivity matrix, also called the Jacobian matrix, is comprised of the partial derivatives of the data with respect to the model parameters. In practical inversion, the matrix can be calculated by direct modeling of the theoretical response for the given model perturbation, or by application of the perturbation approach and reciprocity theory. By calculating the sensitivity matrix we obtain visualized sensitivity plots, which indicate the less-resolved parts of the model that should not be considered in interpretation, while the well-resolved parameters can be regarded as relatively convincing. Sensitivity analysis is thereby a necessary and helpful tool for increasing the reliability of inverse models. Another main problem of exploration geophysics concerns design strategies for joint geophysical surveys, i.e. gravity, magnetic and electromagnetic methods. Since geophysical methods are based on linear or nonlinear relationships between observed data and subsurface parameters, an appropriate design scheme which provides maximum information content within a restricted budget is quite difficult to achieve. Here we first studied the sensitivity of different geophysical methods by mapping the spatial distribution of different survey sensitivity with respect to the

  19. Addressing the expected survival benefit for clinical trial design in metastatic castration-resistant prostate cancer: Sensitivity analysis of randomized trials.

    PubMed

    Massari, Francesco; Modena, Alessandra; Ciccarese, Chiara; Pilotto, Sara; Maines, Francesca; Bracarda, Sergio; Sperduti, Isabella; Giannarelli, Diana; Carlini, Paolo; Santini, Daniele; Tortora, Giampaolo; Porta, Camillo; Bria, Emilio

    2016-02-01

    We performed a sensitivity analysis, cumulating all randomized clinical trials (RCTs) in which patients with metastatic castration-resistant prostate cancer (mCRPC) received systemic therapy, to evaluate whether the comparison of RCTs may drive biased survival estimations. An overall survival (OS) significant difference according to therapeutic strategy was more likely to be determined in RCTs evaluating hormonal drugs versus those studies testing immunotherapy, chemotherapy or other strategies. With regard to the control arm, an OS significant effect was found for placebo-controlled trials versus studies comparing the experimental treatment with active therapies. Finally, regarding docetaxel (DOC) timing, the OS benefit was more likely to be proved in the post-DOC setting in comparison with the DOC and pre-DOC settings. These data suggest that clinical trial design should take into account new benchmarks such as the type of treatment strategy, the choice of the comparator and the phase of the disease in relation to the administration of standard chemotherapy.

  20. Involute composite design evaluation using global design sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Hart, J. K.; Stanton, E. L.

    1989-01-01

    An optimization capability for involute structures has been developed. Its key feature is the use of global material geometry variables which are so chosen that all combinations of design variables within a set of lower and upper bounds correspond to manufacturable designs. A further advantage of global variables is that their number does not increase with increasing mesh density. The accuracy of the sensitivity derivatives has been verified both through finite difference tests and through the successful use of the derivatives by an optimizer. The state of the art in composite design today is still marked by point design algorithms linked together using ad hoc methods not directly related to a manufacturing procedure. The global design sensitivity approach presented here for involutes can be applied to filament wound shells and other composite constructions using material form features peculiar to each construction. The present involute optimization technology is being applied to the Space Shuttle SRM nozzle boot ring redesigns by PDA Engineering.

  1. Stellarator Coil Design and Plasma Sensitivity

    SciTech Connect

    Long-Poe Ku and Allen H. Boozer

    2010-11-03

    The rich information contained in the plasma response to external magnetic perturbations can be used to help design stellarator coils more effectively. We demonstrate the feasibility by first developing a simple, direct method to study perturbations in stellarators that do not break stellarator symmetry and periodicity. The method applies a small perturbation to the plasma boundary and evaluates the resulting perturbed free-boundary equilibrium to build up a sensitivity matrix for the important physics attributes of the underlying configuration. Using this sensitivity information, design methods for better stellarator coils are then developed. The procedure and a proof-of-principle application are given that (1) determine the spatial distributions of external normal magnetic field at the location of the unperturbed plasma boundary to which the plasma properties are most sensitive, (2) determine the distributions of external normal magnetic field that can be produced most efficiently by distant coils, (3) choose the ratios of the magnitudes of the efficiently produced magnetic distributions so the sensitive plasma properties can be controlled. Using these methods, sets of modular coils are found for the National Compact Stellarator Experiment (NCSX) that are either smoother or can be located much farther from the plasma boundary than those of the present design.
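
    One common way to extract the "most sensitive" field distributions from such a sensitivity matrix is a singular value decomposition (an assumed post-processing step, sketched here on a random stand-in matrix rather than actual equilibrium data).

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical sensitivity matrix S: rows = physics attributes,
        # columns = normal-field perturbation modes on the plasma boundary
        S = rng.standard_normal((6, 20))

        # SVD ranks the external-field distributions (right singular vectors)
        # by how strongly the physics attributes respond to them
        U, sing, Vt = np.linalg.svd(S, full_matrices=False)
        print("response strengths:", np.round(sing, 2))
        print("most sensitive field distribution (first right singular vector):")
        print(np.round(Vt[0], 2))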

  2. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  3. Sensitivity Analysis in the Model Web

    NASA Astrophysics Data System (ADS)

    Jones, R.; Cornford, D.; Boukouvalas, A.

    2012-04-01

    The Model Web, and in particular the Uncertainty enabled Model Web being developed in the UncertWeb project aims to allow model developers and model users to deploy and discover models exposed as services on the Web. In particular model users will be able to compose model and data resources to construct and evaluate complex workflows. When discovering such workflows and models on the Web it is likely that the users might not have prior experience of the model behaviour in detail. It would be particularly beneficial if users could undertake a sensitivity analysis of the models and workflows they have discovered and constructed to allow them to assess the sensitivity to their assumptions and parameters. This work presents a Web-based sensitivity analysis tool which provides computationally efficient sensitivity analysis methods for models exposed on the Web. In particular the tool is tailored to the UncertWeb profiles for both information models (NetCDF and Observations and Measurements) and service specifications (WPS and SOAP/WSDL). The tool employs emulation technology where this is found to be possible, constructing statistical surrogate models for the models or workflows, to allow very fast variance based sensitivity analysis. Where models are too complex for emulation to be possible, or evaluate too fast for this to be necessary the original models are used with a carefully designed sampling strategy. A particular benefit of constructing emulators of the models or workflow components is that within the framework these can be communicated and evaluated at any physical location. The Web-based tool and backend API provide several functions to facilitate the process of creating an emulator and performing sensitivity analysis. A user can select a model exposed on the Web and specify the input ranges. Once this process is complete, they are able to perform screening to discover important inputs, train an emulator, and validate the accuracy of the trained emulator. In

  4. A numerical comparison of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analyses methodologies, but none as comprehensive as the current work.

  5. [Sensitivity analysis in health investment projects].

    PubMed

    Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C

    1994-01-01

    This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.

  6. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., the fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty for investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take into account this case for the dynamic log gains. Results We present an algorithm with an adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decouple direct methods in computing dynamic sensitivities of an ODE system, the step size determined by model equations can be used on the computations of the time profile and dynamic sensitivities with moderate accuracy even when sensitivity equations are more stiff than model equations. To show this algorithm can perform the dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time
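
    The forward (direct) sensitivity computation underlying this work can be sketched in a few lines: augment the model ODE with its sensitivity equation ds/dt = (df/dy)s + df/dp and integrate both together. The toy decay model and off-the-shelf SciPy integrator below are illustrative assumptions, not the authors' adaptive algorithm.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy kinetic model dy/dt = -k*y; its forward sensitivity s = dy/dk
        # satisfies ds/dt = (df/dy)*s + df/dk = -k*s - y, with s(0) = 0.
        def rhs(t, z, k):
            y, s = z
            return [-k * y, -k * s - y]

        k = 2.0
        sol = solve_ivp(rhs, (0.0, 1.0), [1.0, 0.0], args=(k,),
                        rtol=1e-10, atol=1e-12)
        y1, s1 = sol.y[:, -1]
        print("y(1) =", y1, " dy/dk(1) =", s1)
        # exact: y = exp(-k*t), dy/dk = -t*exp(-k*t)
        print("exact:", np.exp(-k), -1.0 * np.exp(-k))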

  7. Designing robots for care: care centered value-sensitive design.

    PubMed

    van Wynsberghe, Aimee

    2013-06-01

    The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship--what I refer to as care robots--require rigorous ethical reflection to ensure their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What's more, given the stage of their development and lack of standards provided by the International Organization for Standardization to guide their development, ethics ought to be included into the design process of such robots. The manner in which this may be accomplished, as presented here, uses the blueprint of the Value-sensitive design approach as a means for creating a framework tailored to care contexts. Using care values as the foundational values to be integrated into a technology and using the elements in care, from the care ethics perspective, as the normative criteria, the resulting approach may be referred to as care centered value-sensitive design. The framework proposed here allows for the ethical evaluation of care robots both retrospectively and prospectively. By evaluating care robots in this way, we may ultimately ask what kind of care we, as a society, want to provide in the future.

  8. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    SciTech Connect

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-05

    In the past decade, sheet metal forming simulation became a well established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for virtual tryout / simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process -- in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified and assigned to the influential parameters. With this knowledge, defects can be eliminated or, for example, springback can be compensated; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.

  9. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  10. Stiff DAE integrator with sensitivity analysis capabilities

    2007-11-26

    IDAS is a general purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.

  11. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  12. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  13. Implementation of efficient sensitivity analysis for optimization of large structures

    NASA Technical Reports Server (NTRS)

    Umaretiya, J. R.; Kamil, H.

    1990-01-01

    The paper presents the theoretical bases and implementation techniques of sensitivity analyses for efficient structural optimization of large structures, based on finite element static and dynamic analysis methods. The sensitivity analyses have been implemented in conjunction with two methods for optimization, namely, the Mathematical Programming and Optimality Criteria methods. The paper discusses the implementation of the sensitivity analysis method into our in-house software package, AutoDesign.

  14. Data fusion qualitative sensitivity analysis

    SciTech Connect

    Clayton, E.A.; Lewis, R.E.

    1995-09-01

    Pacific Northwest Laboratory was tasked with testing, debugging, and refining the Hanford Site data fusion workstation (DFW), with the assistance of Coleman Research Corporation (CRC), before delivering the DFW to the environmental restoration client at the Hanford Site. Data fusion is the mathematical combination (or fusion) of disparate data sets into a single interpretation. The data fusion software used in this study was developed by CRC. The data fusion software developed by CRC was initially demonstrated on a data set collected at the Hanford Site where three types of data were combined. These data were (1) seismic reflection, (2) seismic refraction, and (3) depth to geologic horizons. The fused results included a contour map of the top of a low-permeability horizon. This report discusses the results of a sensitivity analysis of data fusion software to variations in its input parameters. The data fusion software developed by CRC has a large number of input parameters that can be varied by the user and that influence the results of data fusion. Many of these parameters are defined as part of the earth model. The earth model is a series of 3-dimensional polynomials with horizontal spatial coordinates as the independent variables and either subsurface layer depth or values of various properties within these layers (e.g., compression wave velocity, resistivity) as the dependent variables.

  15. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  16. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
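
    Two of the simplest families of techniques reviewed, one-at-a-time variation and sampling-based correlation/regression measures, can be sketched as follows for a hypothetical three-parameter model (illustrative only):

        import numpy as np

        rng = np.random.default_rng(2)

        # Hypothetical three-parameter model (stands in for a dose model)
        def f(x):
            return 3.0 * x[..., 0] + x[..., 1] ** 2 + 0.1 * x[..., 2]

        base = np.array([1.0, 1.0, 1.0])

        # 1) One-at-a-time: perturb each parameter by +10% around the base case
        for i in range(3):
            x = base.copy()
            x[i] *= 1.10
            print(f"OAT parameter {i + 1}: dy = {f(x) - f(base):+.3f}")

        # 2) Sampling-based: correlation and standardized regression coefficients
        X = rng.uniform(0.5, 1.5, size=(10_000, 3))
        y = f(X)
        r = [np.corrcoef(X[:, i], y)[0, 1] for i in range(3)]
        print("correlation coefficients :", np.round(r, 3))

        A = np.column_stack([np.ones(len(X)), (X - X.mean(0)) / X.std(0)])
        beta, *_ = np.linalg.lstsq(A, (y - y.mean()) / y.std(), rcond=None)
        print("standardized regression  :", np.round(beta[1:], 3))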

  17. Geothermal power, policy, and design: Using levelized cost of energy and sensitivity analysis to target improved policy incentives for the U.S. geothermal market

    NASA Astrophysics Data System (ADS)

    Richard, Christopher L.

    At the core of the geothermal industry is a need to identify how policy incentives can be better applied for optimal return. Literature from Bloomquist (1999), Doris et al. (2009), and McIlveen (2011) suggests that a more tailored approach to crafting geothermal policy is warranted. In this research the guiding theory is based on those suggestions and is structured to represent a policy analysis approach using analytical methods. The methods used focus on qualitative and quantitative results. To address the qualitative sections of this research, an extensive review of contemporary literature is used to identify the frequency of use of specific barriers, followed up with an industry survey to determine existing gaps. As a result there is support for certain barriers and justification for expanding those barriers found within the literature. This method of inquiry is an initial point for structuring modeling tools to further quantify the research results as part of the theoretical framework. Analytical modeling utilizes the levelized cost of energy as a foundation for comparative assessment of policy incentives. Model parameters use assumptions to draw conclusions from literature and survey results to reflect unique attributes held by geothermal power technologies. Further testing by policy option provides an opportunity to assess the sensitivity of each variable with respect to applied policy. Master limited partnerships, feed-in tariffs, RD&D, and categorical exclusions all result as viable options for mitigating specific barriers associated with developing geothermal power. The results show reductions of levelized cost based upon the model's exclusive parameters. These results are also compared to contemporary policy options, highlighting the need for tailored policy, as discussed by Bloomquist (1999), Doris et al. (2009), and McIlveen (2011). It is the intent of this research to provide the reader with a descriptive understanding of the role of

  18. Structural design utilizing updated, approximate sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1993-01-01

    A method to improve the computational efficiency of structural optimization algorithms is investigated. In this method, the calculations of 'exact' sensitivity derivatives of constraint functions are performed only at selected iterations during the optimization process. The sensitivity derivatives utilized within other iterations are approximate derivatives which are calculated using an inexpensive derivative update formula. Optimization results are presented for an analytic optimization problem (i.e., one having simple polynomial expressions for the objective and constraint functions) and for two structural optimization problems. The structural optimization results indicate that up to a factor of three improvement in computation time is possible when using the updated sensitivity derivatives.
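
    The idea of computing exact derivatives only at selected iterations and updating them cheaply in between can be sketched with a Broyden-style rank-one update (an assumed stand-in; the paper's particular update formula is not reproduced here):

        import numpy as np

        def g(x):                         # constraint function (illustrative)
            return np.array([x[0] ** 2 + x[1] - 1.0])

        def exact_jac(x):                 # "exact" sensitivity (closed form here)
            return np.array([[2.0 * x[0], 1.0]])

        x = np.array([1.0, 2.0])
        B = exact_jac(x)                  # exact derivatives at a selected iteration
        for it in range(5):               # later iterations use cheap updates only
            dx = np.array([0.1, -0.05])   # step chosen by the optimizer (assumed)
            dg = g(x + dx) - g(x)
            # Broyden rank-one update: B <- B + (dg - B dx) dx^T / (dx^T dx)
            B = B + np.outer(dg - B @ dx, dx) / (dx @ dx)
            x = x + dx
            print(f"iter {it}: approx {B[0]}, exact {exact_jac(x)[0]}")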

  19. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for a gradient-based optimization with aeroelastic constraints. In this study, flutter characteristic of a typical section in subsonic compressible flow is examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency is calculated analytically. A strip theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars and ribs and top and bottom wing skins could be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model which incorporates first-order shear deformation theory is then examined so it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  20. FOCUS - An experimental environment for fault sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Gwan S.; Iyer, Ravishankar K.

    1992-01-01

    FOCUS, a simulation environment for conducting fault-sensitivity analysis of chip-level designs, is described. The environment can be used to evaluate alternative design tactics at an early design stage. A range of user-specified faults is automatically injected at runtime, and their propagation to the chip I/O pins is measured through the gate and higher levels. A number of techniques for fault-sensitivity analysis are proposed and implemented in the FOCUS environment. These include transient impact assessment on latch, pin and functional errors, external pin error distribution due to in-chip transients, charge-level sensitivity analysis, and error propagation models to depict the dynamic behavior of latch errors. A case study of the impact of transient faults on a microprocessor-based jet-engine controller is used to identify the critical fault propagation paths, the module most sensitive to fault propagation, and the module with the highest potential for causing external errors.

  1. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  2. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
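
    The GSE assembly of local disciplinary sensitivities into global derivatives can be sketched for two coupled scalar "disciplines" (hypothetical coupling coefficients, far simpler than the wing model):

        import numpy as np

        # Two coupled "disciplines" (aerodynamic load a, structural deflection b):
        #   a = A(b, p) = p - 0.3*b      (load relieved by deflection)
        #   b = B(a)    = 0.5*a          (deflection driven by load)
        dA_db, dA_dp = -0.3, 1.0         # local sensitivities of discipline A
        dB_da = 0.5                      # local sensitivity of discipline B

        # Global Sensitivity Equations for the coupled pair:
        #   [ 1      -dA/db ] [da/dp]   [dA/dp]
        #   [-dB/da   1     ] [db/dp] = [  0   ]
        M = np.array([[1.0, -dA_db], [-dB_da, 1.0]])
        rhs = np.array([dA_dp, 0.0])
        da_dp, db_dp = np.linalg.solve(M, rhs)
        print("global da/dp =", da_dp, " db/dp =", db_dp)

        # Check against the converged fixed point: a = p - 0.15*a -> a = p/1.15
        print("analytic da/dp =", 1.0 / 1.15)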

  3. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs some response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observation appears only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second order property. The OS can be considered as a generalized model because it contains all the available information. This presentation proposes a method to carry out sensitivity analysis in general. The method is demonstrated with an application to a water pollution problem. The model involves shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  4. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2013-01-01

    This paper presents the extended forward sensitivity analysis as a method to help uncertainty quantification. By including the time step, and potentially the spatial step, as special sensitivity parameters, the forward sensitivity method is extended as one method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other parameters of interest, the simulation is allowed to run at optimized time and space steps without affecting the confidence of the physical parameter sensitivity results. The time and space step forward sensitivity analysis method can also replace the traditional time step and grid convergence studies with much less computational cost. Two well-defined benchmark problems with manufactured solutions are utilized to demonstrate the method.

  5. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  6. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.
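
    For the free-vibration case, the classic eigenvalue sensitivity formula dlambda/dp = phi^T (dK/dp) phi (mass-normalized modes, mass matrix independent of p) can be sketched on a two-degree-of-freedom stand-in for a framed structure:

        import numpy as np
        from scipy.linalg import eigh

        # Two-DOF spring-mass stand-in for a framed structure (illustrative)
        def K(p):                     # stiffness depends on design parameter p
            return np.array([[p + 1.0, -p], [-p, p + 2.0]])

        M = np.diag([1.0, 2.0])       # mass matrix, independent of p
        p = 3.0
        dK = np.array([[1.0, -1.0], [-1.0, 1.0]])      # dK/dp

        lam, phi = eigh(K(p), M)      # K phi = lam M phi, modes mass-normalized
        for i in range(2):
            v = phi[:, i]
            print(f"mode {i}: lam = {lam[i]:.4f}, dlam/dp = {v @ dK @ v:.4f}")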

  7. Imaging system sensitivity analysis with NV-IPM

    NASA Astrophysics Data System (ADS)

    Fanning, Jonathan; Teaney, Brian

    2014-05-01

    This paper describes the sensitivity analysis capabilities to be added to version 1.2 of the NVESD imaging sensor model NV-IPM. Imaging system design always involves tradeoffs to design the best system possible within size, weight, and cost constraints. In general, the performance of a well designed system will be limited by the largest, heaviest, and most expensive components. Modeling is used to analyze system designs before the system is built. Traditionally, NVESD models were only used to determine the performance of a given system design. NV-IPM has the added ability to automatically determine the sensitivity of any system output to changes in the system parameters. The component-based structure of NV-IPM tracks the dependence between outputs and inputs such that only the relevant parameters are varied in the sensitivity analysis. This allows sensitivity analysis of an output such as probability of identification to determine the limiting parameters of the system. Individual components can be optimized by doing sensitivity analysis of outputs such as NETD or SNR. This capability will be demonstrated by analyzing example imaging systems.

  8. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex variable modifications already included into the software, sensitivity derivatives can automatically be computed. Other than for design purposes, sensitivity derivatives can be used for predicting the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
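
    The complex-variable technique referred to is the complex-step derivative: f'(x) ~ Im f(x + ih) / h, which involves no subtractive cancellation and so tolerates arbitrarily small steps. A classic toy illustration (not the aero-structural code):

      import numpy as np

      def f(x):
          return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

      x, h = 1.5, 1e-200                          # step can be absurdly small
      cs = f(x + 1j * h).imag / h                 # complex-step derivative
      fd = (f(x + 1e-8) - f(x - 1e-8)) / 2e-8     # finite difference for contrast
      print(cs, fd)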

  9. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  10. Proximity effect correction sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Zepka, Alex; Zimmermann, Rainer; Hoppe, Wolfgang; Schulz, Martin

    2010-05-01

    Determining the quality of a proximity effect correction (PEC) is often done via one-dimensional measurements such as CD deviations from target, corner rounding, or line-end shortening. An alternative approach would compare the entire perimeter of the exposed shape against its original design. Unfortunately, this is not viable in metrology, as there is a practical limit to the number of measurements that can be made in a reasonable amount of time. In this paper we make use of simulated results and introduce a method which may be considered complementary to the standard way of PEC qualification. It compares simulated contours with the target layout via a Boolean XOR operation, with the area of the XOR differences providing a direct measure of how closely a corrected layout approximates the target.
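
    A minimal sketch of the metric, assuming the shapely package: the XOR (symmetric difference) of the simulated contour and the target polygon, with its area reported both absolutely and relative to the design.

      from shapely.geometry import Polygon

      target  = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
      contour = Polygon([(0.3, 0.2), (10.1, 0.0), (9.8, 9.9), (0.0, 10.2)])

      xor = target.symmetric_difference(contour)
      print("XOR area:", xor.area)                  # absolute mismatch
      print("relative:", xor.area / target.area)    # normalized to design area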

  11. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  12. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressurized and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole data set while maintaining some 3D regularity conditions. Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this exploratory work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in the inversions. In particular, besides the source parameters, we focus on the ground deformation network topology and on measurement noise. The proposed analysis can be used for better interpretation of the algorithm's results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò F., Camacho A.G., González P.J., Mattia M., Puglisi G., Fernández J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  13. Tilt-Sensitivity Analysis for Space Telescopes

    NASA Technical Reports Server (NTRS)

    Papalexandris, Miltiadis; Waluschka, Eugene

    2003-01-01

    A report discusses a computational-simulation study of phase-front propagation in the Laser Interferometer Space Antenna (LISA), in which space telescopes would transmit and receive metrological laser beams along 5-Gm interferometer arms. The main objective of the study was to determine the sensitivity of the average phase of a beam with respect to fluctuations in pointing of the beam. The simulations account for the effects of obscurations by a secondary mirror and its supporting struts in a telescope, and for the effects of optical imperfections (especially tilt) of a telescope. A significant innovation introduced in this study is a methodology, applicable to space telescopes in general, for predicting the effects of optical imperfections. This methodology involves a Monte Carlo simulation in which one generates many random wavefront distortions and studies their effects through computational simulations of propagation. Then one performs a statistical analysis of the results of the simulations and computes the functional relations among such important design parameters as the sizes of distortions and the mean value and the variance of the loss of performance. These functional relations provide information regarding position and orientation tolerances relevant to design and operation.
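
    The methodology can be sketched in a few lines: draw random tilt errors, average the resulting phase over an obscured pupil, and collect statistics (all scales below are illustrative, not LISA values; the obscuration is what makes the averaged phase sensitive to tilt).

      import numpy as np

      rng = np.random.default_rng(0)
      x = np.linspace(-1, 1, 128)
      X, Y = np.meshgrid(x, x)
      pupil = (X**2 + Y**2) <= 1.0                  # primary aperture
      pupil &= (X**2 + Y**2) >= 0.2**2              # secondary-mirror obscuration
      pupil &= ~((np.abs(Y) < 0.05) & (X > 0))      # one supporting strut

      def mean_phase(tilt_x, tilt_y):
          phase = 2 * np.pi * (tilt_x * X + tilt_y * Y)   # tilt in waves
          return phase[pupil].mean()

      tilt_rms = 0.01                               # assumed rms tilt, waves
      samples = [mean_phase(*rng.normal(0.0, tilt_rms, 2)) for _ in range(2000)]
      print("mean:", np.mean(samples), " std:", np.std(samples))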

  14. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
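
    One step of the approach, sketched with the LOWESS smoother from statsmodels on toy data: the variance explained by the smooth of the output on a single input serves as a nonparametric sensitivity measure, and it captures a nonlinear (here, even-symmetric) dependence that linear regression would miss.

      import numpy as np
      from statsmodels.nonparametric.smoothers_lowess import lowess

      rng = np.random.default_rng(1)
      x1 = rng.uniform(-1, 1, 500)
      x2 = rng.uniform(-1, 1, 500)                  # inert input
      y = np.sin(np.pi * x1)**2 + 0.1 * rng.normal(size=500)

      for name, x in [("x1", x1), ("x2", x2)]:
          yhat = lowess(y, x, frac=0.3, return_sorted=False)
          r2 = 1.0 - np.var(y - yhat) / np.var(y)   # variance explained
          print(name, "variance explained:", round(r2, 3))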

  15. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of a LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
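
    The flavor of the result can be reproduced with a finite-difference stand-in (the paper's point is that analytical expressions are cheaper and comparably accurate): perturb a plant parameter, re-solve the Riccati equation, and difference the optimal gains. A sketch assuming SciPy, with a made-up two-state plant.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(omega):
          A = np.array([[0.0, 1.0], [-omega**2, -0.1]])   # lightly damped mode
          B = np.array([[0.0], [1.0]])
          Q, R = np.eye(2), np.array([[1.0]])
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)              # K = R^{-1} B^T P

      omega, d = 2.0, 1e-6
      dK_domega = (lqr_gain(omega + d) - lqr_gain(omega - d)) / (2 * d)
      print("dK/domega =", dK_domega)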

  16. Methodology for a stormwater sensitive urban watershed design

    NASA Astrophysics Data System (ADS)

    Romnée, Ambroise; Evrard, Arnaud; Trachte, Sophie

    2015-11-01

    In urban stormwater management, decentralized systems, including stormwater best management practices (BMPs), are now being tested worldwide. However, a watershed-scale approach, relevant for urban hydrology, is almost always neglected when designing a stormwater management plan with best management practices. As a consequence, urban designers fail to convince public authorities of the actual hydrologic effectiveness of such an approach to urban watershed stormwater management. In this paper, we develop a design-oriented methodology for studying the morphology of an urban watershed in terms of sustainable stormwater management. The methodology is a five-step method, based firstly on the cartographic analysis of stormwater-relevant indicators regarding the landscape, the urban fabric, and governance. The second step focuses on the identification of territorial stakes and their corresponding decentralized stormwater management strategies. Based on the indicators, stakes, and strategies, the third step defines spatial typologies for the roadway system and the urban fabric system. The fourth step determines stormwater management scenarios to be applied to both systems of spatial typologies. The fifth step is the design of decentralized stormwater management projects integrating BMPs into each spatial typology. The methodology aims to guide urban designers and engineering offices in the siting and selection of BMPs without giving them a hypothetical one-size-fits-all solution. Since every location and watershed differs in its local guidelines and stakeholders, this paper provides a methodology for stormwater-sensitive urban watershed design that can be reproduced anywhere. As an example, the methodology is applied as a case study to an urban watershed in Belgium, confirming that the method is applicable to any urban watershed. This paper should be helpful for engineering and design offices in urban hydrology to define a

  17. Liquid Acquisition Device Design Sensitivity Study

    NASA Technical Reports Server (NTRS)

    VanDyke, M. K.; Hastings, L. J.

    2012-01-01

    In-space propulsion often necessitates the use of a capillary liquid acquisition device (LAD) to assure that gas-free liquid propellant is available to support engine restarts in microgravity. If a capillary screen-channel device is chosen, the designer must determine the appropriate combination of screen mesh and channel geometry. A screen mesh selection that results in the smallest LAD width compared with any other screen candidate (for a constant length) is desirable; however, no single best screen exists for all LAD design requirements. Flow rate, percent fill, and acceleration are the most influential drivers in determining screen widths. Increased flow rates and reduced percent fills increase the through-the-screen flow pressure losses, which drive the LAD to increased widths regardless of screen choice. Similarly, increased acceleration levels and the corresponding liquid head pressures drive the screen mesh selection toward a higher bubble point (liquid retention capability). After ruling out some screens on the basis of acceleration requirements alone, candidates can be identified by examining screens with small flow-loss-to-bubble-point ratios for a given condition, i.e., comparing screens at certain flow rates and fill levels, as in the sketch below. Within the same flow rate and fill level, the screen constants (inertia resistance coefficient, void fraction, pore or opening diameter, and bubble point) can become the driving forces in identifying the smaller flow-loss-to-bubble-point ratios.
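
    The screening logic reduces to ranking candidates by the ratio described above; a toy sketch with placeholder numbers (not real screen data) for one flow condition:

      screens = {                      # mesh: (flow loss [Pa], bubble point [Pa])
          "200x1400": (120.0, 5200.0),
          "325x2300": (310.0, 7600.0),
          "450x2750": (540.0, 9800.0),
      }
      # Rank candidate screens by flow-loss-to-bubble-point ratio (smaller is better).
      for name, (dp_flow, bp) in sorted(screens.items(),
                                        key=lambda kv: kv[1][0] / kv[1][1]):
          print(name, "loss/bubble-point =", round(dp_flow / bp, 4))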

  18. DAKOTA, A Multilevel Parallel Object-Oriented Framework for Design Optimization, Parameter Estimation, Uncertainty Quantification, and Sensitivity Analysis Version 3.0 Developers Manual (title change from electronic posting)

    SciTech Connect

    ELDRED, MICHAEL S.; GIUNTA, ANTHONY A.; VAN BLOEMEN WAANDERS, BART G.; WOJTKIEWICZ JR., STEVEN F.; HART, WILLIAM E.; ALLEVA, MARIO

    2002-04-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, analytic reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  19. A strategy to design novel structure photochromic sensitizers for dye-sensitized solar cells

    PubMed Central

    Wu, Wenjun; Wang, Jiaxing; Zheng, Zhiwei; Hu, Yue; Jin, Jiayu; Zhang, Qiong; Hua, Jianli

    2015-01-01

    Two sensitizers with novel structures were designed and synthesized by introducing a photochromic bisthienylethene (BTE) group into the conjugated system. Because the sensitizers are photochromic under ultraviolet and visible light, the conjugated bridge can be restructured, and the resulting two photoisomers show different behaviors in photovoltaic devices. This opens a new line of research for dye-sensitized solar cells (DSSCs). PMID:25716204

  20. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
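
    The sensitivity equation idea is easy to see on a model problem: for u_t = a u_xx, differentiating with respect to a gives s_t = a s_xx + u_xx for s = du/da, marched with the same scheme as the state (an explicit finite-difference sketch, not the paper's CFD setting):

      import numpy as np

      nx, nt, a = 101, 2000, 0.5
      x = np.linspace(0.0, 1.0, nx)
      dx, dt = x[1] - x[0], 2e-5                   # dt within explicit stability limit

      u = np.sin(np.pi * x)                        # state
      s = np.zeros_like(u)                         # sensitivity s = du/da
      for _ in range(nt):
          uxx = np.zeros_like(u); sxx = np.zeros_like(s)
          uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
          sxx[1:-1] = (s[2:] - 2 * s[1:-1] + s[:-2]) / dx**2
          u, s = u + dt * a * uxx, s + dt * (a * sxx + uxx)

      t = nt * dt                                  # exact: u = exp(-a pi^2 t) sin(pi x)
      print(s[nx // 2], -np.pi**2 * t * np.exp(-a * np.pi**2 * t))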

  1. Ceramic tubesheet design analysis

    SciTech Connect

    Mallett, R.H.; Swindeman, R.W.

    1996-06-01

    A transport combustor is being commissioned at the Southern Services facility in Wilsonville, Alabama to provide a gaseous product for the assessment of hot-gas filtering systems. One of the barrier filters incorporates a ceramic tubesheet to support candle filters. The ceramic tubesheet, designed and manufactured by Industrial Filter and Pump Manufacturing Company (IF&PM), is unique and offers distinct advantages over metallic systems in terms of density, resistance to corrosion, and resistance to creep at operating temperatures above 815 °C (1500 °F). Nevertheless, the operational requirements of the ceramic tubesheet are severe. The tubesheet is almost 1.5 m (55 in.) in diameter, has many penetrations, and must support the weight of the ceramic filters and accumulated coal ash while sustaining a pressure drop of one atmosphere. Further, thermal stresses related to steady-state and transient conditions will occur. To gain a better understanding of the structural performance limitations, a contract was placed with Mallett Technology, Inc. to perform a thermal and structural analysis of the tubesheet design. The design analysis specification and a preliminary design analysis were completed in the early part of 1995. The analyses indicated that modifications to the design were necessary to reduce thermal stress, and the redesign had to be completed before the final thermal/mechanical analysis could be undertaken. The preliminary analysis identified the need to confirm that the physical and mechanical properties data used in the design were representative of the material in the tubesheet. Subsequently, a few exploratory tests were performed at ORNL to evaluate the ceramic structural material.

  2. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of SEP thrust system performance for an Encke rendezvous mission. A detailed description of the effects of thrust subsystem hardware tolerances on mission performance is included, together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and the graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  3. Sensitive chiral analysis by capillary electrophoresis.

    PubMed

    García-Ruiz, Carmen; Marina, María Luisa

    2006-01-01

    In this review, an updated view of the different strategies used up to now to enhance the sensitivity of detection in chiral analysis by CE will be provided to the readers. With this aim, it will include a brief description of the fundamentals and most of the recent applications performed in sensitive chiral analysis by CE using offline and online sample treatment techniques (SPE, liquid-liquid extraction, microdialysis, etc.), on-column preconcentration techniques based on electrophoretic principles (ITP, stacking, and sweeping), and alternative detection systems (spectroscopic, spectrometric, and electrochemical) to the widely used UV-Vis absorption detection.

  4. Comparative Sensitivity Analysis of Muscle Activation Dynamics.

    PubMed

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties; other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters, while an experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  5. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational-method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered the design function. The converged solution of the state equations and the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As a stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to account for the reverse-flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite-difference approach.
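
    The costate construction has a compact discrete analogue on a linear system A(p) u = b with objective J = c^T u: one adjoint solve yields the sensitivity (a sketch of the algebraic skeleton only; the paper works with the continuous Euler equations):

      import numpy as np

      def A(p):
          return np.array([[4.0 + p, 1.0],
                           [1.0,     3.0]])

      b, c, p = np.array([1.0, 2.0]), np.array([1.0, -1.0]), 0.5

      u = np.linalg.solve(A(p), b)                 # state solve
      lam = np.linalg.solve(A(p).T, c)             # costate (adjoint) solve
      dA_dp = np.array([[1.0, 0.0], [0.0, 0.0]])
      dJ_dp = -lam @ dA_dp @ u                     # sensitivity of J = c^T u
      fd = (c @ np.linalg.solve(A(p + 1e-7), b) - c @ u) / 1e-7
      print(dJ_dp, fd)                             # should agree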

  6. Sensitivity study of the monogroove with screen heat pipe design

    NASA Technical Reports Server (NTRS)

    Evans, Austin L.; Joyce, Martin

    1988-01-01

    This sensitivity study of design variable effects on the performance of a monogroove-with-screen heat pipe obtains, by means of a computer code, performance curves of maximum heat-transfer rate versus operating temperature; performance projections for both 1-g and zero-g conditions are obtainable. The variables in question were the liquid and vapor channel design, the wall groove design, and the number of feed lines in the evaporator and condenser. The effect of three different working fluids (ammonia, methanol, and water) on performance was also determined. Performance was most sensitive to changes in the liquid and vapor channel diameters.

  7. Sensitivity analysis for interactions under unmeasured confounding.

    PubMed

    Vanderweele, Tyler J; Mukherjee, Bhramar; Chen, Jinbo

    2012-09-28

    We develop a sensitivity analysis technique to assess the sensitivity of interaction analyses to unmeasured confounding. We give bias formulas for sensitivity analysis for interaction under unmeasured confounding on both additive and multiplicative scales. We provide simplified formulas in the case in which either one of the two factors does not interact with the unmeasured confounder in its effects on the outcome. An interesting consequence of the results is that if the two exposures of interest are independent (e.g., gene-environment independence), even under unmeasured confounding, if the estimate of the interaction is nonzero, then either there is a true interaction between the two factors or there is an interaction between one of the factors and the unmeasured confounder; an interaction must be present in either scenario. We apply the results to two examples drawn from the literature.

  8. Sensitivity and Uncertainty Analysis of the keff for VHTR fuel

    NASA Astrophysics Data System (ADS)

    Han, Tae Young; Lee, Hyun Chul; Noh, Jae Man

    2014-06-01

    For the uncertainty and sensitivity analysis of PMR200, designed as a VHTR at KAERI, the MUSAD code was implemented based on the deterministic method in connection with the DeCART/CAPP code system. The sensitivity of the multiplication factor was derived using classical perturbation theory, and the sensitivity coefficients for the individual cross sections were obtained by the adjoint method within the framework of the transport equation. The uncertainty of the multiplication factor was then calculated from the product of the covariance matrix and the sensitivity. To verify the implemented code, uncertainty analyses of the GODIVA benchmark and a PMR200 pin cell problem were carried out and the results compared with the reference codes TSUNAMI and McCARD. The results are in good agreement, except for the uncertainty due to the scattering cross section, which was calculated using a different scattering moment.
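
    The uncertainty propagation step is the standard sandwich rule: with sensitivity vector S and cross-section covariance matrix C, var(k_eff) = S^T C S. A three-parameter toy example with made-up numbers:

      import numpy as np

      S = np.array([0.45, -0.12, 0.08])        # relative sensitivities of k_eff
      C = np.array([[4.0e-4, 1.0e-4, 0.0],     # relative covariance of the data
                    [1.0e-4, 9.0e-4, 0.0],
                    [0.0,    0.0,    2.5e-4]])
      print("relative uncertainty in k_eff:", np.sqrt(S @ C @ S))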

  9. Pediatric Pain, Predictive Inference, and Sensitivity Analysis.

    ERIC Educational Resources Information Center

    Weiss, Robert

    1994-01-01

    Coping style and the effects of counseling intervention on pain tolerance were studied for 61 elementary school students through immersion of hands in cold water. Bayesian predictive inference tools are able to distinguish between subject characteristics and manipulable treatments. Sensitivity analysis strengthens the certainty of conclusions about…

  10. Improving Discrete-Sensitivity-Based Approach for Practical Design Optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Cordero, Yvette; Pandya, Mohagna J.

    1997-01-01

    In developing automated methodologies for simulation-based optimal shape design, their accuracy, efficiency, and practicality are the defining factors of their success. To that end, four recent improvements to the building blocks of such a methodology, intended for more practical design optimization, are reported. First, in addition to a polynomial-based parameterization, a partial differential equation (PDE) based parameterization was shown to be a practical tool for a number of reasons. Second, an alternative was incorporated for one of the more tedious phases of developing such a methodology, namely, the automatic differentiation of the flow-analysis computer code to generate the sensitivities. Third, by extending the methodology to thin-layer Navier-Stokes (TLNS) based flow simulations, more accurate flow physics was made available. However, the computer storage requirement for shape optimization of a practical configuration with high-fidelity simulations (TLNS and dense-grid based simulations) demanded substantial computational resources. Therefore, the final improvement reported herein responded to this point by including an alternating-direction-implicit (ADI) based system solver as an alternative to the preconditioned biconjugate-gradient (PbCG) and other direct solvers.

  11. Geothermal well cost sensitivity analysis: current status

    SciTech Connect

    Carson, C.C.; Lin, Y.T.

    1980-01-01

    The geothermal well-cost model developed by Sandia National Laboratories is being used to analyze the sensitivity of well costs to improvements in geothermal drilling technology. Three interim results from this modeling effort are discussed: the sensitivity of well costs to bit parameters, rig parameters, and material costs; an analysis of the cost-reduction potential of an advanced bit; and a consideration of breakeven costs for new cementing technology. All three results illustrate that the well-cost savings arising from any new technology will be highly site-dependent, but that in specific wells the advances considered can result in significant cost reductions.

  12. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models of the environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation was the DMK FireWire monochrome on a pan-tilt motor. Standoff distance was varied along with environment and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. The standoff distance produced the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  13. Microelectromechanical Resonant Accelerometer Designed with a High Sensitivity.

    PubMed

    Zhang, Jing; Su, Yan; Shi, Qin; Qiu, An-Ping

    2015-01-01

    This paper describes the design and experimental evaluation of a silicon micro-machined resonant accelerometer (SMRA). This type of accelerometer works on the principle that a proof mass under acceleration applies force to two double-ended tuning fork (DETF) resonators, and the frequency output of the two DETFs exhibits a differential shift. The dies of an SMRA are fabricated using silicon-on-insulator (SOI) processing and wafer-level vacuum packaging. This research aims to design a high-sensitivity SMRA because a high sensitivity allows the acceleration signal to be easily demodulated by frequency-counting techniques and decreases the noise level. This study applies the energy-consumed concept and the Nelder-Mead algorithm to the SMRA to address the design issues and further increase its sensitivity. Using this novel method, the sensitivity of the SMRA has been increased by 66.1%, which is attributable to both the re-designed DETF and the reduced energy loss on the micro-lever. The results of the closed-form and finite-element analyses are described and are in agreement with one another. A resonant frequency of approximately 22 kHz, a frequency sensitivity of over 250 Hz per g, a one-hour bias stability of 55 μg, a bias repeatability (1σ) of 48 μg and a bias instability of 4.8 μg have been achieved. PMID:26633425

  15. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
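
    The GSE step amounts to one linear solve with the cross-coupling partials: for two disciplines y1 = f1(x, y2) and y2 = f2(x, y1), the total derivatives satisfy a 2x2 system (toy partial-derivative values assumed):

      import numpy as np

      # Partial derivatives evaluated at the converged coupled solution.
      df1_dy2, df1_dx = 0.3, 1.0
      df2_dy1, df2_dx = -0.5, 2.0

      # [ 1        -df1/dy2 ] [dy1/dx]   [df1/dx]
      # [-df2/dy1   1       ] [dy2/dx] = [df2/dx]
      M = np.array([[1.0, -df1_dy2],
                    [-df2_dy1, 1.0]])
      dy_dx = np.linalg.solve(M, np.array([df1_dx, df2_dx]))
      print("dy1/dx, dy2/dx =", dy_dx)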

  16. SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS

    PubMed Central

    WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.

    2012-01-01

    Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
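
    The finite-difference baseline the authors verify against can be sketched directly: estimate the period of a van der Pol limit cycle from upward zero crossings and difference it with respect to the parameter (assuming SciPy; the paper's boundary-value formulation computes the same quantity exactly and more cheaply).

      import numpy as np
      from scipy.integrate import solve_ivp

      def period(mu):
          f = lambda t, z: [z[1], mu * (1 - z[0]**2) * z[1] - z[0]]
          ev = lambda t, z: z[0]               # x = 0 crossings
          ev.direction = 1.0                   # upward crossings only
          sol = solve_ivp(f, (0.0, 100.0), [2.0, 0.0], events=ev,
                          rtol=1e-10, atol=1e-10)
          tc = sol.t_events[0]
          return np.mean(np.diff(tc[-5:]))     # average late-time period

      mu, d = 1.0, 1e-4
      print("dT/dmu ~", (period(mu + d) - period(mu - d)) / (2 * d))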

  17. Fault Tree Reliability Analysis and Design-for-reliability

    1998-05-05

    WinR provides a fault tree analysis capability for performing systems reliability and design-for-reliability analyses. The package includes capabilities for sensitivity and uncertainty analysis, field failure data analysis, and optimization.

  18. Spacecraft design sensitivity for a disaster warning satellite system

    NASA Technical Reports Server (NTRS)

    Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.

    1977-01-01

    A disaster warning satellite system (DWSS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establish the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.

  19. Targeting HSP70-induced thermotolerance for design of thermal sensitizers.

    PubMed

    Calderwood, S K; Asea, A

    2002-01-01

    Thermal therapy has been shown to be an extremely powerful anti-cancer agent and a potent radiation sensitizer. However, the full potential of thermal therapy is hindered by a number of considerations, including highly conserved heat-resistance pathways in tumour cells and inhomogeneous heating of deep-seated tumours due to energy deposition and perfusion issues. This report reviews recent progress in the development of hyperthermia-sensitizing drugs designed specifically to amplify the effects of hyperthermia. Such agents might be particularly useful in situations where heating is not adequate for the full biological effect or is not homogeneously delivered to tumours. The pathway concentrated on here is thermotolerance, a complex, inducible cellular response that leads to heat resistance. The molecular pathways of thermotolerance induction are examined as targets for designing inhibitors of heat resistance (thermal sensitizers), which may allow the full potential of thermal therapy to be realized.

  20. Estimating the upper limit of gas production from Class 2 hydrate accumulations in the permafrost: 2. Alternative well designs and sensitivity analysis

    SciTech Connect

    Moridis, G.; Reagan, M.T.

    2011-01-15

    In the second paper of this series, we evaluate two additional well designs for production from permafrost-associated (PA) hydrate deposits. Both designs are within the capabilities of conventional technology. We determine that large volumes of gas can be produced at high rates (several MMSCFD) for long times using either well design. The production approach involves initial fluid withdrawal from the water zone underneath the hydrate-bearing layer (HBL). The production process follows a cyclical pattern, with each cycle composed of two stages: a long stage (months to years) of increasing gas production and decreasing water production, and a short stage (days to weeks) that involves destruction of the secondary hydrate (mainly through warm water injection) that evolves during the first stage, and is followed by a reduction in the fluid withdrawal rate. A well configuration with completion throughout the HBL leads to high production rates, but also the creation of a secondary hydrate barrier around the well that needs to be destroyed regularly by water injection. However, a configuration that initially involves heating of the outer surface of the wellbore and later continuous injection of warm water at low rates (Case C) appears to deliver optimum performance over the period it takes for the exhaustion of the hydrate deposit. Using Case C as the standard, we determine that gas production from PA hydrate deposits increases with the fluid withdrawal rate, the initial hydrate saturation and temperature, and with the formation permeability.

  1. Sensitivity analysis of the critical speed in railway vehicle dynamics

    NASA Astrophysics Data System (ADS)

    Bigoni, D.; True, H.; Engsig-Karup, A. P.

    2014-05-01

    We present an approach to global sensitivity analysis aiming at the reduction of its computational cost without compromising the results. The method is based on sampling methods, cubature rules, high-dimensional model representation and total sensitivity indices. It is applied to a half car with a two-axle Cooperrider bogie, in order to study the sensitivity of the critical speed with respect to the suspension parameters. The importance of a certain suspension component is expressed by the variance in critical speed that is ascribable to it. This proves to be useful in the identification of parameters for which the accuracy of their values is critically important. The approach has a general applicability in many engineering fields and does not require the knowledge of the particular solver of the dynamical system. This analysis can be used as part of the virtual homologation procedure and to help engineers during the design phase of complex systems.
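
    A hand-rolled version of the variance-based indices used here (Saltelli sampling with the Jansen estimator for total effects), demonstrated on the Ishigami test function as a stand-in for the vehicle model:

      import numpy as np

      def model(X):                                # Ishigami test function
          x1, x2, x3 = X.T
          return np.sin(x1) + 7 * np.sin(x2)**2 + 0.1 * x3**4 * np.sin(x1)

      rng = np.random.default_rng(2)
      N, d = 100_000, 3
      A = rng.uniform(-np.pi, np.pi, (N, d))
      B = rng.uniform(-np.pi, np.pi, (N, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy(); ABi[:, i] = B[:, i]        # resample column i from B
          fABi = model(ABi)
          Si  = np.mean(fB * (fABi - fA)) / var      # first-order index
          STi = 0.5 * np.mean((fA - fABi)**2) / var  # total index (Jansen)
          print(f"x{i+1}: S = {Si:.3f}, ST = {STi:.3f}")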

  2. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two aforementioned problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  3. [Biomechanical analysis of different ProDisc-C arthroplasty design parameters after implanted: a numerical sensitivity study based on finite element method].

    PubMed

    Tang, Qiaohong; Mo, Zhongjun; Yao, Jie; Li, Qi; Du, Chenfei; Wang, Lizhen; Fan, Yubo

    2014-12-01

    This study aimed to estimate the effect of different ProDisc-C arthroplasty designs after implantation into the C5-C6 cervical spine. A finite element (FE) model of the intact C5-C6 segments, including the vertebrae and disc, was developed and validated. A ball-and-socket artificial disc prosthesis model (ProDisc-C, Synthes) was implanted into the validated FE model, and the curvature of the ProDisc-C prosthesis was varied. All models were loaded with a compressive force of 74 N and a pure moment of 1.8 Nm in flexion-extension, bilateral bending, and axial torsion separately. The results indicated that variation in the curvature of the ball-and-socket configuration influences the range of motion in flexion/extension, while there were no apparent differences under the other loading conditions. Increasing the curvature relieves the stress concentration in the polyethylene, but it also brings adverse outcomes, such as increased facet joint force and ligament tension. Therefore, the design of artificial discs should be considered comprehensively, so as to preserve the range of motion while avoiding these adverse effects, and thus not compromise the long-term clinical results.

  4. Development and application of optimum sensitivity analysis of structures

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.; Hallauer, W. L., Jr.

    1984-01-01

    The research focused on developing an algorithm applying optimum sensitivity analysis for multilevel optimization. The research efforts have been devoted to assisting NASA Langley's Interdisciplinary Research Office (IRO) in the development of a mature methodology for a multilevel approach to the design of complex (large and multidisciplinary) engineering systems. An effort was undertaken to identify promising multilevel optimization algorithms. In the current reporting period, the computer program generating baseline single-level solutions was completed and tested.

  5. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regards to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear, quadratic cost, Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman Filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case first wing bending natural frequency.

  6. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
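
    For a simple (non-defective) eigenvalue of a non-hermitian matrix, the sensitivity such methods build on is dlambda/dp = y^T (dA/dp) x / (y^T x), with x and y the right and left eigenvectors; a numpy sketch with a random matrix and a finite-difference check:

      import numpy as np

      rng = np.random.default_rng(3)
      A0 = rng.normal(size=(5, 5))
      dA = rng.normal(size=(5, 5))                 # direction dA/dp

      w, Vr = np.linalg.eig(A0)                    # right eigenpairs
      wl, Vl = np.linalg.eig(A0.T)                 # left eigvecs via A^T
      k = 0                                        # track one eigenvalue
      m = np.argmin(np.abs(wl - w[k]))             # pair left with right
      x, y = Vr[:, k], Vl[:, m]
      dlam = (y @ dA @ x) / (y @ x)                # analytical sensitivity

      eps = 1e-7                                   # finite-difference check
      w2 = np.linalg.eigvals(A0 + eps * dA)
      fd = (w2[np.argmin(np.abs(w2 - w[k]))] - w[k]) / eps
      print(dlam, fd)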

  7. Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Burgreen, Gregory W.

    1995-01-01

    An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a significant factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.

  8. High Sensitivity MEMS Strain Sensor: Design and Simulation

    PubMed Central

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2008-01-01

    In this article, we report on the new design of a miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single crystal silicon. Employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs different levels of signal amplification: geometric, material, and electronic. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to resistance change, which can be transformed to a bridge imbalance voltage. An analog output that demonstrates high sensitivity (0.03 mV/με), high absolute resolution (1 με) and low power consumption (100 μA) with a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50 °C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing applications under harsh environmental conditions. Moreover, this sensor has been designed, verified, and can easily be modified to measure other quantities such as force or torque. In this work, the sensor design is achieved using the Finite Element Method (FEM) with the application of piezoresistivity theory. The design process and the microfabrication process flow to prototype the design are presented.
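
    An order-of-magnitude check on the reported sensitivity, assuming a quarter-bridge small-signal model with made-up excitation and gauge-factor values (the actual figure also reflects the geometric, material, and electronic amplification stages described above):

      GF, V_in = 50.0, 3.0           # assumed gauge factor and bridge excitation [V]
      eps = 1e-6                     # one microstrain
      V_out = V_in * GF * eps / 4.0  # quarter-bridge, small-signal approximation
      print(V_out * 1e3, "mV per microstrain")  # ~0.04 mV, same order as 0.03 mV/με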

  9. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study developed inhouse at the University of Maryland is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and constrained optimization code CONMIN.

  10. The Theoretical Foundation of Sensitivity Analysis for GPS

    NASA Astrophysics Data System (ADS)

    Shikoska, U.; Davchev, D.; Shikoski, J.

    2008-10-01

    In this paper the equations of sensitivity analysis are derived and the theoretical underpinnings for the analyses are established. The paper propounds land-vehicle navigation concepts and a definition for sensitivity analysis. Equations of sensitivity analysis are presented for a linear Kalman filter, and a case study is given to illustrate the use of sensitivity analysis for the reader. At the end of the paper, the extensions required for this research are made to the basic equations of sensitivity analysis; specifically, the equations of sensitivity analysis are re-derived for a linearized Kalman filter.
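
    As an editorial aside: the record treats filter sensitivities analytically, and the sketch below is only a minimal numerical illustration of the same idea on a hypothetical scalar random-walk filter (not the authors' formulation), perturbing the measurement-noise variance R and differencing the steady-state error covariance.

        # Scalar random-walk Kalman filter: x_k = x_{k-1} + w_k, z_k = x_k + v_k.
        # Iterating the Riccati recursion to convergence gives the steady-state
        # error covariance P; its sensitivity to R follows by central differences.
        def steady_state_P(Q, R, iters=500):
            P = 1.0
            for _ in range(iters):
                P_pred = P + Q                    # time update
                K = P_pred / (P_pred + R)         # Kalman gain
                P = (1.0 - K) * P_pred            # measurement update
            return P

        Q, R, dR = 0.01, 0.25, 1e-6
        dP_dR = (steady_state_P(Q, R + dR) - steady_state_P(Q, R - dR)) / (2 * dR)
        print(f"P = {steady_state_P(Q, R):.5f}, dP/dR = {dP_dR:.5f}")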

  11. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated by using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes, which could introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of the robustness of LCAs, especially comparative assessments. This study also presents an approach to integrate statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. Uncertainty analysis combined with the sensitivity analysis carried out in this study has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivities are of limited value as robust evidence for decision making or comparative assertions. PMID:22854094

  12. Design and Synthesis of a Calcium-Sensitive Photocage.

    PubMed

    Heckman, Laurel M; Grimm, Jonathan B; Schreiter, Eric R; Kim, Charles; Verdecia, Mark A; Shields, Brenda C; Lavis, Luke D

    2016-07-11

    Photolabile protecting groups (or "photocages") enable precise spatiotemporal control of chemical functionality and facilitate advanced biological experiments. Extant photocages exhibit a simple input-output relationship, however, where application of light elicits a photochemical reaction irrespective of the environment. Herein, we refine and extend the concept of photolabile groups, synthesizing the first Ca(2+)-sensitive photocage. This system functions as a chemical coincidence detector, releasing small molecules only in the presence of both light and elevated [Ca(2+)]. Caging a fluorophore with this ion-sensitive moiety yields an "ion integrator" that permanently marks cells undergoing high Ca(2+) flux during an illumination-defined time period. Our general design concept demonstrates a new class of light-sensitive material for cellular imaging, sensing, and targeted molecular delivery. PMID:27218487

  13. Sensitivity Analysis of Situational Awareness Measures

    NASA Technical Reports Server (NTRS)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness, and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation blanks in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the point at which it was frozen. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine the sensitivity not only within a measure, but also between the measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  14. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
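
    The Critical Factors Tool itself is not described in enough detail here to reproduce; the following is a hedged sketch of one idea the abstract names, estimating how the probability of satisfying a requirement varies across bins of a dispersed Monte Carlo input (all variable names, dispersions, and the toy constraint are illustrative, not Orion's).

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy Monte Carlo campaign: disperse two inputs, flag requirement success.
        n = 20000
        mass = rng.normal(1.0, 0.1, n)             # dispersed input (illustrative)
        thrust = rng.normal(1.0, 0.05, n)
        touchdown_err = np.abs(rng.normal(0, 1, n) * mass / thrust)
        success = touchdown_err < 1.5               # toy performance constraint

        def success_prob_by_bin(x, success, bins=10):
            # Estimate P(success | x in bin); a large spread across bins marks
            # x as a driving factor for the requirement.
            edges = np.quantile(x, np.linspace(0, 1, bins + 1))
            idx = np.clip(np.searchsorted(edges, x) - 1, 0, bins - 1)
            return np.array([success[idx == b].mean() for b in range(bins)])

        print("P(success) vs. mass bin:  ", np.round(success_prob_by_bin(mass, success), 3))
        print("P(success) vs. thrust bin:", np.round(success_prob_by_bin(thrust, success), 3))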

  15. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially bifurcating models, which cannot be dealt with using a single GP, although how to manage bifurcation boundaries that are not parallel to coordinate axes remains an open problem.
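
    As a minimal illustration of the partitioning idea (not the authors' treed GP implementation, and with the split location assumed known rather than learned by a decision tree), the sketch below compares a single scikit-learn Gaussian process against two region-wise processes on a response with a jump.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(1)

        # Bifurcating-style response: a jump at x = 0.5 that one smooth GP smears.
        X = rng.uniform(0, 1, 80).reshape(-1, 1)
        y = np.where(X[:, 0] < 0.5, np.sin(6 * X[:, 0]), 2.0 + np.sin(6 * X[:, 0]))
        Xt = np.linspace(0, 1, 200).reshape(-1, 1)
        y_true = np.where(Xt[:, 0] < 0.5, np.sin(6 * Xt[:, 0]), 2.0 + np.sin(6 * Xt[:, 0]))

        kern = RBF(length_scale=0.1, length_scale_bounds="fixed")
        gp = GaussianProcessRegressor(kernel=kern, alpha=1e-4).fit(X, y)

        # "Treed" variant: partition at the (here, known) split, one GP per region.
        left, right = X[:, 0] < 0.5, X[:, 0] >= 0.5
        gp_l = GaussianProcessRegressor(kernel=kern, alpha=1e-4).fit(X[left], y[left])
        gp_r = GaussianProcessRegressor(kernel=kern, alpha=1e-4).fit(X[right], y[right])

        y_single = gp.predict(Xt)
        y_treed = np.where(Xt[:, 0] < 0.5, gp_l.predict(Xt), gp_r.predict(Xt))
        print("RMSE single GP:", np.sqrt(np.mean((y_single - y_true) ** 2)))
        print("RMSE treed GP: ", np.sqrt(np.mean((y_treed - y_true) ** 2)))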

  16. A Post-Monte-Carlo Sensitivity Analysis Code

    2000-04-04

    SATOOL (Sensitivity Analysis TOOL) is a code for sensitivity analysis, following an uncertainty analysis with Monte Carlo simulations. Sensitivity analysis identifies those input variables whose variance contributes dominantly to the variance in the output. This analysis can be used to reduce the variance in the output variables by redefining the "sensitive" variables with greater precision, i.e., with lower variance. The code identifies a group of sensitive variables, ranks them in order of importance and also quantifies the relative importance among the sensitive variables.

  17. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the "Butterfly Effect": the high sensitivity of chaotic dynamical systems to initial conditions. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the "shadow trajectory", a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
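
    LSS itself solves a least-squares problem over whole trajectories and is not reproduced here; the sketch below only demonstrates the breakdown it addresses. For the Lorenz system, the gradient of the long-time average of z with respect to rho is reported in the literature to be close to 1, yet naive finite differences of finite-time averages scatter wildly (the integrator, times, and step sizes are illustrative choices).

        def lorenz_avg_z(rho, T=100.0, dt=0.002):
            # Time-average of z for the Lorenz system at parameter rho
            # (simple forward-Euler integration; adequate for this illustration).
            sigma, beta = 10.0, 8.0 / 3.0
            x, y, z = 1.0, 1.0, 1.0
            n = int(T / dt)
            acc = 0.0
            for _ in range(n):
                dx = sigma * (y - x)
                dy = x * (rho - z) - y
                dz = x * y - beta * z
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                acc += z
            return acc / n

        # Literature value of d<z>/drho near rho = 28 is roughly 1; finite
        # differences do not converge to it as eps shrinks -- the butterfly effect.
        for eps in (1e-1, 1e-3, 1e-5):
            g = (lorenz_avg_z(28.0 + eps) - lorenz_avg_z(28.0 - eps)) / (2 * eps)
            print(f"eps = {eps:.0e}: finite-difference d<z>/drho = {g:+.2f}")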

  18. Integrating reliability analysis and design

    SciTech Connect

    Rasmuson, D. M.

    1980-10-01

    This report describes the Interactive Reliability Analysis Project and demonstrates the advantages of using computer-aided design systems (CADS) in reliability analysis. Common cause failure problems require presentation of systems, analysis of fault trees, and evaluation of solutions to these. Results have to be communicated between the reliability analyst and the system designer. Using a computer-aided design system saves time and money in the analysis of design. Computer-aided design systems lend themselves to cable routing, valve and switch lists, pipe routing, and other component studies. At EG&G Idaho, Inc., the Applicon CADS is being applied to the study of water reactor safety systems.

  19. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, pose challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Monte Carlo Markov Chain (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters and correlation between these parameters. In general, it was suggested that the tested models poorly represent reality and result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.
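
    The three tested models are not specified in this abstract; as a hedged sketch of the simplified MCMC machinery it mentions, the following calibrates a toy one-parameter exponential washoff model against synthetic observations with a random-walk Metropolis sampler (the model form, prior bounds, and noise level are all assumptions, not the paper's models).

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy exponential washoff model: load(t) = m0 * (1 - exp(-k * rain * t)).
        def washoff(k, t, rain=5.0, m0=10.0):
            return m0 * (1.0 - np.exp(-k * rain * t))

        t_obs = np.linspace(0.1, 2.0, 15)
        y_obs = washoff(0.4, t_obs) + rng.normal(0, 0.3, t_obs.size)  # synthetic data

        def log_post(k, sigma=0.3):
            if not 0.0 < k < 2.0:                   # uniform prior bounds
                return -np.inf
            r = y_obs - washoff(k, t_obs)
            return -0.5 * np.sum((r / sigma) ** 2)  # Gaussian likelihood

        # Random-walk Metropolis
        chain, k = [], 1.0
        lp = log_post(k)
        for _ in range(20000):
            k_new = k + rng.normal(0, 0.05)
            lp_new = log_post(k_new)
            if np.log(rng.uniform()) < lp_new - lp:
                k, lp = k_new, lp_new
            chain.append(k)

        burn = np.array(chain[5000:])
        print(f"posterior mean k = {burn.mean():.3f}, 95% interval = "
              f"({np.quantile(burn, 0.025):.3f}, {np.quantile(burn, 0.975):.3f})")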

  20. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  1. Phase sensitivity analysis of circadian rhythm entrainment.

    PubMed

    Gunawan, Rudiyanto; Doyle, Francis J

    2007-04-01

    As a biological clock, circadian rhythms evolve to accomplish a stable (robust) entrainment to environmental cycles, of which light is the most obvious. The mechanism of photic entrainment is not known, but two models of entrainment have been proposed based on whether light has a continuous (parametric) or discrete (nonparametric) effect on the circadian pacemaker. A novel sensitivity analysis is developed to study the circadian entrainment in silico based on a limit cycle approach and applied to a model of Drosophila circadian rhythm. The comparative analyses of complete and skeleton photoperiods suggest a trade-off between the contribution of period modulation (parametric effect) and phase shift (nonparametric effect) in Drosophila circadian entrainment. The results also give suggestions for an experimental study to (in)validate the two models of entrainment.

  2. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.
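
    The wing pressure data are not available here, but the global-versus-local fitting issue the abstract describes is easy to reproduce in miniature: a Chebyshev series fits a smooth synthetic pressure distribution well and degrades once a sharp local feature is added (both profiles are illustrative stand-ins).

        import numpy as np
        from numpy.polynomial import chebyshev as C

        x = np.linspace(-1, 1, 200)                  # mapped chordwise coordinate
        cp_smooth = -1.2 * (1 - x**2)                # smooth synthetic pressure
        cp_shock = cp_smooth + 0.8 / (1 + np.exp(-80 * (x - 0.3)))  # sharp jump

        for name, cp in (("smooth", cp_smooth), ("with shock", cp_shock)):
            coef = C.chebfit(x, cp, deg=15)          # global Chebyshev fit
            err = np.max(np.abs(C.chebval(x, coef) - cp))
            print(f"{name:11s}: max |fit error| = {err:.4f}")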

  3. Design and performance of a positron-sensitive surgical probe

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    We report the design and performance of a portable positron-sensitive surgical imaging probe. The probe is designed to be sensitive to positrons and capable of rejecting background gammas including 511 keV. The probe consists of a multi-anode PMT and an 8 x 8 array of thin 2 mm x 2 mm plastic scintillators coupled 1:1 to GSO crystals. The probe uses three selection criteria to identify positrons. An energy threshold on the plastic signals reduces the false positron signals in the plastic due to background gammas; a second energy threshold on the PMT sum signal greatly reduces background gammas in the GSO. Finally, a timing window accepts only 511 keV gammas from the GSO that arrive within 15 ns of the plastic signals, reducing accidental coincidences to a negligible level. The first application being investigated is sentinel lymph node (SLN) surgery, to identify in real-time the location of SLNs in the axilla with high 18F-FDG uptake, which may indicate metastasis. Our simulations and measurements show that the probe's pixel separation ability in terms of peak-to-valley ratio is ˜3.5. The performance measurements also show that the 64-pixel probe has a sensitivity of 4.7 kcps/μCi using optimal signal selection criteria. For example, it is able to detect in 10 seconds a ˜4 mm lesion with a true-to-background ratio of ˜3 at a tumor uptake ratio of ˜8:1. The signal selection criteria can be fine-tuned, either for higher sensitivity, or for a higher image contrast.

  4. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology.
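
    The new family of u-statistics is defined only in the paper; the sketch below reproduces just the baseline ingredient, the simulated power of Wilcoxon's signed-rank test for 250 Normal(1/2, 1) pair differences with no departure from random assignment, which comes out near 1, in line with the abstract's remark that Wilcoxon's statistic easily detects such effects in large randomized experiments (sample sizes and effect size taken from the abstract; the sensitivity-analysis adjustment itself is not implemented here).

        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(3)

        # Power of Wilcoxon's signed-rank test for 250 pair differences ~ N(1/2, 1)
        # in the randomized case. The paper's point is that this power collapses
        # once a sensitivity analysis allows modest unobserved bias.
        n_pairs, n_sims, alpha = 250, 1000, 0.05
        rejections = 0
        for _ in range(n_sims):
            d = rng.normal(0.5, 1.0, n_pairs)
            stat, p = wilcoxon(d, alternative="greater")
            rejections += p < alpha
        print(f"simulated randomized-case power: {rejections / n_sims:.3f}")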

  5. Sensitivity analysis for improving nanomechanical photonic transducers biosensors

    NASA Astrophysics Data System (ADS)

    Fariña, D.; Álvarez, M.; Márquez, S.; Dominguez, C.; Lechuga, L. M.

    2015-08-01

    The achievement of high sensitivity and highly integrated transducers is one of the main challenges in the development of high-throughput biosensors. The aim of this study is to improve the final sensitivity of an opto-mechanical device to be used as a reliable biosensor. We report the analysis of the mechanical and optical properties of optical waveguide microcantilever transducers, and their dependency on device design and dimensions. The selected layout (geometry) based on two butt-coupled misaligned waveguides displays better sensitivities than an aligned one. With this configuration, we find that an optimal microcantilever thickness range between 150 nm and 400 nm would both increase microcantilever bending during the biorecognition process and raise the optical sensitivity to 4.8 × 10-2 nm-1, an order of magnitude higher than other similar opto-mechanical devices. Moreover, the analysis shows that single mode behaviour of the propagating radiation is required to avoid modal interference that could lead to misinterpretation of the readout signal.

  6. Climate sensitivity: Analysis of feedback mechanisms

    NASA Astrophysics Data System (ADS)

    Hansen, J.; Lacis, A.; Rind, D.; Russell, G.; Stone, P.; Fung, I.; Ruedy, R.; Lerner, J.

    , vegetation) to the total cooling at 18K. The temperature increase believed to have occurred in the past 130 years (approximately 0.5°C) is also found to imply a climate sensitivity of 2.5-5°C for doubled CO2 (f = 2-4), if (1) the temperature increase is due to the added greenhouse gases, (2) the 1850 CO2 abundance was 270±10 ppm, and (3) the heat perturbation is mixed like a passive tracer in the ocean with vertical mixing coefficient k ˜ 1 cm2 s-1. These analyses indicate that f is substantially greater than unity on all time scales. Our best estimate for the current climate due to processes operating on the 10-100 year time scale is f = 2-4, corresponding to a climate sensitivity of 2.5-5°C for doubled CO2. The physical process contributing the greatest uncertainty to f on this time scale appears to be the cloud feedback. We show that the ocean's thermal relaxation time depends strongly on f. The e-folding time constant for response of the isolated ocean mixed layer is about 15 years, for the estimated value of f. This time is sufficiently long to allow substantial heat exchange between the mixed layer and deeper layers. For f = 3-4 the response time of the surface temperature to a heating perturbation is of order 100 years, if the perturbation is sufficiently small that it does not alter the rate of heat exchange with the deeper ocean. The climate sensitivity we have inferred is larger than that stated in the Carbon Dioxide Assessment Committee report (CDAC, 1983). Their result is based on the empirical temperature increase in the past 130 years, but their analysis did not account for the dependence of the ocean response time on climate sensitivity. Their choice of a fixed 15 year response time biased their result to low sensitivities. We infer that, because of recent increases in atmospheric CO2 and trace gases, there is a large, rapidly growing gap between current climate and the equilibrium climate for current atmospheric composition. Based on the climate

  7. Sensitivity analysis of discrete structural systems: A survey

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1984-01-01

    Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
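
    For the static-displacement derivatives the survey covers, the direct analytical method amounts to differentiating the equilibrium equations K(p)u = f to obtain K du/dp = df/dp - (dK/dp)u. A minimal two-spring sketch (stiffness values and load are illustrative):

        import numpy as np

        # Two springs in series, stiffnesses p1 and p2; tip load f.
        # K(p) u = f  =>  K du/dp_i = -(dK/dp_i) u   (f independent of p here).
        def K(p1, p2):
            return np.array([[p1 + p2, -p2],
                             [-p2,      p2]])

        p1, p2, f = 100.0, 50.0, np.array([0.0, 1.0])
        u = np.linalg.solve(K(p1, p2), f)

        dK_dp2 = np.array([[ 1.0, -1.0],
                           [-1.0,  1.0]])
        du_dp2 = np.linalg.solve(K(p1, p2), -dK_dp2 @ u)   # direct method

        # Finite-difference check
        h = 1e-6
        u_fd = (np.linalg.solve(K(p1, p2 + h), f) - u) / h
        print("analytic du/dp2:", du_dp2, " finite diff:", u_fd)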

  8. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the Backward Facing Step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs which control the blood flow to a significant extent. This paper is part of a larger effort on the identification of parameters controlling the process of clotting.
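
    The CFD model is not reproducible from the abstract, but the Morris screening design it cites is simple to sketch: one-at-a-time elementary effects averaged over random base points, here on a stand-in three-parameter scalar function (the function, parameter names, and ranges are assumptions, not the paper's model).

        import numpy as np

        rng = np.random.default_rng(4)

        # Stand-in for the rheological model output; the real Casson-model CFD
        # run would take a parameter vector and return a scalar flow metric.
        def model(p):
            tau_y, mu, rho = p
            return tau_y ** 0.5 + 3.0 * mu + 0.1 * rho * mu

        names = ["tau_y", "mu", "rho"]
        lo = np.array([0.001, 0.003, 1000.0])
        hi = np.array([0.010, 0.005, 1100.0])

        def elementary_effects(r=50, delta=0.1):
            # Morris-style screening: mean |EE| per parameter over r base points.
            ee = np.zeros((r, 3))
            for i in range(r):
                x = rng.uniform(0.0, 1.0 - delta, 3)      # unit-cube base point
                y0 = model(lo + x * (hi - lo))
                for j in range(3):
                    xp = x.copy(); xp[j] += delta         # perturb one factor
                    ee[i, j] = (model(lo + xp * (hi - lo)) - y0) / delta
            return np.abs(ee).mean(axis=0)

        for n, mu_star in zip(names, elementary_effects()):
            print(f"{n:6s}: mu* = {mu_star:.4f}")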

  9. Designing and Building to ``Impossible'' Tolerances for Vibration Sensitive Equipment

    NASA Astrophysics Data System (ADS)

    Hertlein, Bernard H.

    2003-03-01

    As the precision and production capabilities of modern machines and factories increase, our expectations of them rise commensurately. Facility designers and engineers find themselves increasingly involved with measurement needs and design tolerances that were almost unthinkable a few years ago. An area of expertise that demonstrates this very clearly is the field of vibration measurement and control. Magnetic resonance imaging, semiconductor manufacturing, micro-machining, surgical microscopes: these are just a few examples of equipment or techniques that need an extremely stable vibration environment. The challenge to architects, engineers and contractors is to provide that level of stability without undue cost or sacrificing the aesthetics and practicality of a structure. In addition, many facilities have run out of expansion room, so the design is often hampered by the need to reuse all or part of an existing structure, or to site vibration-sensitive equipment close to an existing vibration source. High resolution measurements and nondestructive testing techniques have proven to be invaluable additions to the engineer's toolbox in meeting these challenges. The author summarizes developments in this field over the last fifteen years or so, and lists some common errors of design and construction that can cost a lot of money to retrofit if missed, but can easily be avoided with a little foresight, an appropriate testing program and a carefully thought out checklist.

  10. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
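
    The full chain from grid sensitivity to aerodynamic sensitivity is beyond an abstract, but its first link, the analytic surface sensitivity, can be sketched: the standard NACA four-digit half-thickness distribution is linear in the thickness parameter t, so dy_t/dt is available in closed form and checks against finite differences.

        import numpy as np

        def yt(x, t):
            # NACA four-digit half-thickness distribution (chord-normalized x).
            return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                              + 0.2843 * x**3 - 0.1015 * x**4)

        x = np.linspace(0.01, 1.0, 5)
        t = 0.12                                   # NACA 0012 thickness parameter

        dyt_dt_analytic = yt(x, t) / t             # y_t is linear in t
        h = 1e-7
        dyt_dt_fd = (yt(x, t + h) - yt(x, t - h)) / (2 * h)
        print("max discrepancy:", np.max(np.abs(dyt_dt_analytic - dyt_dt_fd)))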

  11. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The intended goal of this was to determine a worst-case scenario of how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. In order to do this, my duties were to take historical data of operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. From there, my final duty was to vary the wear-out characteristic from the intrinsic value to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
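
    The ISS data are not public here; as a hedged sketch of the described workflow, the snippet below assumes a Weibull failure model and shows by Monte Carlo how raising the shape parameter (stronger wear-out) changes the expected number of failures, and hence spares demand, within a fixed operating window (all parameter values are illustrative).

        import numpy as np

        rng = np.random.default_rng(5)

        def expected_failures(shape, scale=60000.0, n_units=30, horizon=80000.0,
                              n_sims=5000):
            # Monte Carlo count of units failing before `horizon` hours.
            # shape ~ 1: random failures; shape >> 1: wear-out clustered near
            # the characteristic life, so more failures once the window passes it.
            fail_times = scale * rng.weibull(shape, size=(n_sims, n_units))
            return (fail_times < horizon).sum(axis=1).mean()

        for shape in (1.0, 2.0, 4.0):
            print(f"Weibull shape {shape}: mean failures in window = "
                  f"{expected_failures(shape):.2f}")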

  12. Sensitivity analysis of retrovirus HTLV-1 transactivation.

    PubMed

    Corradin, Alberto; Di Camillo, Barbara; Ciminale, Vincenzo; Toffolo, Gianna; Cobelli, Claudio

    2011-02-01

    Human T-cell leukemia virus type 1 is a human retrovirus endemic in many areas of the world. Although many studies indicated a key role of the viral protein Tax in the control of viral transcription, the mechanisms controlling HTLV-1 expression and its persistence in vivo are still poorly understood. To assess Tax effects on viral kinetics, we developed a HTLV-1 model. Two parameters that capture both its deterministic and stochastic behavior were quantified: Tax signal-to-noise ratio (SNR), which measures the effect of stochastic phenomena on Tax expression as the ratio between the protein steady-state level and the variance of the noise causing fluctuations around this value; t(1/2), a parameter representative of the duration of Tax transient expression pulses, that is, of Tax bursts due to stochastic phenomena. Sensitivity analysis indicates that the major determinant of Tax SNR is the transactivation constant, the system parameter weighting the enhancement of retrovirus transcription due to transactivation. In contrast, t(1/2) is strongly influenced by the degradation rate of the mRNA. In addition to shedding light into the mechanism of Tax transactivation, the obtained results are of potential interest for novel drug development strategies since the two parameters most affecting Tax transactivation can be experimentally tuned, e.g. by perturbing protein phosphorylation and by RNA interference.

  13. Sensitivity analysis of volume scattering phase functions.

    PubMed

    Tuchow, Noah; Broughton, Jennifer; Kudela, Raphael

    2016-08-01

    To solve the radiative transfer equation and relate inherent optical properties (IOPs) to apparent optical properties (AOPs), knowledge of the volume scattering phase function is required. Due to the difficulty of measuring the phase function, it is frequently approximated. We explore the sensitivity of derived AOPs to the phase function parameterization, and compare measured and modeled values of both the AOPs and estimated phase functions using data from Monterey Bay, California during an extreme "red tide" bloom event. Using in situ measurements of absorption and attenuation coefficients, as well as two sets of measurements of the volume scattering function (VSF), we compared output from the Hydrolight radiative transfer model to direct measurements. We found that several common assumptions used in parameterizing the radiative transfer model consistently introduced overestimates of modeled versus measured remote-sensing reflectance values. Phase functions derived from VSF measurements at multiple wavelengths and a single scattering angle significantly overestimated reflectances when using the manufacturer-supplied corrections, but were substantially improved using newly published corrections; phase functions calculated from VSF measurements using three angles and three wavelengths and processed using manufacturer-supplied corrections were comparable, demonstrating that reasonable predictions can be made using two commercially available instruments. While other studies have reached similar conclusions, our work extends the analysis to coastal waters dominated by an extreme algal bloom with surface chlorophyll concentrations in excess of 100 mg m-3. PMID:27505819

  14. What Makes a Good Home-Based Nocturnal Seizure Detector? A Value Sensitive Design

    PubMed Central

    van Andel, Judith; Leijten, Frans; van Delden, Hans; van Thiel, Ghislaine

    2015-01-01

    A device for the in-home detection of nocturnal seizures is currently being developed in the Netherlands, to improve care for patients with severe epilepsy. It is recognized that the design of medical technology is not value neutral: perspectives of users and developers are influential in design, and design choices influence these perspectives. However, during development processes, these influences are generally ignored and value-related choices remain implicit and poorly argued for. In the development process of the seizure detector we aimed to take values of all stakeholders into consideration. Therefore, we performed a parallel ethics study, using “value sensitive design.” Analysis of stakeholder communication (in meetings and e-mail messages) identified five important values, namely, health, trust, autonomy, accessibility, and reliability. Stakeholders were then asked to give feedback on the choice of these values and how they should be interpreted. In a next step, the values were related to design choices relevant for the device, and then the consequences (risks and benefits) of these choices were investigated. Currently the process of design and testing of the device is still ongoing. The device will be validated in a trial in which the identified consequences of design choices are measured as secondary endpoints. Value sensitive design methodology is feasible for the development of new medical technology and can help designers substantiate the choices in their design. PMID:25875320

  15. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  16. Optimizing human activity patterns using global sensitivity analysis

    PubMed Central

    Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2014-01-01

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations. PMID:25580080

  17. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  18. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGESBeta

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  19. Design of a charge sensitive preamplifier on high resistivity silicon

    SciTech Connect

    Radeka, V.; Rehak, P.; Rescia, S.; Gatti, E.; Longoni, A.; Sampietro, M.; Holl, P.; Strueder, L.; Kemmer, J.

    1987-01-01

    A low noise, fast charge sensitive preamplifier was designed on high resistivity, detector grade silicon. It is built at the surface of a fully depleted region of n-type silicon. This allows the preamplifier to be placed very close to a detector anode. The preamplifier uses the classical input cascode configuration with a capacitor and a high value resistor in the feedback loop. The output stage of the preamplifier can drive a load up to 20 pF. The power dissipation of the preamplifier is 13 mW. The amplifying elements are "Single Sided Gate JFETs" developed especially for this application. Preamplifiers connected to a low capacitance anode of a drift type detector should achieve a rise time of 20 ns and have an equivalent noise charge (ENC), after a suitable shaping, of less than 50 electrons. This performance translates to a position resolution better than 3 μm for silicon drift detectors.

  20. Design of a pulse oximeter for price sensitive emerging markets.

    PubMed

    Jones, Z; Woods, E; Nielson, D; Mahadevan, S V

    2010-01-01

    While the global market for medical devices is located primarily in developed countries, price sensitive emerging markets comprise an attractive, underserved segment in which products need a unique set of value propositions to be competitive. A pulse oximeter was designed expressly for emerging markets, and a novel feature set was implemented to reduce the cost of ownership and improve the usability of the device. Innovations included the ability of the device to generate its own electricity, a built-in sensor which cuts down on operating costs, and a graphical, symbolic user interface. These features yield an average reduction of over 75% in the device cost of ownership versus comparable pulse oximeters already on the market.

  1. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  2. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  3. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  4. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  5. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  6. Derivative based sensitivity analysis of gamma index.

    PubMed

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods to compare between measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare between any two dose distributions. It takes into account both dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as "pass." Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradient. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing 1 mm distance error and 1% dose error at each point. This was considered as the first of the two evaluated curves. By its nature, this curve is smooth and would satisfy the pass criteria for all points in it. The second evaluated profile was generated as a sawtooth test profile (STTP), which again would satisfy the pass criteria for every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poor when compared with the smooth profile. Considering the smooth GTP as an acceptable profile when it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the dose differences (δD', δD") between these two curves were derived and used as the boundary values
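
    The derivative-based extension is specific to this paper, but the underlying gamma computation is standard; a minimal one-dimensional implementation on synthetic error-function penumbras (the criteria, profiles, and shift are illustrative):

        import numpy as np
        from scipy.special import erf

        def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
            # 1-D gamma index: for each reference point, the minimum over
            # evaluated points of sqrt((dose diff / dd)^2 + (distance / dta)^2).
            # dd is fractional (1%), dta in mm; doses normalized to reference max.
            dmax = d_ref.max()
            gammas = np.empty_like(d_ref)
            for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
                dose_term = ((d_eval - dr) / (dd * dmax)) ** 2
                dist_term = ((x_eval - xr) / dta) ** 2
                gammas[i] = np.sqrt(np.min(dose_term + dist_term))
            return gammas

        # Synthetic penumbra: reference error-function edge vs. a shifted copy.
        x = np.linspace(-10, 10, 201)                       # mm
        ref = 0.5 * (1 + erf(-x / 3.0))                     # falling edge
        ev = 0.5 * (1 + erf(-(x - 1.5) / 3.0))              # 1.5 mm shifted
        g = gamma_1d(x, ref, x, ev, dd=0.01, dta=1.0)
        print(f"pass rate (gamma <= 1): {np.mean(g <= 1):.1%}")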

  7. Displacement Monitoring and Sensitivity Analysis in the Observational Method

    NASA Astrophysics Data System (ADS)

    Górska, Karolina; Muszyński, Zbigniew; Rybak, Jarosław

    2013-09-01

    This work discusses the fundamentals of designing deep excavation support by means of the observational method. The effective tools for optimum design with the use of the observational method are inclinometric and geodetic monitoring, which provide data for systematically updated calibration of the numerical computational model. The analysis included methods for selecting data for the design (by choosing the basic random variables), as well as methods for ongoing verification of the results of numerical calculations (e.g., FEM) by measuring the structure displacement using geodetic and inclinometric techniques. The presented example shows the sensitivity analysis of the calculation model for a cantilever wall in non-cohesive soil; that analysis makes it possible to select the data to be later subject to calibration. The paper presents the results of measurements of sheet pile wall displacement, carried out by means of the inclinometric method and, simultaneously, two geodetic methods, successively with the deepening of the excavation. This work also includes critical comments regarding the usefulness of the obtained data, as well as practical aspects of taking measurements under the conditions of ongoing construction works.

  8. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed by using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
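
    The diameter-sensitive extension is defined only in the paper; for orientation, the classic flow entropy it generalizes is just the Shannon entropy of flow fractions, sketched below for a single node with outflows q_i (network-level entropy aggregates such nodal terms).

        import numpy as np

        def flow_entropy(flows):
            # Shannon entropy of pipe-flow fractions: S = -sum p_i ln p_i,
            # p_i = q_i / Q. Uniform flows maximize S (the most redundant layout).
            q = np.asarray(flows, dtype=float)
            p = q / q.sum()
            return -np.sum(p * np.log(p))

        print(flow_entropy([5.0, 5.0, 5.0]))   # uniform: ln(3) ~ 1.099
        print(flow_entropy([13.0, 1.0, 1.0]))  # concentrated: much lower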

  9. Cross Section Sensitivity and Uncertainty Analysis Including Secondary Neutron Energy and Angular Distributions.

    1991-03-12

    Version 00 SUSD calculates sensitivity coefficients for one- and two-dimensional transport problems. Variance and standard deviation of detector responses or design parameters can be obtained using cross-section covariance matrices. In neutron transport problems, this code can perform sensitivity-uncertainty analysis for secondary angular distribution (SAD) or secondary energy distribution (SED).

  10. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selection of finite difference step sizes; special considerations for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
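
    On the first topic listed, finite difference step-size selection, the classic truncation-versus-round-off trade-off is easy to exhibit numerically (the function and evaluation point are arbitrary illustrations):

        import numpy as np

        f = np.sin
        x0, exact = 1.0, np.cos(1.0)

        print(" step h    forward-diff error")
        for h in [10.0 ** (-k) for k in range(1, 13)]:
            err = abs((f(x0 + h) - f(x0)) / h - exact)
            print(f"{h:8.0e}   {err:.3e}")
        # The error falls with h (truncation) until roughly 1e-8, then grows
        # again (round-off): step size must be chosen, not merely made small.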

  11. Extended forward sensitivity analysis of one-dimensional isothermal flow

    SciTech Connect

    Johnson, M.; Zhao, H.

    2013-07-01

    Sensitivity analysis and uncertainty quantification is an important part of nuclear safety analysis. In this work, forward sensitivity analysis is used to compute solution sensitivities on 1-D fluid flow equations typical of those found in system level codes. Time step sensitivity analysis is included as a method for determining the accumulated error from time discretization. The ability to quantify numerical error arising from the time discretization is a unique and important feature of this method. By knowing the relative sensitivity of time step with other physical parameters, the simulation is allowed to run at optimized time steps without affecting the confidence of the physical parameter sensitivity results. The time step forward sensitivity analysis method can also replace the traditional time step convergence studies that are a key part of code verification with much less computational cost. One well-defined benchmark problem with manufactured solutions is utilized to verify the method; another test isothermal flow problem is used to demonstrate the extended forward sensitivity analysis process. Through these sample problems, the paper shows the feasibility and potential of using the forward sensitivity analysis method to quantify uncertainty in input parameters and time step size for a 1-D system-level thermal-hydraulic safety code. (authors)
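
    The 1-D flow equations are not given in the abstract; as a generic sketch of the forward sensitivity method it applies, the snippet below augments a scalar ODE dy/dt = f(y, p) with its sensitivity equation ds/dt = (∂f/∂y)s + ∂f/∂p and integrates both with scipy, checking against the analytic derivative (the ODE is an illustrative stand-in).

        import numpy as np
        from scipy.integrate import solve_ivp

        # dy/dt = -p * y, y(0) = 1; the sensitivity s = dy/dp obeys
        # ds/dt = (df/dy) s + df/dp = -p * s - y, with s(0) = 0.
        def rhs(t, z, p):
            y, s = z
            return [-p * y, -p * s - y]

        p = 0.8
        sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], args=(p,), rtol=1e-10, atol=1e-12)
        s_T = sol.y[1, -1]

        # Analytic check: y = exp(-p t)  =>  dy/dp = -t exp(-p t)
        print(f"forward sensitivity s(2) = {s_T:.6f}, exact = {-2 * np.exp(-2 * p):.6f}")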

  12. Attainability analysis in the stochastic sensitivity control

    NASA Astrophysics Data System (ADS)

    Bashkirtseva, Irina

    2015-02-01

    For nonlinear dynamic stochastic control system, we construct a feedback regulator that stabilises an equilibrium and synthesises a required dispersion of random states around this equilibrium. Our approach is based on the stochastic sensitivity functions technique. We focus on the investigation of attainability sets for 2-D systems. A detailed parametric description of the attainability domains for various types of control inputs for stochastic Brusselator is presented. It is shown that the new regulator provides a low level of stochastic sensitivity and can suppress oscillations of large amplitude.

  13. Design optimization of structural parameters for highly sensitive photonic crystal label-free biosensors.

    PubMed

    Ju, Jonghyun; Han, Yun-ah; Kim, Seok-min

    2013-01-01

    The effects of structural design parameters on the performance of nano-replicated photonic crystal (PC) label-free biosensors were examined by the analysis of simulated reflection spectra of PC structures. The grating pitch, duty, scaled grating height and scaled TiO2 layer thickness were selected as the design factors to optimize the PC structure. The peak wavelength value (PWV), full width at half maximum of the peak, figure of merit for the bulk and surface sensitivities, and surface/bulk sensitivity ratio were selected as the responses to optimize the PC label-free biosensor performance. A parametric study showed that the grating pitch was the dominant factor for PWV, and that it had low interaction effects with the other scaled design factors. Therefore, the effect of grating pitch can be isolated using scaled design factors. For the design of a PC label-free biosensor, one should consider that: (1) the PWV can be measured by the reflection peak measurement instruments, (2) the grating pitch and duty can be manufactured using conventional lithography systems, and (3) the optimum design is less sensitive to grating height and TiO2 layer thickness variations in the fabrication process. In this paper, we suggest a design guide for a highly sensitive PC biosensor in which one selects the grating pitch and duty based on the limitations of the lithography and measurement systems, and conducts a multi-objective optimization of the grating height and TiO2 layer thickness to maximize performance and minimize the influence of parameter variation. Through multi-objective optimization of a PC structure with a fixed grating pitch of 550 nm and a duty of 50%, we obtained a surface FOM of 66.18 RIU-1 and an S/B ratio of 34.8%, with a grating height of 117 nm and TiO2 height of 210 nm.

  15. Launch vehicle systems design analysis

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Verderaime, V.

    1993-01-01

    Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, the quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least cost can be realized through competent concurrent engineering teams and the brilliance of their technical leadership.

  16. Design of a High Sensitivity GNSS receiver for Lunar missions

    NASA Astrophysics Data System (ADS)

    Musumeci, Luciano; Dovis, Fabio; Silva, João S.; da Silva, Pedro F.; Lopes, Hugo D.

    2016-06-01

    This paper presents the design of a satellite navigation receiver architecture tailored for future Lunar exploration missions, demonstrating the feasibility of using Global Navigation Satellite System (GNSS) signals integrated with an orbital filter for this purpose. It analyzes the performance of a navigation solution based on pseudorange and pseudorange rate measurements, generated through the processing of very weak signals in the Global Positioning System (GPS) L1/L5 and Galileo E1/E5 frequency bands. In critical scenarios (e.g. during manoeuvres), acceleration and attitude measurements from additional sensors are integrated with the GNSS measurements to meet the positioning requirement. A review of environment characteristics (dynamics, geometry and signal power) for the different phases of a reference Lunar mission is provided, focusing on the stringent requirements of the Descent, Approach and Hazard Detection and Avoidance phase. The design of High Sensitivity acquisition and tracking schemes is supported by an extensive simulation test campaign using a software receiver implementation, and navigation results are validated by means of an end-to-end software simulator. Acquisition and tracking of GPS and Galileo signals of the L1/E1 and L5/E5a bands was successfully demonstrated for carrier-to-noise density ratios as low as 5-8 dB-Hz. The proposed navigation architecture provides acceptable performance during the considered critical phases, keeping position and velocity errors below 61.4 m and 3.2 m/s, respectively, for 99.7% of the mission time.

  17. Probability density adjoint for sensitivity analysis of the Mean of Chaos

    SciTech Connect

    Blonigan, Patrick J.; Wang, Qiqi

    2014-08-01

    Sensitivity analysis, especially adjoint based sensitivity analysis, is a powerful tool for engineering design which allows for the efficient computation of sensitivities with respect to many parameters. However, these methods break down when used to compute sensitivities of long-time averaged quantities in chaotic dynamical systems. This paper presents a new method for sensitivity analysis of ergodic chaotic dynamical systems, the density adjoint method. The method involves solving the governing equations for the system's invariant measure and its adjoint on the system's attractor manifold rather than in phase-space. This new approach is derived for and demonstrated on one-dimensional chaotic maps and the three-dimensional Lorenz system. It is found that the density adjoint computes very finely detailed adjoint distributions and accurate sensitivities, but suffers from large computational costs.
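
    The breakdown that motivates the density-adjoint method is easy to reproduce. The sketch below (illustrative, not the paper's method) applies a naive central finite difference to the long-time average of z in the Lorenz system; because nearby chaotic trajectories decorrelate, the computed "derivative" fluctuates wildly instead of settling as the averaging window grows.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, x, rho):
        return [10.0 * (x[1] - x[0]),
                x[0] * (rho - x[2]) - x[1],
                x[0] * x[1] - 8.0 / 3.0 * x[2]]

    def mean_z(rho, T):
        sol = solve_ivp(lorenz, (0, T), [1.0, 1.0, 1.0], args=(rho,),
                        dense_output=True, rtol=1e-8, atol=1e-8)
        t = np.linspace(T / 2, T, 20000)      # average after the transient
        return sol.sol(t)[2].mean()

    drho = 1e-3
    for T in (50.0, 100.0, 200.0):
        grad = (mean_z(28.0 + drho, T) - mean_z(28.0 - drho, T)) / (2 * drho)
        print(f"T = {T:6.1f}   d<z>/drho ~ {grad:10.3f}")  # fluctuates, does not converge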

  18. Introduction to special section on sensitivity analysis and summary of NCSU/USDA workshop on sensitivity analysis.

    PubMed

    Frey, H Christopher

    2002-06-01

    This guest editorial is a summary of the NCSU/USDA Workshop on Sensitivity Analysis held June 11-12, 2001 at North Carolina State University and sponsored by the U.S. Department of Agriculture's Office of Risk Assessment and Cost Benefit Analysis. The objective of the workshop was to learn across disciplines in identifying, evaluating, and recommending sensitivity analysis methods and practices for application to food-safety process risk models. The workshop included presentations regarding the Hazard Assessment and Critical Control Points (HACCP) framework used in food-safety risk assessment, a survey of sensitivity analysis methods, invited white papers on sensitivity analysis, and invited case studies regarding risk assessment of microbial pathogens in food. Based on the sharing of interdisciplinary information represented by the presentations, the workshop participants, divided into breakout sessions, responded to three trigger questions: What are the key criteria for sensitivity analysis methods applied to food-safety risk assessment? What sensitivity analysis methods are most promising for application to food safety and risk assessment? and What are the key needs for implementation and demonstration of such methods? The workshop produced agreement regarding key criteria for sensitivity analysis methods and the need to use two or more methods to try to obtain robust insights. Recommendations were made regarding a guideline document to assist practitioners in selecting, applying, interpreting, and reporting the results of sensitivity analysis.

  19. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  20. New Uses for Sensitivity Analysis: How Different Movement Tasks Affect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  1. Towards More Efficient and Effective Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2014-05-01

    Sensitivity analysis (SA) is an important paradigm in the context of model development and application. There are a variety of approaches towards sensitivity analysis that formally describe different "intuitive" understandings of the sensitivity of a single or multiple model responses to different factors such as model parameters or forcings. These approaches are based on different philosophies and theoretical definitions of sensitivity and range from simple local derivatives to rigorous Sobol-type analysis-of-variance approaches. In general, different SA methods focus on and identify different properties of the model response and may lead to different, sometimes even conflicting, conclusions about the underlying sensitivities. This presentation revisits the theoretical basis for sensitivity analysis, critically evaluates the existing approaches in the literature, and demonstrates their shortcomings through simple examples. Important properties of response surfaces that are associated with the understanding and interpretation of sensitivities are outlined. A new approach towards global sensitivity analysis is developed that attempts to encompass the important, sensitivity-related properties of response surfaces. Preliminary results show that the new approach is superior to the standard approaches in the literature in terms of effectiveness and efficiency.
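
    The gap between the two ends of that spectrum is easy to demonstrate. The sketch below contrasts a local derivative at a nominal point with a brute-force estimate of the first-order Sobol index Var(E[Y|Xi])/Var(Y) for the standard Ishigami test function; the second input is invisible to the local method yet carries the largest share of the output variance. The Monte Carlo binning estimator is written for clarity, not efficiency.

    import numpy as np

    def ishigami(x1, x2, x3, a=7.0, b=0.1):
        return np.sin(x1) + a * np.sin(x2) ** 2 + b * x3 ** 4 * np.sin(x1)

    # local sensitivity: central differences at the nominal point (0, 0, 0)
    h = 1e-6
    local = [(ishigami(*(np.eye(3)[i] * h)) - ishigami(*(-np.eye(3)[i] * h))) / (2 * h)
             for i in range(3)]

    # global sensitivity: first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y)
    rng = np.random.default_rng(0)
    X = rng.uniform(-np.pi, np.pi, size=(3, 100_000))
    Y = ishigami(*X)

    def sobol_first(xi, y, bins=50):
        edges = np.quantile(xi, np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(xi, edges) - 1, 0, bins - 1)
        cond = np.array([y[idx == b].mean() for b in range(bins)])
        return cond.var() / y.var()

    print("local derivatives:", np.round(local, 3))   # x2 and x3 look inert at the origin
    print("Sobol S_i:", [round(sobol_first(X[i], Y), 3) for i in range(3)])  # x2 dominates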

  2. Global sensitivity analysis of the Indian monsoon during the Pleistocene

    NASA Astrophysics Data System (ADS)

    Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.

    2015-01-01

    The sensitivity of the Indian monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) development of an experiment plan, designed to efficiently sample a five-dimensional input space spanning Pleistocene astronomical configurations (three parameters), CO2 concentration and a Northern Hemisphere glaciation index; (2) development, calibration and validation of an emulator of HadCM3 in order to estimate the response of the Indian monsoon over the full input space spanned by the experiment design; and (3) estimation and interpretation of sensitivity diagnostics, including sensitivity measures, in order to synthesise the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. By focusing on surface temperature, precipitation, mixed-layer depth and sea-surface temperature over the monsoon region during the summer season (June-July-August-September), we show that precession controls the response of four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation.
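
    The emulator step (2) can be sketched in a few lines with an off-the-shelf Gaussian process. The toy two-input function below plays the role of an expensive climate-model diagnostic (it is not HadCM3), and the design, fit and predict stages mirror the three-step strategy.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(1)

    def expensive_model(X):
        # toy stand-in for one emulated climate-model diagnostic
        return np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2

    X_design = rng.uniform(0, 1, size=(40, 2))   # (1) the experiment plan
    y = expensive_model(X_design)                # 40 "model runs"

    gp = GaussianProcessRegressor(ConstantKernel() * RBF([0.2, 0.2]),
                                  normalize_y=True).fit(X_design, y)   # (2) emulator

    X_new = rng.uniform(0, 1, size=(5, 2))       # (3) cheap predictions feed the SA
    mean, std = gp.predict(X_new, return_std=True)
    for x, m, s in zip(X_new, mean, std):
        print(f"x = {np.round(x, 2)}   emulator = {m:+.3f} +/- {2 * s:.3f}")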

  3. Global sensitivity analysis of Indian Monsoon during the Pleistocene

    NASA Astrophysics Data System (ADS)

    Araya-Melo, P. A.; Crucifix, M.; Bounceur, N.

    2014-04-01

    The sensitivity of the Indian Monsoon to the full spectrum of climatic conditions experienced during the Pleistocene is estimated using the climate model HadCM3. The methodology follows a global sensitivity analysis based on the emulator approach of Oakley and O'Hagan (2004) implemented following a three-step strategy: (1) develop an experiment plan, designed to efficiently sample a 5-dimensional input space spanning Pleistocene astronomical configurations (3 parameters), CO2 concentration and a Northern Hemisphere glaciation index, (2) develop, calibrate and validate an emulator of HadCM3, in order to estimate the response of the Indian Monsoon over the full input space spanned by the experiment design, and (3) estimate and interpret sensitivity diagnostics, including sensitivity measures, in order to synthesize the relative importance of input factors on monsoon dynamics, estimate the phase of the monsoon intensity response with respect to that of insolation, and detect potential non-linear phenomena. Specifically, we focus on four variables: summer (JJAS) temperature and precipitation over North India, and JJAS sea-surface temperature and mixed-layer depth over the north-western side of the Indian ocean. It is shown that precession controls the response of the four variables: continental temperature in phase with June to July insolation, high glaciation favouring a late-phase response, sea-surface temperature in phase with May insolation, continental precipitation in phase with July insolation, and mixed-layer depth in antiphase with the latter. CO2 variations control temperature variance with an amplitude similar to that of precession. The effect of glaciation is dominated by the albedo forcing, and its effect on precipitation competes with that of precession. Obliquity is a secondary effect, negligible on most variables except sea-surface temperature. It is also shown that orography forcing reduces the glacial cooling, and even has a positive effect on precipitation.

  4. Buckling Design and Imperfection Sensitivity of Sandwich Composite Launch-Vehicle Shell Structures

    NASA Technical Reports Server (NTRS)

    Schultz, Marc R.; Sleight, David W.; Myers, David E.; Waters, W. Allen, Jr.; Chunchu, Prasad B.; Lovejoy, Andrew W.; Hilburger, Mark W.

    2016-01-01

    Composite materials are increasingly being considered and used for launch-vehicle structures. For shell structures, such as interstages, skirts, and shrouds, honeycomb-core sandwich composites are often selected for their structural efficiency. Therefore, it is becoming increasingly important to understand the structural response, including buckling, of sandwich composite shell structures. Additionally, small geometric imperfections can significantly influence the buckling response, including considerably reducing the buckling load, of shell structures. Thus, both the response of the theoretically perfect structure and the buckling imperfection sensitivity must be considered during the design of such structures. To address the latter, empirically derived design factors, called buckling knockdown factors (KDFs), were developed by NASA in the 1960s to account for this buckling imperfection sensitivity during design. However, most of the test-article designs used in the development of these recommendations are not relevant to modern launch-vehicle constructions and material systems, and in particular, no composite test articles were considered. Herein, a two-part study on composite sandwich shells to (1) examine the relationship between the buckling knockdown factor and the areal mass of optimized designs, and (2) interrogate the imperfection sensitivity of those optimized designs is presented. Four structures from recent NASA launch-vehicle development activities are considered. First, designs optimized for both strength and stability were generated for each of these structures using design optimization software and a range of buckling knockdown factors; it was found that the designed areal masses varied between 6.1% and 19.6% over knockdown factors ranging from 0.6 to 0.9. Next, the buckling imperfection sensitivity of the optimized designs is explored using nonlinear finite-element analysis and the as-measured shape of a large-scale composite cylindrical

  5. Context-sensitive design and human interaction principles for usable, useful, and adoptable radars

    NASA Astrophysics Data System (ADS)

    McNamara, Laura A.; Klein, Laura M.

    2016-05-01

    The evolution of exquisitely sensitive Synthetic Aperture Radar (SAR) systems is positioning this technology for use in time-critical environments, such as search-and-rescue missions and improvised explosive device (IED) detection. SAR systems should be playing a keystone role in the United States' Intelligence, Surveillance, and Reconnaissance activities. Yet many in the SAR community see missed opportunities for incorporating SAR into existing remote sensing data collection and analysis challenges. Drawing on several years of field research with SAR engineering and operational teams, this paper examines the human and organizational factors that militate against the adoption and use of SAR for tactical ISR and operational support. We suggest that SAR has a design problem, and that context-sensitive, human and organizational design frameworks are required if the community is to realize SAR's tactical potential.

  6. Design, characterization, and sensitivity of the supernova trigger system at Daya Bay

    NASA Astrophysics Data System (ADS)

    Wei, Hanyu; Lebanowski, Logan; Li, Fei; Wang, Zhe; Chen, Shaomin

    2016-02-01

    Providing an early warning of galactic supernova explosions from neutrino signals is important in studying supernova dynamics and neutrino physics. A dedicated supernova trigger system has been designed and installed in the data acquisition system at Daya Bay and integrated into the worldwide Supernova Early Warning System (SNEWS). Daya Bay's unique feature of eight identically-designed detectors deployed in three separate experimental halls makes the trigger system naturally robust against cosmogenic backgrounds, enabling a prompt analysis of online triggers and a tight control of the false-alert rate. The trigger system is estimated to be fully sensitive to 1987A-type supernova bursts throughout most of the Milky Way. The significant gain in sensitivity of the eight-detector configuration over a mass-equivalent single detector is also estimated. The experience of this online trigger system is applicable to future projects with spatially distributed detectors.

  7. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
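
    A bootstrap convergence check of the kind described can be sketched as follows: resample the same model runs many times, recompute the indices, and watch the confidence intervals tighten as the sample size grows. The crude binned first-order estimator and the three-parameter toy model are placeholders for the hydrological models and SA methods of the study.

    import numpy as np

    rng = np.random.default_rng(2)

    def model(X):  # toy model; the third parameter is (nearly) insensitive
        return X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.01 * X[:, 2]

    def first_order(xi, y, bins=20):
        # crude Var(E[Y|Xi]) / Var(Y) estimate for xi ~ U(0, 1)
        idx = np.minimum((xi * bins).astype(int), bins - 1)
        return np.array([y[idx == b].mean() for b in range(bins)]).var() / y.var()

    for n in (500, 2000, 8000):
        X = rng.uniform(0, 1, (n, 3))
        y = model(X)
        boot = np.array([[first_order(X[r, i], y[r]) for i in range(3)]
                         for r in [rng.integers(0, n, n) for _ in range(200)]])
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        print(f"n = {n:4d}   S ~ {np.round(boot.mean(axis=0), 3)}"
              f"   95% CI width ~ {np.round(hi - lo, 3)}")
    # the ranking and the screening of parameter 3 typically stabilize at
    # smaller n than the index values themselves do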

  8. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.

  9. A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element

    PubMed Central

    Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili

    2016-01-01

    This paper describes a small-range six-axis accelerometer (the measurement range of the sensor is ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of a sensor is affected by its sensitivity characteristics. To improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is found to be 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that an accelerometer with a DCB elastic element exhibits excellent sensitivity and precision characteristics. PMID:27657089

  10. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of fabric-based engine containment systems through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. In a deterministic optimization problem, even though the structures are cost-effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first order reliability analysis and second order reliability analysis, followed by simulation technique that
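
    The core MCS idea reduces to a few lines: draw material strength from a fitted distribution, compare against a demand, and tally the limit-state failures, rather than running one deterministic case. The lognormal parameters and the limit state below are illustrative placeholders, not the Kevlar 49 / LS-DYNA model.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 100_000
    # strength drawn from a fitted distribution (parameters illustrative)
    strength = rng.lognormal(mean=np.log(3.0), sigma=0.08, size=n)
    demand = rng.normal(2.6, 0.15, size=n)        # applied stress, also uncertain
    pf = np.mean(strength <= demand)              # limit state: strength - demand < 0
    print(f"estimated failure probability: {pf:.4f}"
          f" +/- {np.sqrt(pf * (1 - pf) / n):.4f}")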

  11. Estimate design sensitivity to process variation for the 14nm node

    NASA Astrophysics Data System (ADS)

    Landié, Guillaume; Farys, Vincent

    2016-03-01

    Looking for the highest density and best performance, the 14nm technological node saw the development of aggressive designs, with design rules as close as possible to the limit of the process. The edge placement error (EPE) budget is now tighter, and Reticle Enhancement Techniques (RET) must take into account the highest number of parameters to be able to get the best printability and guarantee yield requirements. Overlay is a parameter that must be taken into account early during design library development to avoid design structures presenting a high risk of performance failure. This paper presents a method taking into account the overlay variation and the Resist Image simulation across the process window variation to estimate the design sensitivity to overlay. Areas in the design are classified with specific metrics, from the highest to the lowest overlay sensitivity. This classification can be used to evaluate the robustness of a full-chip product to process variability or to work with designers during design library development. The ultimate goal is to evaluate critical structures in different contexts and report the most critical ones. In this paper, we study layers interacting together, such as Contact/Poly area overlap or Contact/Active distance. ASML-Brion tooling allowed simulating the different resist contours and applying the overlay value to one of the layers. Lithography Manufacturability Check (LMC) detectors are then set to extract the desired values for analysis. Two different approaches have been investigated. The first is a systematic overlay, where we apply the same overlay everywhere on the design. The second uses a real overlay map which has been measured and applied to the LMC tools. The data are then post-processed and compared to the design target to create a classification and show the error distribution.

  12. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media, and at early stages of the design such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented with several structural configurations. The FAST method is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
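
    For readers unfamiliar with FAST, the sketch below shows the mechanism in its simplest form: every input is driven along a search curve at its own integer frequency, and each input's first-order share of the output variance is read off the Fourier spectrum at that frequency and its harmonics. The frequency set and the toy model are illustrative choices, not those of the paper.

    import numpy as np

    def fast_first_order(model, freqs, n=10001, harmonics=4):
        s = np.linspace(-np.pi, np.pi, n, endpoint=False)
        # search curve: inputs oscillate in (0, 1) at distinct integer frequencies
        X = 0.5 + np.arcsin(np.sin(np.outer(freqs, s))) / np.pi
        y = model(X)
        amp = np.abs(np.fft.rfft(y) / n)
        power = 2 * amp[1:] ** 2                  # spectrum without the mean term
        return [power[[w * h - 1 for h in range(1, harmonics + 1)]].sum() / power.sum()
                for w in freqs]

    def model(X):  # toy model on (0, 1)^3
        return np.sin(2 * np.pi * X[0]) + 5 * X[1] ** 2 + 0.1 * X[2]

    # frequencies chosen so the low-order harmonics do not collide (illustrative)
    print(np.round(fast_first_order(model, freqs=[11, 35, 73]), 3))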

  13. Workstation analysis for nuclear design

    SciTech Connect

    Kohn, J.; Cecil, A.; Hardin, D.; Hartwell, D.; Long, J.

    1985-07-02

    This report contains an analysis of workstation needs for code development in the Nuclear Design Program of Lawrence Livermore National Laboratory. The purpose of this analysis was to identify those features of existing workstations that would significantly enhance the effectiveness and productivity of programmers and code developer physicists in their daily interaction with Cray supercomputers located on the Octopus Network at the Laboratory. The analysis took place from March 1985 through June 1985. The analysis report is broken into two parts. Part 1 identifies the end users and their working environment. Definitions are given for terms used throughout the remainder of the report. Part 2 lists the characteristics that an ideal workstation ought to have to be useful for code development in the Nuclear Design Program.

  14. BEHAVIOR OF SENSITIVITIES IN THE ONE-DIMENSIONAL ADVECTION-DISPERSION EQUATION: IMPLICATIONS FOR PARAMETER ESTIMATION AND SAMPLING DESIGN.

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences the variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at particular points in space and time. (2) As the distance of observation points from the upstream boundary increases, the maximum sensitivity to velocity during passage of the solute front increases. (3) The frequency of sampling must be 'in phase' with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If the variance of random error in observations is large, trends in sensitivities of observation points may be obscured by noise. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters.
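
    Principles (2) and (4) can be checked numerically from the closed-form (Ogata-Banks) solution of the 1-D advection-dispersion equation for a continuous source; the sketch below differences that solution to get scaled sensitivities at an observation point as the front passes. Parameter values are illustrative, not from the paper.

    import numpy as np
    from scipy.special import erfc

    def conc(x, t, v, D, c0=1.0):
        # Ogata-Banks solution for a continuous source at x = 0
        a = 2.0 * np.sqrt(D * t)
        return 0.5 * c0 * (erfc((x - v * t) / a)
                           + np.exp(v * x / D) * erfc((x + v * t) / a))

    x, v, D = 10.0, 0.5, 0.05        # observation point and parameters (illustrative)
    t = np.linspace(1.0, 40.0, 400)

    def scaled_sens(name, p, rel=1e-4):
        # scaled sensitivity p * dC/dp by central differences
        kw = {"v": v, "D": D}
        kw[name] = p * (1 + rel); hi = conc(x, t, **kw)
        kw[name] = p * (1 - rel); lo = conc(x, t, **kw)
        return p * (hi - lo) / (2 * rel * p)

    sv, sD = scaled_sens("v", v), scaled_sens("D", D)
    print(f"max |v dC/dv| = {np.abs(sv).max():.3f} at t = {t[np.abs(sv).argmax()]:.1f}")
    print(f"max |D dC/dD| = {np.abs(sD).max():.3f}  (velocity dominates, cf. point 4)")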

  15. Programmable ion-sensitive transistor interfaces. III. Design considerations, signal generation, and sensitivity enhancement.

    PubMed

    Jayant, Krishna; Auluck, Kshitij; Rodriguez, Sergio; Cao, Yingqiu; Kan, Edwin C

    2014-05-01

    We report on factors that affect DNA hybridization detection using ion-sensitive field-effect transistors (ISFETs). Signal generation at the interface between the transistor and immobilized biomolecules is widely ascribed to unscreened molecular charges causing a shift in surface potential and hence the transistor output current. Traditionally, the interaction between DNA and the dielectric or metal sensing interface is modeled by treating the molecular layer as a sheet charge and the ionic profile with a Poisson-Boltzmann distribution. The surface potential under this scenario is described by the Grahame equation. This approximation, however, often fails to explain large hybridization signals on the order of tens of mV. More realistic descriptions of the DNA-transistor interface, which include factors such as ion permeation, exclusion, and packing constraints, have been proposed with little or no corroboration against experimental findings. In this study, we examine such physical models by their assumptions, range of validity, and limitations. We compare simulations against experiments performed on electrolyte-oxide-semiconductor capacitors and foundry-ready floating-gate ISFETs. We find that with weakly charged interfaces (i.e., low intrinsic interface charge), pertinent to the surfaces used in this study, the best agreement between theory and experiment exists when ions are completely excluded from the DNA layer. The influence of various factors such as bulk pH, background salinity, chemical reactivity of surface groups, target molecule concentration, and surface coatings on signal generation is studied. Furthermore, in order to overcome Debye-screening-limited detection, we suggest two signal enhancement strategies. We first describe frequency domain biosensing, highlighting the ability to sort short DNA strands based on molecular length, and then describe DNA biosensing in multielectrolytes comprising trace amounts of higher-valency salt in a background of
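
    The Grahame relation referred to above links the diffuse-layer charge to the surface potential, sigma = sqrt(8*eps*eps0*kB*T*n0) * sinh(z*e*psi0 / (2*kB*T)). The short sketch below inverts it for psi0 at several salt concentrations, showing how higher background salinity screens (shrinks) the measurable potential for a fixed layer charge. The constants are generic room-temperature values, not fitted to the paper.

    import numpy as np

    e, kB, T = 1.602e-19, 1.381e-23, 298.0
    eps0, eps_r = 8.854e-12, 78.5               # vacuum permittivity, water

    def psi0_from_sigma(sigma, c_molar, z=1):
        # invert sigma = sqrt(8*eps_r*eps0*kB*T*n0) * sinh(z*e*psi0 / (2*kB*T))
        n0 = 1000 * 6.022e23 * c_molar          # bulk ion density, m^-3
        A = np.sqrt(8 * eps_r * eps0 * kB * T * n0)
        return 2 * kB * T / (z * e) * np.arcsinh(sigma / A)

    sigma = -0.01                               # C/m^2, illustrative DNA-layer charge
    for c in (0.001, 0.01, 0.1):
        print(f"{1e3 * c:6.1f} mM -> psi0 ~ {1e3 * psi0_from_sigma(sigma, c):7.1f} mV")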

  17. Partial Differential Algebraic Sensitivity Analysis Code

    1995-05-15

    PDASAC solves stiff, nonlinear initial-boundary-value problems in a timelike dimension t and a space dimension x. Plane, circular cylindrical, or spherical boundaries can be handled. Mixed-order systems of partial differential and algebraic equations can be analyzed, with members of order 0 or 1 in t and 0, 1, or 2 in x. Parametric sensitivities of the calculated states are computed simultaneously on request, via the Jacobian of the state equations. Initial and boundary conditions are efficiently reconciled. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the parametric sensitivities if desired.

  18. Sensitivity analysis of limit cycles with application to the Brusselator

    SciTech Connect

    Larter, R.; Rabitz, H.; Kramer, M.

    1984-05-01

    Sensitivity analysis, by which it is possible to determine the dependence of the solution of a system of differential equations on variations in the parameters, is applied to systems which have a limit cycle solution in some region of parameter space. The resulting expressions for the sensitivity coefficients, which are the gradients of the limit cycle solution in parameter space, are analyzed by a Fourier series approach; the sensitivity coefficients are found to contain information on the sensitivity of the period and other features of the limit cycle. The intimate relationship between Lyapunov stability analysis and sensitivity analysis is discussed. The results of our general derivation are applied to two limit cycle oscillators: (1) an exactly soluble two-species oscillator and (2) the Brusselator.
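
    One of the quantities the Fourier approach extracts, the period sensitivity, can also be approximated crudely without any of the paper's machinery, which makes a useful cross-check. The sketch below integrates the Brusselator, reads the period off successive peaks, and central-differences it with respect to the parameter b; the parameter values are the usual textbook ones.

    import numpy as np
    from scipy.integrate import solve_ivp

    def brusselator(t, u, a, b):
        x, y = u
        return [a + x * x * y - (b + 1) * x, b * x - x * x * y]

    def period(a, b, T=200.0):
        sol = solve_ivp(brusselator, (0, T), [1.0, 1.0], args=(a, b),
                        dense_output=True, rtol=1e-9, atol=1e-9)
        t = np.linspace(T / 2, T, 200_000)       # sample after transients decay
        x = sol.sol(t)[0]
        peaks = t[1:-1][(x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])]
        return np.diff(peaks).mean()             # mean peak-to-peak interval

    a, b, db = 1.0, 3.0, 0.01
    dT_db = (period(a, b + db) - period(a, b - db)) / (2 * db)
    print(f"period T ~ {period(a, b):.4f},  dT/db ~ {dT_db:.3f}")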

  19. Structural Analysis and Design Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Collier Research and Development Corporation received a one-of-a-kind computer code for designing exotic hypersonic aircraft, called ST-SIZE, in the first-ever Langley Research Center software copyright license agreement. Collier transformed the NASA computer code into a commercial software package called HyperSizer, which integrates with private-sector Finite Element Modeling and Finite Element Analysis structural analysis programs. ST-SIZE was chiefly conceived as a means to improve and speed the structural design of a future aerospace plane for the Langley Hypersonic Vehicles Office. Including the NASA computer code in HyperSizer has enabled the company to also apply the software to applications other than aerospace, including improved design and construction for offices, marine structures, cargo containers, commercial and military aircraft, rail cars, and a host of everyday consumer products.

  20. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  1. Validation of FSP Reactor Design with Sensitivity Studies of Beryllium-Reflected Critical Assemblies

    SciTech Connect

    John D. Bess; Margaret A. Marshall

    2013-02-01

    The baseline design for space nuclear power is a fission surface power (FSP) system: a sodium-potassium (NaK) cooled, fast-spectrum reactor with highly enriched uranium (HEU)-O2 fuel, stainless steel (SS) cladding, and beryllium reflectors with B4C control drums. Previous studies were performed to evaluate modeling capabilities and quantify uncertainties and biases associated with analysis methods and nuclear data. Comparison of Zero Power Plutonium Reactor (ZPPR)-20 benchmark experiments with the FSP design indicated that further reduction of the total design model uncertainty requires a reduction in the uncertainties pertaining to beryllium and uranium cross-section data. Further comparison with three beryllium-reflected HEU-metal benchmark experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) established the requirement that experimental validation data have cross-section sensitivities similar to those found in the FSP design. A series of critical experiments was performed at ORCEF in the 1960s to support the Medium Power Reactor Experiment (MPRE) space reactor design. The small, compact critical assembly (SCCA) experiments were graphite- or beryllium-reflected assemblies of SS-clad, HEU-O2 fuel on a vertical lift machine. All five configurations were evaluated as benchmarks. Two of the five configurations were beryllium reflected, and were further evaluated using the sensitivity and uncertainty analysis capabilities of SCALE 6.1. Validation of the example FSP design model was successful in reducing the primary uncertainty constituent, the Be(n,n) reaction, from 0.28 %dk/k to 0.0004 %dk/k. Further assessment of additional reactor physics measurements performed on the SCCA experiments may serve to further validate FSP design and operation.

  2. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and the nuclear-power-related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  3. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
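
    The genetic-algorithm core of such a tool fits in a few lines; the sketch below evolves a vector of wall-design variables under a hypothetical weighted fitness (a stand-in for the tool's multi-objective structural/thermal/radiation/meteoroid model), using the tournament selection, crossover, and mutation that implement the "survival of the fittest" idea.

    import numpy as np

    rng = np.random.default_rng(3)
    n_pop, n_var, n_gen = 60, 5, 100

    def fitness(x):
        # hypothetical scalarized objective: reward protection, penalize mass
        return np.sqrt(x).sum() - 0.8 * x.sum()

    pop = rng.uniform(0.1, 2.0, size=(n_pop, n_var))   # e.g. wall-layer thicknesses
    for _ in range(n_gen):
        f = np.array([fitness(ind) for ind in pop])
        i, j = rng.integers(0, n_pop, (2, n_pop))
        parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])  # tournament
        mates = parents[rng.permutation(n_pop)]
        cut = rng.integers(1, n_var, n_pop)[:, None]                # crossover point
        cross = np.arange(n_var)[None, :] >= cut
        children = np.where(cross, mates, parents)
        # sparse Gaussian mutation, then clip back into the design bounds
        children += rng.normal(0, 0.05, children.shape) * (rng.random(children.shape) < 0.1)
        pop = children.clip(0.1, 2.0)
    best = pop[np.argmax([fitness(ind) for ind in pop])]
    print("best design:", np.round(best, 3), " fitness:", round(float(fitness(best)), 3))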

  4. Sensitivity Analysis in Complex Plasma Chemistry Models

    NASA Astrophysics Data System (ADS)

    Turner, Miles

    2015-09-01

    The purpose of a plasma chemistry model is the prediction of chemical species densities, including understanding the mechanisms by which such species are formed. These aims are compromised by an uncertain knowledge of the rate constants included in the model, which directly causes uncertainty in the model predictions. We recently showed that this predictive uncertainty can be large--a factor of ten or more in some cases. There is probably no context in which a plasma chemistry model might be used where the existence of uncertainty on this scale could not be a matter of concern. A question that at once follows is: which rate constants cause such uncertainty? In the present paper we show how this question can be answered by applying a systematic screening procedure--the so-called Morris method--to identify sensitive rate constants. We investigate the topical example of helium-oxygen chemistry. Beginning with a model with almost four hundred reactions, we show that only about fifty rate constants materially affect the model results, and as few as ten cause most of the uncertainty. This means that the model can be improved, and the uncertainty substantially reduced, by focussing attention on this tractably small set of rate constants. Work supported by Science Foundation Ireland under grant 08/SRC/I1411, and by COST Action MP1101 "Biomedical Applications of Atmospheric Pressure Plasmas."
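
    A minimal version of the Morris screening mentioned above: random one-at-a-time trajectories yield "elementary effects" per parameter, and the mean absolute effect (mu*) separates the few influential parameters from the inert majority. The toy model with three active parameters out of twenty stands in for the several-hundred-reaction chemistry.

    import numpy as np

    rng = np.random.default_rng(4)
    n_par, n_traj, delta = 20, 30, 0.1

    def model(x):  # toy: only parameters 0, 3 and 7 matter
        return 5 * x[0] + 2 * x[3] ** 2 + x[0] * x[7]

    ee = [[] for _ in range(n_par)]
    for _ in range(n_traj):
        x = rng.random(n_par)
        for p in rng.permutation(n_par):         # one-at-a-time moves along a trajectory
            step = delta if x[p] <= 1 - delta else -delta
            y0 = model(x)
            x[p] += step
            ee[p].append((model(x) - y0) / step)
    mu_star = [np.mean(np.abs(v)) for v in ee]
    for p in np.argsort(mu_star)[::-1][:5]:      # top five by mean |elementary effect|
        print(f"parameter {p:2d}: mu* = {mu_star[p]:.3f}")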

  5. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives, with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several derivatives simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
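
    The trade-off such step-size algorithms navigate is the classic one between truncation error, which shrinks with h, and round-off error, which grows as h shrinks; the sketch below reproduces it on a function with a known derivative. For a well-scaled forward difference in double precision, the sweet spot sits near the square root of machine epsilon.

    import numpy as np

    f, dfdx, x0 = np.exp, np.exp, 1.0            # toy function with known derivative
    for h in 10.0 ** np.arange(-1, -15, -1):
        err = abs((f(x0 + h) - f(x0)) / h - dfdx(x0))
        print(f"h = {h:8.1e}   error = {err:.2e}")
    # the error bottoms out near h ~ 1e-8 (sqrt of double-precision epsilon)
    # and grows again as round-off takes over -- the regime FD algorithms hunt for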

  6. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  7. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation, regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  8. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the results of a NEAMS project focused on sensitivity analysis of the heat transfer model for the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  9. Mathematical Modeling and Sensitivity Analysis of Acid Deposition

    NASA Astrophysics Data System (ADS)

    Cho, Seog-Yeon

    Atmospheric processes influencing acid deposition are investigated by using a mathematical model and sensitivity analysis. Sensitivity analysis techniques including Green's function analysis, constraint sensitivities, and lumped sensitivities are applied to temporal problems describing gas and liquid phase chemistry, and to space-time problems describing pollutant transport and deposition. The sensitivity analysis techniques are used to (1) investigate the chemical and physical processes related to acid deposition, and (2) evaluate the linearity hypothesis and source-receptor relationships. Results from analysis of the chemistry processes show that the relationship between SO2 concentration and the amount of sulfate produced is linear in the gas phase, but may be nonlinear in the liquid phase when there exists an excess amount of SO2 compared to H2O2. Under the simulated conditions, the deviation from linearity between the ambient sulfur present and the amount of sulfur deposited after 2 hours is less than 10% in a convective storm situation when the liquid phase chemistry, gas phase chemistry, and cloud processes are considered simultaneously. Efficient ways of performing sensitivity analysis of time-space problems are also developed and used to evaluate the source-receptor relationships in an Eulerian transport, chemistry, and removal model.

  10. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  11. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization.

    PubMed

    Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M

    2013-11-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.

  12. Plans for a sensitivity analysis of bridge-scour computations

    USGS Publications Warehouse

    Dunn, David D.; Smith, Peter N.

    1993-01-01

    Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.

  13. Sensitivity analysis on an AC600 aluminum skin component

    NASA Astrophysics Data System (ADS)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced into the car body in order to reduce weight and fulfil the international CO2 emission regulations. Among them, the application of aluminum alloys for skin panels is increasing. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high-strength steel components and on the development of new anisotropic models for aluminum cup drawing. However, the impact of correct modelling on the latest aluminum alloys for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  14. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

    The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run due to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of the thesis is devoted to methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, it produces astrophysical data that must be calibrated to units of meters or strain. The second part of the thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in

  15. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.
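
    Under the simplifying assumption of corresponding surface points with outward reference normals, a global and a local error index might be computed as in this sketch (not the authors' implementation); a positive signed distance marks overestimation:

      import numpy as np

      def error_indexes(reference, model, normals):
          # Signed point-to-point error along the outward reference normal
          signed = np.einsum("ij,ij->i", model - reference, normals)
          return signed.mean(), signed  # global index, local distribution

      ref = np.array([[0.0, 0.0, 1.0]])
      mod = np.array([[0.0, 0.0, 1.2]])
      nrm = np.array([[0.0, 0.0, 1.0]])
      print(error_indexes(ref, mod, nrm))  # positive 0.2: model overestimates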

  16. Software Design for Smile Analysis

    PubMed Central

    Sodagar, A.; Rafatjoo, R.; Gholami Borujeni, D.; Noroozi, H.; Sarkhosh, A.

    2010-01-01

    Introduction: Esthetics and attractiveness of the smile is one of the major demands in contemporary orthodontic treatment. In order to improve a smile design, it is necessary to record the “posed smile”, an intentional, non-pressured, static, natural and reproducible smile. The record should then be analyzed to determine its characteristics. In this study, we intended to design and introduce software to analyze the smile rapidly and precisely in order to produce an attractive smile for the patients. Materials and Methods: For this purpose, a practical study was performed to design the multimedia software “Smile Analysis”, which can receive patients’ photographs and videographs. After loading the records into the software, the operator marks the points and lines displayed in the system’s guide and defines the correct scale for each image. Thirty-three variables are measured by the software and displayed on the report page. Reliability of measurements in both image and video was significantly high (α=0.7–1). Results: In order to evaluate intra-operator and inter-operator reliability, five cases were selected randomly. Statistical analysis showed that calculations performed by the smile analysis software were both valid and highly reliable (for both video and photo). Conclusion: The results obtained from smile analysis could be used in diagnosis, treatment planning and evaluation of treatment progress. PMID:21998792

  17. Analysis-Based Message Design: Rethinking Screen Design Guidelines.

    ERIC Educational Resources Information Center

    Beriswill, Joanne E.

    This article describes the evolution of computer interface research issues from text-based interface design guidelines to more complex issues, including media selection, interface design, and visual design. This research is then integrated into the Analysis-based Message Design (AMD) process. The AMD process divides the interface design process…

  18. Designing novel nano-immunoassays: antibody orientation versus sensitivity

    NASA Astrophysics Data System (ADS)

    Puertas, S.; Moros, M.; Fernández-Pacheco, R.; Ibarra, M. R.; Grazú, V.; de la Fuente, J. M.

    2010-12-01

    There is growing interest in the use of magnetic nanoparticles (MNPs) for application in quantitative and highly sensitive biosensors. Their use as labels of biological recognition events, with detection by magnetic methods, constitutes a very promising strategy for quantitative, highly sensitive lateral-flow assays. In this paper, we report the importance of nanoparticle functionalization for improving the sensitivity of a lateral-flow immunoassay. More precisely, we have found that immobilization of IgG anti-hCG on MNPs through its polysaccharide moieties allows more successful recognition of the hCG hormone. Although we used the detection of hCG as a model in this work, the strategy reported here of binding antibodies to MNPs through their sugar chains is applicable to other antibodies. It has huge potential, as it will be very useful for the development of quantitative, highly sensitive lateral-flow assays for use in human and veterinary medicine, food and beverage manufacturing, pharmaceutical, medical biologics and personal care product production, environmental remediation, etc.

  19. Bi-harmonic cantilever design for improved measurement sensitivity in tapping-mode atomic force microscopy.

    PubMed

    Loganathan, Muthukumaran; Bristow, Douglas A

    2014-04-01

    This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.

  20. Bi-harmonic cantilever design for improved measurement sensitivity in tapping-mode atomic force microscopy

    SciTech Connect

    Loganathan, Muthukumaran; Bristow, Douglas A.

    2014-04-15

    This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.

  1. Parameter sensitivity analysis of a simplified electrochemical and thermal model for Li-ion batteries aging

    NASA Astrophysics Data System (ADS)

    Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.

    2016-09-01

    In this work, a simplified electrochemical and thermal model that can predict both the physicochemical and aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to determine their influence on the model output, based on simulations under various conditions. The results give hints on whether a parameter needs particular attention when measured or identified, and on the conditions (e.g. temperature, discharge rate) under which it is most sensitive. A specific simulation profile is designed for the parameters involved in the aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fit of the simulated cell voltage to experimental data.
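
    A minimal one-at-a-time (OAT) sketch of such a parameter study, with a toy voltage model standing in for the paper's electrochemical and thermal model; parameter names, values and conditions are hypothetical:

      def oat_sensitivity(model, params, delta=0.05, **conditions):
          # Normalized OAT sensitivities: relative output change per relative
          # parameter change, evaluated under the given operating conditions.
          y0 = model(params, **conditions)
          sens = {}
          for name, p0 in params.items():
              bumped = dict(params, **{name: p0 * (1.0 + delta)})
              sens[name] = (model(bumped, **conditions) - y0) / (delta * y0)
          return dict(sorted(sens.items(), key=lambda kv: -abs(kv[1])))

      def cell_voltage(p, temperature=25.0, c_rate=1.0):
          # Toy cell model, not the paper's physicochemical model
          return p["ocv"] - p["r_int"] * c_rate * (1.0 + 0.01 * (25.0 - temperature))

      print(oat_sensitivity(cell_voltage, {"ocv": 3.7, "r_int": 0.05},
                            temperature=10.0, c_rate=2.0))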

  2. Design and operational parameters of a rooftop rainwater harvesting system: definition, sensitivity and verification.

    PubMed

    Mun, J S; Han, M Y

    2012-01-01

    The appropriate design and evaluation of a rainwater harvesting (RWH) system is necessary to improve system performance and the stability of the water supply. The main design parameters (DPs) of an RWH system are rainfall, catchment area, collection efficiency, tank volume and water demand. Its operational parameters (OPs) include rainwater use efficiency (RUE), water saving efficiency (WSE) and cycle number (CN). A sensitivity analysis of a rooftop RWH system's DPs with respect to its OPs reveals that, in terms of the rate of change in RUE, the recommended ratio of tank volume to catchment area (V/A) for an RWH system in Seoul, South Korea lies between 0.03 and 0.08. The appropriate design value of V/A varies with the demand-to-area ratio (D/A). Extra tank volume, up to a V/A of 0.15∼0.2, can also be used if it is necessary to secure more water. Accordingly, suitable values or ranges of the DPs should be determined from the sensitivity analysis in order to optimize the design of an RWH system or improve its operational efficiency. The operational data employed in this study, which were used to validate the design and evaluation method of an RWH system, were obtained from the system in use at a dormitory complex at Seoul National University (SNU) in Korea. The results from these operational data are in good agreement with those of the initial simulation. The proposed method and the results of this research will be useful in evaluating and comparing the performance of RWH systems. It is found that RUE can be increased by expanding the variety of rainwater uses, particularly in the high-rainfall season.
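
    RUE lends itself to a simple daily mass balance, so its sensitivity to V/A can be swept in a few lines; the rainfall series, runoff coefficient and demand below are invented for illustration:

      import numpy as np

      def rue(rain_mm, area_m2, volume_m3, demand_m3_per_day, runoff_coeff=0.9):
          # Rainwater use efficiency = rainwater supplied / rainwater collected
          stored = used = collected = 0.0
          for r in rain_mm:
              inflow = runoff_coeff * r / 1000.0 * area_m2
              collected += inflow
              stored = min(stored + inflow, volume_m3)   # overflow is lost
              supply = min(stored, demand_m3_per_day)
              stored -= supply
              used += supply
          return used / collected if collected else 0.0

      rng = np.random.default_rng(1)
      rain = rng.exponential(4.0, size=365)              # hypothetical mm/day
      for va in (0.03, 0.08, 0.15):                      # tank volume / area
          print(va, round(rue(rain, 1000.0, va * 1000.0, 2.0), 3))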

  3. Robust global sensitivity analysis of a river management model

    NASA Astrophysics Data System (ADS)

    Peeters, L. J. M.; Podger, G. M.; Smith, T.; Pickett, T.; Bark, R.; Cuddy, S. M.

    2014-03-01

    The simulation of routing and distribution of water through a regulated river system with a river management model quickly results in complex and non-linear model behaviour. A robust sensitivity analysis increases the transparency of the model and provides both the modeller and the system manager with a better understanding of, and insight into, how the model simulates reality and management operations. In this study, a robust, density-based sensitivity analysis, developed by Plischke et al. (2013), is applied to an eWater Source river management model. The sensitivity analysis is extended to account not only for main effects but also for interaction effects, and it is able to identify major linear effects as well as subtle minor and non-linear effects. The case study is an idealised river management model representing typical conditions of the Southern Murray-Darling Basin in Australia, for which the sensitivity of a variety of model outcomes to variations in the driving forces (inflow to the system, rainfall and potential evapotranspiration) is examined. The model outcomes are most sensitive to the inflow to the system, but the sensitivity analysis identified minor effects of potential evapotranspiration as well as non-linear interaction effects between inflow and potential evapotranspiration.
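
    In the spirit of the given-data, density-based approach cited above, a first-order effect of one driver on a model outcome can be estimated by binning, as in this sketch (a generic estimator, not Plischke et al.'s):

      import numpy as np

      def binned_first_order(x, y, bins=20):
          # Variance over bins of E(y | x in bin), divided by Var(y)
          edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
          b = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
          means = np.array([y[b == i].mean() for i in range(bins)])
          weights = np.bincount(b, minlength=bins)
          return np.average((means - y.mean()) ** 2, weights=weights) / y.var()

      rng = np.random.default_rng(2)
      inflow = rng.random(5000)
      outcome = np.sin(2.0 * np.pi * inflow) + 0.1 * rng.normal(size=5000)
      print(binned_first_order(inflow, outcome))  # close to the true effect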

  4. Novel design of dual-core microstructured fiber with enhanced longitudinal strain sensitivity

    NASA Astrophysics Data System (ADS)

    Szostkiewicz, Lukasz; Tenderenda, T.; Napierala, M.; Szymański, M.; Murawski, M.; Mergo, P.; Lesiak, P.; Marc, P.; Jaroszewicz, L. R.; Nasilowski, T.

    2014-05-01

    The constantly refined technology for manufacturing increasingly complex photonic crystal fibers (PCFs) leads to new optical fiber sensor concepts. Ways of enhancing the influence of external factors (such as hydrostatic pressure, temperature or acceleration) on the fiber's propagation conditions are commonly investigated in the literature. Longitudinal strain analysis, on the other hand, is somewhat neglected because of the difficulties of three-dimensional computation. In this paper we show the results of such a 3D numerical simulation and report methods of tuning the fiber strain sensitivity by changing the fiber microstructure and core doping level. Furthermore, our approach allows control over whether the modes' effective refractive index increases or decreases with strain, with the possibility of achieving zero strain sensitivity for specific fiber geometries. The presented numerical analysis is compared with experimental characterization of the fabricated fibers. Based on this methodology, we propose a novel dual-core fiber design with significantly increased sensitivity to longitudinal strain for optical fiber sensor applications. The reported fiber also satisfies the conditions necessary for commercial applications, such as good mode matching with standard single-mode fiber, low confinement loss and ease of manufacturing with the stack-and-draw technique. Such a fiber may serve as an integrated Mach-Zehnder interferometer when a highly coherent source is used. With single-mode transmission optimized for 850 nm, we propose using a VCSEL source in order to achieve a low-cost, reliable and compact strain-sensing transducer.

  5. DESIGN ANALYSIS FOR THE NAVAL SNF WASTE PACKAGE

    SciTech Connect

    T.L. Mitchell

    2000-05-31

    The purpose of this analysis is to demonstrate the design of the naval spent nuclear fuel (SNF) waste package (WP) using the Waste Package Department's (WPD) design methodologies and processes described in the ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000b). The calculations that support the design of the naval SNF WP will be discussed; however, only a sub-set of such analyses will be presented and shall be limited to those identified in the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The objective of this analysis is to describe the naval SNF WP design method and to show that the design of the naval SNF WP complies with the ''Naval Spent Nuclear Fuel Disposal Container System Description Document'' (CRWMS M&O 1999a) and Interface Control Document (ICD) criteria for Site Recommendation. Additional criteria for the design of the naval SNF WP have been outlined in Section 6.2 of the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The scope of this analysis is restricted to the design of the naval long WP containing one naval long SNF canister. This WP is representative of the WPs that will contain both naval short SNF and naval long SNF canisters. The following items are included in the scope of this analysis: (1) Providing a general description of the applicable design criteria; (2) Describing the design methodology to be used; (3) Presenting the design of the naval SNF waste package; and (4) Showing compliance with all applicable design criteria. The intended use of this analysis is to support Site Recommendation reports and assist in the development of WPD drawings. Activities described in this analysis were conducted in accordance with the technical product development plan (TPDP) ''Design Analysis for the Naval SNF Waste Package'' (CRWMS M&O 2000a).

  6. A Comparative Review of Sensitivity and Uncertainty Analysis of Large-Scale Systems - II: Statistical Methods

    SciTech Connect

    Cacuci, Dan G.; Ionescu-Bujor, Mihaela

    2004-07-15

    Part II of this review paper highlights the salient features of the most popular statistical methods currently used for local and global sensitivity and uncertainty analysis of both large-scale computational models and indirect experimental measurements. These statistical procedures represent sampling-based methods (random sampling, stratified importance sampling, and Latin Hypercube sampling), first- and second-order reliability algorithms (FORM and SORM, respectively), variance-based methods (correlation ratio-based methods, the Fourier Amplitude Sensitivity Test, and the Sobol Method), and screening design methods (classical one-at-a-time experiments, global one-at-a-time design methods, systematic fractional replicate designs, and sequential bifurcation designs). It is emphasized that all statistical uncertainty and sensitivity analysis procedures first commence with the 'uncertainty analysis' stage and only subsequently proceed to the 'sensitivity analysis' stage; this path is the exact reverse of the conceptual path underlying the methods of deterministic sensitivity and uncertainty analysis where the sensitivities are determined prior to using them for uncertainty analysis. By comparison to deterministic methods, statistical methods for uncertainty and sensitivity analysis are relatively easier to develop and use but cannot yield exact values of the local sensitivities. Furthermore, current statistical methods have two major inherent drawbacks as follows: 1. Since many thousands of simulations are needed to obtain reliable results, statistical methods are at best expensive (for small systems) or, at worst, impracticable (e.g., for large time-dependent systems). 2. Since the response sensitivities and parameter uncertainties are inherently and inseparably amalgamated in the results produced by these methods, improvements in parameter uncertainties cannot be directly propagated to improve response uncertainties; rather, the entire set of simulations and
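
    As one concrete instance of the sampling designs surveyed, a Latin hypercube sample on the unit hypercube takes only a few lines; this generic sketch is not taken from the review:

      import numpy as np

      def latin_hypercube(n_samples, n_dims, seed=None):
          # One point per equal-probability stratum in every dimension,
          # with the strata paired randomly across dimensions.
          rng = np.random.default_rng(seed)
          u = (rng.random((n_samples, n_dims))
               + np.arange(n_samples)[:, None]) / n_samples
          for j in range(n_dims):
              rng.shuffle(u[:, j])
          return u

      print(latin_hypercube(8, 3, seed=0))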

  7. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
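
    A bare-bones PRCC computation on rank-transformed data could look like the following sketch; the input and outcome data are invented and no IMM conditions are modeled here:

      import numpy as np
      from scipy.stats import rankdata

      def prcc(inputs, output):
          # Correlate the residuals of each ranked input and the ranked output
          # after regressing out the other ranked inputs (partial correlation).
          X = np.apply_along_axis(rankdata, 0, inputs).astype(float)
          y = rankdata(output).astype(float)
          n, k = X.shape
          coeffs = []
          for j in range(k):
              others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
              rx = X[:, j] - others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
              ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
              coeffs.append(np.corrcoef(rx, ry)[0, 1])
          return np.array(coeffs)

      rng = np.random.default_rng(4)
      X = rng.random((200, 3))                    # e.g. condition occurrences
      qtl = 4.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
      print(prcc(X, qtl))                         # strong +, strong -, near 0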

  8. Design Spectrum Analysis in NASTRAN

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1984-01-01

    The utility of Design Spectrum Analysis is to give a mode-by-mode characterization of the behavior of a design under a given loading. The theory of the design spectrum is discussed after the operations are explained. User instructions are taken up here in three parts: Transient Preface, Maximum Envelope Spectrum, and RMS Average Spectrum, followed by a Summary Table. A single DMAP ALTER packet will provide for all parts of the design spectrum operations. The starting point for getting a modal breakdown of the response to acceleration loading is the Modal Transient rigid format. After eigenvalue extraction, modal vectors need to be isolated in the full set of physical coordinates (P-sized, as opposed to the D-sized vectors in RF 12). After integration for transient response, the results are scanned over the solution time interval for the peak values and for the times at which they occur. A module called SCAN was written to do this job; it organizes these maxima into a diagonal output matrix. The maximum amplifier in each mode is applied to the eigenvector of that mode, which then reveals the maximum displacements, stresses, forces and boundary reactions that the structure will experience for a load history, mode by mode. The standard NASTRAN output processors have been modified for this task. It is required that the modes be normalized to mass.
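
    The peak-scanning step can be pictured with the sketch below, which finds each mode's peak response and its time of occurrence and places the peaks on a diagonal matrix; the data and function name are illustrative, not the SCAN module itself:

      import numpy as np

      def scan_peaks(modal_histories, times):
          # modal_histories: (n_modes, n_steps) responses over the solution time
          idx = np.argmax(np.abs(modal_histories), axis=1)
          peaks = modal_histories[np.arange(len(idx)), idx]
          return np.diag(peaks), times[idx]   # diagonal maxima, peak times

      t = np.linspace(0.0, 1.0, 1001)
      hist = np.vstack([np.sin(2 * np.pi * 3 * t) * np.exp(-t),
                        0.5 * np.sin(2 * np.pi * 7 * t)])
      D, t_peak = scan_peaks(hist, t)
      print(np.diag(D), t_peak)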

  9. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  10. MAP stability, design, and analysis

    NASA Technical Reports Server (NTRS)

    Ericsson-Jackson, A. J.; Andrews, S. F.; O'Donnell, J. R., Jr.; Markley, F. L.

    1998-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The design and analysis of the MAP attitude control system (ACS) have been refined since work previously reported. The full spacecraft and instrument flexible model was developed in NASTRAN, and the resulting flexible modes were plotted and reduced with the Modal Significance Analysis Package (MSAP). The reduced-order model was used to perform the linear stability analysis for each control mode, the results of which are presented in this paper. Although MAP is going to a relatively disturbance-free Lissajous orbit around the Earth-Sun L(2) Lagrange point, a detailed disturbance-torque analysis is required because there are only a small number of opportunities for momentum unloading each year. Environmental torques, including solar pressure at L(2), aerodynamic and gravity gradient during phasing-loop orbits, were calculated and simulated. Thruster plume impingement torques that could affect the performance of the thruster modes were estimated and simulated, and a simple model of fuel slosh was derived to model its effect on the motion of the spacecraft. In addition, a thruster mode linear impulse controller was developed to meet the accuracy requirements of the phasing loop burns. A dynamic attitude error limiter was added to improve the performance of the ACS during large attitude slews. The result of this analysis is a stable ACS subsystem that meets all of the mission's requirements.

  11. Computational aspects of sensitivity calculations in linear transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, W. H.; Haftka, R. T.

    1991-01-01

    The calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, and transient response problems is studied. Several existing sensitivity calculation methods and two new methods are compared for three example problems. Approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. This was found to result in poor convergence of stress sensitivities in several cases. Two semianalytical techniques are developed to overcome this poor convergence. Both new methods result in very good convergence of the stress sensitivities; the computational cost is much less than would result if the vibration modes were recalculated and then used in an overall finite difference method.
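
    The overall finite-difference approach with a fixed reduction basis can be sketched on a static toy problem, reusing the modes of the original design for both perturbed analyses; the 2-DOF system, load and parameter below are invented stand-ins for the paper's transient analyses:

      import numpy as np

      def reduced_fd_sensitivity(assemble, phi, f, p, dp):
          # Central difference of the reduced-basis displacement solution,
          # with the basis phi held fixed at the original design.
          def displacement(pv):
              Kr = phi.T @ assemble(pv) @ phi
              return phi @ np.linalg.solve(Kr, phi.T @ f)
          return (displacement(p + dp) - displacement(p - dp)) / (2.0 * dp)

      def assemble(p):
          # Toy 2-DOF spring chain; p scales the second spring's stiffness
          return np.array([[1.0 + p, -p], [-p, p]])

      phi = np.linalg.eigh(assemble(1.0))[1]      # modes of the original design
      print(reduced_fd_sensitivity(assemble, phi, np.array([0.0, 1.0]), 1.0, 1e-4))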

  12. Phase sensitive signal analysis for bi-tapered optical fibers

    NASA Astrophysics Data System (ADS)

    Ben Harush Negari, Amit; Jauregui, Daniel; Sierra Hernandez, Juan M.; Garcia Mina, Diego; King, Branden J.; Idehenre, Ighodalo; Powers, Peter E.; Hansen, Karolyn M.; Haus, Joseph W.

    2016-03-01

    Our study examines the transmission characteristics of bi-tapered optical fibers, i.e. fibers with a tapered-down span and a tapered-up span separated by a waist. Applications to aqueous- and vapor-phase biomolecular sensing demand high sensitivity. A bi-tapered optical fiber platform is suited for label-free biomolecular detection and can be optimized by modifying the length, diameter and surface properties of the tapered region. We have developed a phase-sensitive method based on the interference of two or more modes of the fiber, and we demonstrate that our fiber sensitivity is of order 10^-4 refractive index units. Higher sensitivity can be achieved, as needed, by enhancing the fiber design characteristics.

  13. Shape design sensitivities using fully automatic 3-D mesh generation

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  14. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council (NRC) Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines from what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper examines previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It also discusses, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA. PMID:26567763

  15. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters, and optimization problems with multiple but often conflicting objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in its application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of the logarithmically transformed discharge, the water balance index, and the mean absolute error of the logarithmically transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes for all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more
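
    A compact version of the Morris screening used in this study, with randomized one-step-at-a-time trajectories on the unit hypercube; the test function is hypothetical:

      import numpy as np

      def morris_screening(model, n_dims, n_trajectories=10, delta=0.1, seed=0):
          # mu* (mean absolute elementary effect) and sigma per input factor
          rng = np.random.default_rng(seed)
          ee = np.zeros((n_trajectories, n_dims))
          for t in range(n_trajectories):
              x = rng.random(n_dims) * (1.0 - delta)   # room for the +delta step
              for j in rng.permutation(n_dims):
                  x2 = x.copy()
                  x2[j] += delta
                  ee[t, j] = (model(x2) - model(x)) / delta
                  x = x2
          return np.abs(ee).mean(axis=0), ee.std(axis=0)

      mu_star, sigma = morris_screening(lambda x: x[0] + 2.0 * x[1] * x[2], 3)
      print(mu_star, sigma)   # x0 linear; x1, x2 interacting (large sigma)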

  16. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analysis in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte Carlo methods), sensitivity ranking of parameters, and extension to control system design.

  17. Sensitivity analysis approach to multibody systems described by natural coordinates

    NASA Astrophysics Data System (ADS)

    Li, Xiufeng; Wang, Yabin

    2014-03-01

    The classical natural coordinate modeling method, which removes the Euler angles and Euler parameters from the governing equations, is particularly suitable for the sensitivity analysis and optimization of multibody systems. However, the formulation imposes so many rules for choosing the generalized coordinates that it hinders the automation of modeling. A first-order direct sensitivity analysis approach to multibody systems formulated with novel natural coordinates is presented. First, a new selection method for natural coordinates is developed. The method introduces 12 coordinates to describe the position and orientation of a spatial object. On the basis of the proposed natural coordinates, rigid constraint conditions, the basic constraint elements and the initial conditions for the governing equations are derived. Considering the characteristics of the governing equations, the newly proposed generalized-α integration method is used and the corresponding algorithm flowchart is discussed. The objective function, the detailed analysis process of first-order direct sensitivity analysis and the related solving strategy are provided on the basis of the modeling system described above. Finally, in order to verify the validity and accuracy of the method presented, sensitivity analyses of a planar spinner-slider mechanism and a spatial crank-slider mechanism are conducted. The test results agree well with those of the finite difference method, and the maximum absolute deviation of the results is less than 3%. The proposed approach is not only convenient for automatic modeling, but also helpful in reducing the complexity of sensitivity analysis, which provides a practical and effective way to obtain sensitivities for the optimization of multibody systems.

  18. Design and analysis of a micromachined gyroscope

    NASA Astrophysics Data System (ADS)

    Zarei, Nilgoon; Leung, Albert; Jones, John D.

    2012-03-01

    This paper describes the simulation and design of a MEMS thermal gyroscope, and the optimization of the design for increased sensitivity, using the Comsol Multiphysics software package. Two different designs are described, and the effects of working-fluid properties are explored. A prototype of this device has been fabricated using techniques for rapid prototyping of MEMS transducers.

  19. Self-validated Variance-based Methods for Sensitivity Analysis of Model Outputs

    SciTech Connect

    Tong, C

    2009-04-20

    Global sensitivity analysis (GSA) has the advantage over local sensitivity analysis in that GSA does not require strong model assumptions such as linearity or monotonicity. As a result, GSA methods such as those based on variance decomposition are well-suited to multi-physics models, which are often plagued by large nonlinearities. However, as with many other sampling-based methods, inadequate sample size can badly pollute the result accuracies. A natural remedy is to adaptively increase the sample size until sufficient accuracy is obtained. This paper proposes an iterative methodology comprising mechanisms for guiding sample size selection and self-assessing result accuracy. The elegant features in the proposed methodology are the adaptive refinement strategies for stratified designs. We first apply this iterative methodology to the design of a self-validated first-order sensitivity analysis algorithm. We also extend this methodology to design a self-validated second-order sensitivity analysis algorithm based on refining replicated orthogonal array designs. Several numerical experiments are given to demonstrate the effectiveness of these methods.
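
    The self-validation idea, growing the sample until re-estimates stabilize, can be caricatured with a simple binned first-order estimator and a bootstrap check; the paper's actual refinement of stratified and replicated orthogonal array designs is more sophisticated than this sketch:

      import numpy as np

      def s1_binned(x, y, bins=16):
          # Crude given-data first-order index: Var[E(y|x)] / Var(y)
          edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
          b = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
          means = np.array([y[b == i].mean() for i in range(bins)])
          w = np.bincount(b, minlength=bins)
          return np.average((means - y.mean()) ** 2, weights=w) / y.var()

      def self_validated_s1(model, n_dims, tol=0.02, n=256, seed=0):
          # Double the sample size until bootstrap re-estimates agree within tol
          rng = np.random.default_rng(seed)
          while True:
              X = rng.random((n, n_dims))
              y = np.array([model(row) for row in X])
              s1 = np.array([s1_binned(X[:, j], y) for j in range(n_dims)])
              boot = rng.integers(0, n, n)
              s1b = np.array([s1_binned(X[boot, j], y[boot]) for j in range(n_dims)])
              if np.abs(s1 - s1b).max() < tol:
                  return s1, n
              n *= 2

      print(self_validated_s1(lambda x: x[0] + 0.5 * x[1], n_dims=2))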

  20. Theoretical design and screening of alkyne bridged triphenyl zinc porphyrins as sensitizer candidates for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xianxi; Chen, Qianqian; Sun, Huafei; Pan, Tingting; Hu, Guiqi; Ma, Ruimin; Dou, Jianmin; Li, Dacheng; Pan, Xu

    2014-01-01

    Alkyne-bridged porphyrins have proved to be very promising sensitizers for dye-sensitized solar cells (DSSCs), with the highest photo-to-electric conversion efficiencies achieved being 11.9% alone and 12.3% when co-sensitized with other sensitizers. Developing better porphyrin sensitizers with wider electronic absorption spectra, to further improve the efficiencies of the corresponding solar cells, is still of great significance for the application of DSSCs. A series of triphenyl zinc porphyrins (ZnTriPP), differing in the nature of a pendant acceptor group and in the conjugated bridge between the porphyrin nucleus and the acceptor unit, were modeled, and their electronic and spectral properties were calculated using density functional theory. Compared with each other and with previous experimental results for the compounds used in DSSCs, molecules with a relatively longer conjugated linker and a strong electron-withdrawing group, such as cyano adjacent to the carboxylic acid group, appear to provide wider electronic absorption spectra and higher photo-to-electric conversion efficiencies. The dye candidates ZnTriPPE, ZnTriPPM, ZnTriPPQ, ZnTriPPR and ZnTriPPS designed in the current work were found to promise photo-to-electric conversion efficiencies comparable to the record 11.9% of the alkyne-bridged porphyrin sensitizer YD2-o-C8 reported previously.

  1. Sensitivity analysis of dynamic biological systems with time-delays

    PubMed Central

    2010-01-01

    Background Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. Conclusions By comparison with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without programming background to do dynamic sensitivity analysis on complex
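
    Complex-step differentiation offers a lightweight alternative flavor of the same idea, derivatives without hand-coded or symbolic Jacobians; the sketch and the toy right-hand side below are illustrative and are not the authors' embedded automatic differentiation:

      import numpy as np

      def complex_step_jacobian(f, x, h=1e-20):
          # df_i/dx_j ~ Im f(x + i*h*e_j) / h, accurate to machine precision
          x = np.asarray(x, dtype=complex)
          m = np.atleast_1d(f(x)).size
          J = np.empty((m, x.size))
          for j in range(x.size):
              xp = x.copy()
              xp[j] += 1j * h
              J[:, j] = np.atleast_1d(f(xp)).imag / h
          return J

      # Illustrative right-hand side, not the cardiovascular or TNF-α model
      f = lambda s: np.array([s[0] * np.sin(s[1]), s[0] + s[1] ** 2])
      print(complex_step_jacobian(f, [1.0, 2.0]))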

  2. Sensitivity analysis for handling uncertainty in an economic evaluation.

    PubMed

    Limwattananon, Supon

    2014-05-01

    To meet updated international standards, this paper revises the previous Thai guidelines for conducting sensitivity analyses as part of the decision analysis model for health technology assessment. It recommends both deterministic and probabilistic sensitivity analyses to handle uncertainty in the model parameters, which is best represented graphically. Two new methodological issues are introduced: a threshold analysis of medicines' unit prices for fulfilling the National Lists of Essential Medicines' requirements, and the expected value of information for delaying decision-making in contexts where there are high levels of uncertainty. Further research is recommended where parameter uncertainty is significant and where the cost of conducting the research is not prohibitive. PMID:24964700
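
    The price-threshold analysis can be sketched as a bisection on the unit price at which the incremental cost-effectiveness ratio (ICER) meets a willingness-to-pay ceiling, assuming the ICER rises monotonically with price; the toy ICER and the numbers are invented:

      def threshold_price(icer, ceiling, lo=0.0, hi=1000.0, tol=1e-6):
          # Highest price whose ICER stays within the ceiling (monotone ICER)
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if icer(mid) <= ceiling:
                  lo = mid
              else:
                  hi = mid
          return lo

      # Toy ICER: incremental cost (fixed + daily drug cost) per QALY gained
      icer = lambda price: (5000.0 + 365.0 * price) / 0.25
      print(round(threshold_price(icer, ceiling=160000.0), 2))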

  3. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  4. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
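
    The variogram view of a response surface that underlies VARS can be illustrated in one dimension; this sketch computes an empirical variogram over a range of lags and is not the STAR-VARS algorithm:

      import numpy as np

      def empirical_variogram(f, n=512, max_lag=0.5, n_lags=20):
          # gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] on a regular grid in [0, 1]
          x = np.linspace(0.0, 1.0, n)
          y = np.array([f(v) for v in x])
          hs = np.linspace(max_lag / n_lags, max_lag, n_lags)
          steps = np.maximum(1, (hs * (n - 1)).astype(int))
          gamma = np.array([0.5 * np.mean((y[s:] - y[:-s]) ** 2) for s in steps])
          return hs, gamma

      hs, g = empirical_variogram(lambda v: np.sin(8.0 * v) + 0.3 * v)
      print(g[:5])   # small-lag behavior carries derivative-like information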

  5. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth-order partial differential equation is used to define a class of airplane configurations. Included in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has a wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed which dynamically changes the surface of the airplane configuration with changes in the input design variables. The software is made user friendly and is targeted towards the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters and aerodynamic sensitivity coefficients based on potential flow are obtained using the automatic differentiation precompiler ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  6. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better convergence behavior and lower computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  7. Bayesian sensitivity analysis of a nonlinear finite element model

    NASA Astrophysics Data System (ADS)

    Becker, W.; Oakley, J. E.; Surace, C.; Gili, P.; Rowson, J.; Worden, K.

    2012-10-01

    A major problem in uncertainty and sensitivity analysis is that the computational cost of propagating probabilistic uncertainty through large nonlinear models can be prohibitive when using conventional methods (such as Monte Carlo methods). A powerful solution to this problem is to use an emulator, which is a mathematical representation of the model built from a small set of model runs at specified points in input space. Such emulators are massively cheaper to run and can be used to mimic the "true" model, with the result that uncertainty analysis and sensitivity analysis can be performed for a greatly reduced computational cost. The work here investigates the use of an emulator known as a Gaussian process (GP), which is an advanced probabilistic form of regression. The GP is particularly suited to uncertainty analysis since it is able to emulate a wide class of models, and accounts for its own emulation uncertainty. Additionally, uncertainty and sensitivity measures can be estimated analytically, given certain assumptions. The GP approach is explained in detail here, and a case study of a finite element model of an airship is used to demonstrate the method. It is concluded that the GP is a very attractive way of performing uncertainty and sensitivity analysis on large models, provided that the dimensionality is not too high.
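
    A minimal emulator workflow of this kind, using scikit-learn's Gaussian process regressor and a cheap stand-in for the finite element model; the kernel choice, run budget and data are assumptions:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      rng = np.random.default_rng(0)
      X = rng.random((30, 2))                     # 30 runs of the "true" model
      y = np.sin(6.0 * X[:, 0]) + X[:, 1] ** 2    # stand-in model output

      gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                    normalize_y=True).fit(X, y)
      Xs = rng.random((10000, 2))                 # cheap emulator sweep
      mean, std = gp.predict(Xs, return_std=True)
      # mean feeds uncertainty/sensitivity estimates; std is the emulator's
      # own uncertainty, which should be checked before trusting the results
      print(mean.var(), std.mean())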

  8. The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers

    PubMed Central

    Cui, Yan; Wang, Pingshan

    2015-01-01

    Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development, where sample volumes are small and there is a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers that are based on power dividers (PDs) and quadrature hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency-tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission-line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring a glucose–water solution at a concentration level ten times lower than that used with some recent RF sensors, while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues such as system automation, model improvement at high frequencies, and interferometer scaling. PMID:26549891

  9. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
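
    Parsing "value +/- tolerance" fields and drawing one Monte Carlo realization of an input file takes a short regular-expression pass, as in this sketch; the field names are hypothetical and the actual codes' input-deck formats are not reproduced here:

      import random
      import re

      TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

      def blur_inputs(text, rng=None):
          # Replace every "value +/- tol" with a uniform draw from [v-t, v+t]
          rng = rng or random.Random()
          def draw(match):
              v, t = float(match.group(1)), float(match.group(2))
              return format(rng.uniform(v - t, v + t), "g")
          return TOL.sub(draw, text)

      deck = "wall_temp = 5.25 +/- 0.01\nemissivity = 0.90 +/- 0.05\n"
      print(blur_inputs(deck, random.Random(0)))   # one Monte Carlo realization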

  10. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented, and the basic reproduction number is analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The results show that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
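
    Normalized forward sensitivity indices of the basic reproduction number can be approximated numerically as below; the R0 expression and the parameter values are hypothetical placeholders, not the paper's five-compartment model:

      def normalized_sensitivity(r0, params, delta=1e-6):
          # (dR0/dp) * (p / R0), estimated by a small relative perturbation
          base = r0(**params)
          out = {}
          for name, p in params.items():
              bumped = dict(params, **{name: p * (1.0 + delta)})
              out[name] = (r0(**bumped) - base) / (base * delta)
          return out

      # Hypothetical R0 for a simple human-to-human transmission term
      def r0(recruitment, beta, recovery, mortality):
          return (recruitment * beta) / (mortality * (recovery + mortality))

      print(normalized_sensitivity(r0, {"recruitment": 0.5, "beta": 0.2,
                                        "recovery": 0.1, "mortality": 0.02}))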

  11. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based; that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA to a deeper level that lies in the topology of data. PMID:26368929

  12. Sensitivity analysis of a finite element model of orthogonal cutting

    NASA Astrophysics Data System (ADS)

    Brocail, J.; Watremez, M.; Dubar, L.

    2011-01-01

    This paper presents a two-dimensional finite element model of orthogonal cutting. The proposed model has been developed with Abaqus/Explicit software. An Arbitrary Lagrangian-Eulerian (ALE) formulation is used to predict chip formation, temperature, chip-tool contact length, chip thickness, and cutting forces. The numerical model is validated by comparing these process variables with the experimental and numerical results obtained by Filice et al. [1], and is found reliable enough for qualitative analysis of the input parameters of the cutting process and friction models. A sensitivity analysis is then conducted on the main input parameters (coefficients of the Johnson-Cook law and contact parameters) with the finite element model, using two levels for each factor. This analysis allowed the significant parameters to be identified, together with their margins.

  13. Multi-Scale Distributed Sensitivity Analysis of Radiative Transfer Model

    NASA Astrophysics Data System (ADS)

    Neelam, M.; Mohanty, B.

    2015-12-01

    Amidst nature's great variability and complexity, the Soil Moisture Active Passive (SMAP) mission aims to provide high-resolution soil moisture products for earth science applications. One of the biggest challenges still faced by the remote sensing community is the uncertainty, heterogeneity and scaling exhibited by soil, land cover, topography, precipitation, etc. At each spatial scale, there are different levels of uncertainty and heterogeneity. Also, each land surface variable derived from the various satellite missions comes with its own error margin. As such, soil moisture retrieval accuracy is affected as radiative model sensitivity changes with space, time, and scale. In this paper, we explore the distributed sensitivity analysis of a radiative model under different hydro-climates and spatial scales: 1.5 km, 3 km, 9 km and 39 km. This analysis is conducted in three different regions: Iowa, USA (SMEX02); Arizona, USA (SMEX04); and Winnipeg, Canada (SMAPVEX12). Distributed variables such as soil moisture, soil texture, vegetation and temperature are assumed to be uncertain and are conditionally simulated to obtain uncertainty maps, whereas roughness data, which are spatially limited, are assigned a probability distribution. The relative contribution of the uncertain model inputs to the aggregated model output is also studied, using various aggregation techniques. We use global sensitivity analysis (GSA) to conduct this analysis across spatio-temporal scales. Keywords: Soil moisture, radiative transfer, remote sensing, sensitivity, SMEX02, SMAPVEX12.

  14. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiar with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
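
    To make the variance-based machinery concrete, the sketch below estimates first-order Sobol indices for a hypothetical nonlinear measurement model with a pick-and-freeze Monte Carlo estimator; the model and the input distributions are illustrative assumptions, not taken from the article:

      # Sketch: first-order Sobol indices by pick-and-freeze Monte Carlo.
      import numpy as np

      def model(x):                          # hypothetical nonlinear measurand
          return x[:, 0] * np.exp(x[:, 1]) + x[:, 2] ** 2

      rng = np.random.default_rng(0)
      n, d = 100_000, 3
      a = rng.normal(1.0, 0.1, size=(n, d))  # two independent sample blocks
      b = rng.normal(1.0, 0.1, size=(n, d))
      ya, yb = model(a), model(b)

      for i in range(d):
          ab = b.copy()
          ab[:, i] = a[:, i]                 # freeze input i at the 'a' values
          s_i = np.mean(ya * (model(ab) - yb)) / ya.var()
          print(f"first-order index S_{i + 1} = {s_i:.3f}")

    In a linear model with independent inputs these indices reduce to the squared law-of-propagation terms divided by the output variance; the estimator only pays off when the model is non-linear.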

  15. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  16. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable, not-missing-at-random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  17. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.

  18. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered: static system; steady one-dimensional inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. This method has shown greater efficiency and stability, with equal or better accuracy, than other methods of sensitivity analysis. LSENS is written in FORTRAN 77, with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other
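
    As a toy illustration of the sensitivity computation such codes perform (a direct-method sketch in Python, not the LSENS implementation), the state y of a first-order reaction A -> B and its sensitivity s = dy/dk can be integrated together:

      # Sketch: forward sensitivities for dy/dt = -k*y; s = dy/dk obeys
      # ds/dt = -y - k*s, obtained by differentiating the rate law in k.
      from scipy.integrate import solve_ivp

      def rhs(t, ys, k):
          y, s = ys
          return [-k * y, -y - k * s]

      k = 2.0
      sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], args=(k,), rtol=1e-8)
      print("y(2) =", sol.y[0, -1], " dy/dk at t=2 =", sol.y[1, -1])
      # Analytic check: y = exp(-k*t), dy/dk = -t*exp(-k*t) = -2*exp(-4) here.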

  19. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research has limited reference value because the mathematical models used were relatively simple, changes of the load and of the initial piston displacement were ignored, and experimental verification was not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time-history curves of the step response. The maximum value of the displacement variation percentage and the sum of the absolute values of the displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their change rules are analyzed. Then the sensitivity

  20. Double Precision Differential/Algebraic Sensitivity Analysis Code

    1995-06-02

    DDASAC solves nonlinear initial-value problems involving stiff implicit systems of ordinary differential and algebraic equations. Purely algebraic nonlinear systems can also be solved, given an initial guess within the region of attraction of a solution. Options include automatic reconciliation of inconsistent initial states and derivatives, automatic initial step selection, direct concurrent parametric sensitivity analysis, and stopping at a prescribed value of any user-defined functional of the current solution vector. Local error control (in the max-norm or the 2-norm) is provided for the state vector and can include the sensitivities on request.

  1. A sensitivity analysis for subverting randomization in controlled trials.

    PubMed

    Marcus, S M

    2001-02-28

    In some randomized controlled trials, subjects with a better prognosis may be diverted into the treatment group. This subverting of randomization results in an unobserved non-compliance with the originally intended treatment assignment. Consequently, the estimate of treatment effect from these trials may be biased. This paper clarifies the determinants of the magnitude of the bias and gives a sensitivity analysis that associates the amount that randomization is subverted and the resulting bias in treatment effect estimation. The methods are illustrated with a randomized controlled trial that evaluates the efficacy of a culturally sensitive AIDS education video.

  2. Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening

    SciTech Connect

    Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.

    2014-12-01

    The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV 5 mA light ions superconducting RF linac. Phase-I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5 - 4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor to reach higher ion energy and beam power is related to the HWR sensitivity to the liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.

  3. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1989-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
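
    The trade-off between the first two techniques can be reproduced in a few lines; the response function below is a hypothetical stand-in for a transient response quantity, not the paper's beam model:

      # Sketch: forward vs central difference sensitivity estimates, showing
      # the step-size trade-off between truncation and round-off error.
      import math

      def g(b):                              # hypothetical response quantity
          return math.exp(-0.1 * b) * math.sin(2.0 * b)

      def dg_exact(b):                       # analytic derivative for reference
          return math.exp(-0.1 * b) * (2.0 * math.cos(2.0 * b) - 0.1 * math.sin(2.0 * b))

      b = 1.5
      for h in (1e-2, 1e-5, 1e-10):
          fwd = (g(b + h) - g(b)) / h
          ctr = (g(b + h) - g(b - h)) / (2 * h)
          print(f"h={h:.0e}  forward error={abs(fwd - dg_exact(b)):.2e}  "
                f"central error={abs(ctr - dg_exact(b)):.2e}")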

  4. Design Sensitivities of the Superconducting Parallel-Bar Cavity

    SciTech Connect

    De Silva, Subashini U.; Delayen, Jean D.

    2010-09-01

    The superconducting parallel-bar cavity has properties that make it attractive as a deflecting or crabbing rf structure. For example, it is under consideration as an rf separator for the Jefferson Lab 12 GeV upgrade and as a crabbing structure for a possible LHC luminosity upgrade. In order to maintain the purity of the deflecting mode and avoid mixing with the nearby accelerating mode caused by geometrical imperfections, a minimum frequency separation is needed, which depends on the expected deviations from perfect symmetry. We have performed an extensive analysis of the impact of several geometrical imperfections on the properties of parallel-bar cavities and their effects on the beam, and present the results in this paper.

  5. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center's Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. With PSP, each pixel recorded by the camera becomes an optical pressure

  6. Improved PID controller design for unstable time delay processes based on direct synthesis method and maximum sensitivity

    NASA Astrophysics Data System (ADS)

    Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.

    2015-06-01

    In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of open-loop unstable processes with time delay, based on the direct synthesis method. The performance of the designed controllers is studied on various unstable processes. Set-point weighting is considered to reduce undesirable overshoot. The proposed scheme has only one tuning parameter, and systematic guidelines are provided for its selection based on the peak value of the sensitivity function (Ms). Robustness analysis is carried out based on the sensitivity and complementary sensitivity functions. Nominal and robust control performance is achieved with the proposed method, and improved closed-loop performance is obtained compared to recently reported methods in the literature.
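
    For readers unfamiliar with the Ms measure, the sketch below evaluates the peak of the sensitivity function for an unstable first-order-plus-dead-time plant under an ideal PID; the plant and the gains are illustrative assumptions, not the paper's tuning rule:

      # Sketch: peak sensitivity Ms = max over w of |1 / (1 + G(jw) C(jw))|.
      import numpy as np

      def s_mag(w, kp=3.0, ki=1.0, kd=0.5, K=1.0, tau=1.0, theta=0.2):
          s = 1j * w
          G = K * np.exp(-theta * s) / (tau * s - 1.0)   # unstable FOPDT plant
          C = kp + ki / s + kd * s                       # ideal PID controller
          return np.abs(1.0 / (1.0 + G * C))

      w = np.logspace(-2, 2, 2000)
      print(f"Ms = {s_mag(w).max():.2f}")   # designs typically aim for 1.4-2.0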

  7. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.

  8. Design of Sensitivity Function of Multi-Rate VCM Control System

    NASA Astrophysics Data System (ADS)

    Kisaka, Masashi

    A method for designing the sensitivity function of a multiple-input single-output servo system is proposed. Unlike linear quadratic (LQ) or H∞ design, the method requires neither a weight nor a weight function. First, a controller candidate is derived by taking into consideration the robustness specification of the plant system. Then, the sensitivity function is derived from the gain specification of the sensitivity function. As the design of a multi-rate controller can be shown to be equivalent to that of a multiple-input single-output system, the method is employed to design a multi-rate VCM position control system. The multi-rate controller is designed such that the desired robustness is achieved at frequencies higher than the Nyquist frequency.

  9. On the design of wave digital filters with low sensitivity properties.

    NASA Technical Reports Server (NTRS)

    Renner, K.; Gupta, S. C.

    1973-01-01

    The wave digital filter patterned after doubly terminated maximum available power (MAP) networks by means of the Richard's transformation has been shown to have low-coefficient-sensitivity properties. This paper examines the exact nature of the relationship between the wave-digital-filter structure and the MAP networks and how the sensitivity property arises, which permits implementation of the digital structure with a lower coefficient word length than that possible with the conventional structures. The proper design procedure is specified and the nature of the unique complementary outputs is discussed. Finally, an example is considered which illustrates the design, the conversion techniques, and the low sensitivity properties.

  10. Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania

    PubMed Central

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2016-01-01

    This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next-generation method. The biological sense of the threshold condition is investigated and discussed in detail. A sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with the control strategies, and it is shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of infectious humans, exposed humans and the vector population becoming infected. Numerical simulations are carried out with a fourth-order Runge-Kutta procedure. PMID:27505634
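
    The next-generation computation itself is compact: R0 is the spectral radius of F V^{-1}, where F holds the new-infection rates and V the transition rates, both linearized at the disease-free equilibrium. A two-compartment sketch with hypothetical matrices (not the paper's model):

      # Sketch: R0 as the spectral radius of the next-generation matrix.
      import numpy as np

      F = np.array([[0.0, 0.4],    # new infections: vectors -> humans
                    [0.3, 0.0]])   # new infections: humans -> vectors
      V = np.array([[0.2, 0.0],    # exit rate from the infectious human class
                    [0.0, 0.5]])   # exit rate from the infectious vector class

      R0 = max(abs(np.linalg.eigvals(F @ np.linalg.inv(V))))
      print(f"R0 = {R0:.3f}")      # R0 > 1 indicates epidemic take-off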

  11. Ducted propeller design and analysis

    SciTech Connect

    Weir, R.J.

    1987-10-01

    The theory and implementation of the design of a ducted propeller blade are presented and discussed. Straightener (anti-torque) vane design is also discussed. Comparisons are made to an existing propeller design and the results and performance of two example propeller blades are given. The inflow velocity at the propeller plane is given special attention and two dimensionless parameters independent of RPM are discussed. Errors in off-design performance are also investigated. 11 refs., 26 figs.

  12. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES Beta

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
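
    The case-deletion idea translates directly into code. The sketch below uses scikit-learn's linear discriminant analysis as a stand-in for the discriminant procedure and records how every point's posterior probability moves when one observation is omitted; the data and the flagging threshold are illustrative:

      # Sketch: influence of case deletion on posterior class probabilities.
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
      y = np.array([0] * 30 + [1] * 30)

      full = LinearDiscriminantAnalysis().fit(X, y).predict_proba(X)[:, 1]
      for i in range(len(y)):                    # omit each point in turn
          mask = np.arange(len(y)) != i
          refit = LinearDiscriminantAnalysis().fit(X[mask], y[mask])
          shift = np.abs(refit.predict_proba(X)[:, 1] - full)
          if shift.max() > 0.05:                 # flag influential omissions
              print(f"omitting point {i} moves a posterior by {shift.max():.2f}")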

  13. Graphical methods for the sensitivity analysis in discriminant analysis

    SciTech Connect

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  14. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
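
    The failure mode being addressed is easy to reproduce (the sketch shows the symptom, not the LSS remedy): central-difference estimates of the sensitivity of the long-time-averaged Lorenz z coordinate do not settle as the perturbation shrinks, because the finite-time average is itself noisy:

      # Sketch: naive finite-difference sensitivity of a chaotic time average.
      import numpy as np
      from scipy.integrate import solve_ivp

      def mean_z(rho, T=200.0):
          f = lambda t, u: [10 * (u[1] - u[0]),
                            u[0] * (rho - u[2]) - u[1],
                            u[0] * u[1] - 8.0 / 3.0 * u[2]]
          sol = solve_ivp(f, (0, T), [1.0, 1.0, 1.0], dense_output=True, rtol=1e-8)
          t = np.linspace(T / 2, T, 20000)       # discard transient, then average
          return sol.sol(t)[2].mean()

      for eps in (1.0, 0.1, 0.01):
          grad = (mean_z(28 + eps) - mean_z(28 - eps)) / (2 * eps)
          print(f"eps={eps:<5} d<z>/drho = {grad:.2f}")  # estimates fail to converge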

  15. Sensitivity analysis of transport modeling in a fractured gneiss aquifer

    NASA Astrophysics Data System (ADS)

    Abdelaziz, Ramadan; Merkel, Broder J.

    2015-03-01

    Modeling solute transport in fractured aquifers is still challenging for scientists and engineers. Tracer tests are a powerful tool for investigating fractured aquifers with complex geometry and variable heterogeneity. This research focuses on obtaining hydraulic and transport parameters from an experimental site with several wells, where a tracer test with NaCl was performed under natural gradient conditions. The observed tracer concentrations were used to calibrate a conservative solute transport model by inverse modeling based on UCODE2013, MODFLOW, and MT3DMS. In addition, several statistics were employed for the sensitivity analysis. The results indicate that hydraulic conductivity and immobile porosity play an important role in the late arrival of the breakthrough curve, and that the calibrated model fits the observed data set well.

  16. Control of a mechanical aeration process via topological sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Abdelwahed, M.; Hassine, M.; Masmoudi, M.

    2009-06-01

    The topological sensitivity analysis method gives the variation of a criterion with respect to the creation of a small hole in the domain. In this paper, we use this method to control the mechanical aeration process in eutrophic lakes. A simplified model based on incompressible Navier-Stokes equations is used, only considering the liquid phase, which is the dominant one. The injected air is taken into account through local boundary conditions for the velocity, on the injector holes. A 3D numerical simulation of the aeration effects is proposed using a mixed finite element method. In order to generate the best motion in the fluid for aeration purposes, the optimization of the injector location is considered. The main idea is to carry out topological sensitivity analysis with respect to the insertion of an injector. Finally, a topological optimization algorithm is proposed and some numerical results, showing the efficiency of our approach, are presented.

  17. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  18. Objective analysis of the ARM IOP data: method and sensitivity

    SciTech Connect

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

    Motivated by the need for accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes, and even more from different implementations of a particular scheme. The paper then describes a hybrid approach that combines the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure, for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement (ARM) Program's July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  19. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.

  20. Wing-Design And -Analysis Code

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.; Carlson, Harry W.

    1990-01-01

    WINGDES2 computer program provides wing-design algorithm based on modified linear theory taking into account effects of attainable leading-edge thrust. Features improved numerical accuracy and additional capabilities. Provides analysis as well as design capability and applicable to both subsonic and supersonic flow. Replaces earlier wing-design code designated WINGDES (see LAR-13315). Written in FORTRAN V.

  1. Considerations in the design and sensitivity optimization of the micro tactile sensor.

    PubMed

    Murayama, Yoshinobu; Omata, Sadao

    2005-03-01

    Although miniaturization has been considered the only technology with which to increase the sensitivity of tactile sensors, we recently developed the micro tactile sensor (MTS), which performs with high sensitivity without microfabrication. In this study, we examined the design and sensitivity optimization of the MTS using theory based upon Mason's equivalent circuit. The touch probe, which is attached to the lead zirconate titanate (PZT) element, was expressed as a purely inductive circuit component. The resonance frequency was calculated as a function of the length of the touch probe, and the sensitivity was predicted to depend on this length. Furthermore, many MTS variants were fabricated with different touch probe lengths, and the actual sensitivity was measured as the phase shift between unloaded and loaded conditions. From consideration of the theory and the experimental data, a sensitivity coefficient was proposed and found to be useful.

  2. Sensitivity analysis in multiple imputation in effectiveness studies of psychotherapy

    PubMed Central

    Crameri, Aureliano; von Wyl, Agnes; Koemeda, Margit; Schulthess, Peter; Tschuschke, Volker

    2015-01-01

    The importance of preventing and treating incomplete data in effectiveness studies is nowadays emphasized. However, most of the publications focus on randomized clinical trials (RCT). One flexible technique for statistical inference with missing data is multiple imputation (MI). Since methods such as MI rely on the assumption of missing data being at random (MAR), a sensitivity analysis for testing the robustness against departures from this assumption is required. In this paper we present a sensitivity analysis technique based on posterior predictive checking, which takes into consideration the concept of clinical significance used in the evaluation of intra-individual changes. We demonstrate the possibilities this technique can offer with the example of irregular longitudinal data collected with the Outcome Questionnaire-45 (OQ-45) and the Helping Alliance Questionnaire (HAQ) in a sample of 260 outpatients. The sensitivity analysis can be used to (1) quantify the degree of bias introduced by missing not at random data (MNAR) in a worst reasonable case scenario, (2) compare the performance of different analysis methods for dealing with missing data, or (3) detect the influence of possible violations to the model assumptions (e.g., lack of normality). Moreover, our analysis showed that ratings from the patient's and therapist's version of the HAQ could significantly improve the predictive value of the routine outcome monitoring based on the OQ-45. Since analysis dropouts always occur, repeated measurements with the OQ-45 and the HAQ analyzed with MI are useful to improve the accuracy of outcome estimates in quality assurance assessments and non-randomized effectiveness studies in the field of outpatient psychotherapy. PMID:26283989

  3. Dimensions of Culturally Sensitive Factors in the Design and Development of Learning Objects

    ERIC Educational Resources Information Center

    Qi, Mei; Boyle, Tom

    2010-01-01

    Open educational resources (OERs) are designed to be globally reusable. Yet comparatively little attention has been given to the cultural issues. This paper addresses the issue of culturally sensitive factors that may influence the design of reusable learning objects. These influences are often subtle and hard to manage. The paper proposes a…

  4. Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.

    PubMed

    Gnambs, Timo; Kaspar, Kai

    2015-12-01

    In surveys, individuals tend to misreport behaviors that are in contrast to prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; particularly, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N = 125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlighted the advantages of computerized survey modes for the assessment of sensitive topics.

  5. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and to infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
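
    A toy example conveys the post-optimality estimate (illustrative, not the paper's vehicle problems). For minimizing x^2 + y^2 subject to x + y >= b, the multiplier of the active constraint works out analytically to lambda = b, and it predicts the shift of the optimal objective under a perturbation of b to first order:

      # Sketch: first-order post-optimality prediction vs full reoptimization.
      from scipy.optimize import minimize

      def f(v):
          return v[0] ** 2 + v[1] ** 2

      def solve(b):
          con = {"type": "ineq", "fun": lambda v: v[0] + v[1] - b}
          return minimize(f, x0=[1.0, 0.0], constraints=[con]).fun

      b, db = 2.0, 0.1
      lam = b                              # analytic multiplier of x + y >= b
      predicted = solve(b) + lam * db      # df*/db = lambda, no reoptimization
      print(f"predicted f* = {predicted:.4f}, reoptimized f* = {solve(b + db):.4f}")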

  6. Sensitivity analysis of fine sediment models using heterogeneous data

    NASA Astrophysics Data System (ADS)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transport and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transport in time and space is therefore important for designing interventions and making management decisions. This research concerns fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through construction, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration; studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  7. Floquet theoretic approach to sensitivity analysis for periodic systems

    NASA Astrophysics Data System (ADS)

    Larter, Raima

    1986-12-01

    The mathematical relationship between sensitivity analysis and Floquet theory is explored. The former technique has been used in recent years to study the parameter sensitivity of numerical models in chemical kinetics, scattering theory, and other problems in chemistry. In the present work, we derive analytical expressions for the sensitivity coefficients for models of oscillating chemical reactions. These reactions have been the subject of increased interest in recent years because of their relationship to fundamental biological problems, such as development, and because of their similarity to related phenomena in fields such as hydrodynamics, plasma physics, meteorology, geology, etc. The analytical form of the sensitivity coefficients derived here can be used to determine the explicit time dependence of the initial transient and any secular term. The method is applicable to unstable as well as stable oscillations and is illustrated by application to the Brusselator and to a three variable model due to Hassard, Kazarinoff, and Wan. It is shown that our results reduce to those previously derived by Edelson, Rabitz, and others in certain limits. The range of validity of these formerly derived expressions is thus elucidated.

  8. Species sensitivity analysis of heavy metals to freshwater organisms.

    PubMed

    Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou

    2015-10-01

    Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distribution (SSD) curves for vertebrates and invertebrates were constructed separately with a log-logistic model, and comprehensive comparisons of the sensitivities of species at different trophic levels to the six typical heavy metals were performed. The results indicated that invertebrate taxa exhibited higher sensitivity to each heavy metal than vertebrates. With respect to the same taxa, Cu had the most adverse effect on vertebrates, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. The toxicities of Pb to vertebrates and fish were complicated, as the SSD curves of Pb intersected those of other heavy metals, while the SSD curve of Pb constructed from all species no longer crossed the others. The hazardous concentration for 5% of species (HC5) was derived for each metal to determine the concentration protecting 95% of species. The HC5 values of the six heavy metals were in descending order Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in the opposite order. Moreover, potentially affected fractions were calculated to assess the ecological risks of the heavy metals at given concentrations. Evaluation of the sensitivities of species at various trophic levels and toxicity analysis of heavy metals are necessary prior to the derivation of water quality criteria and further environmental protection.
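
    Concretely, fitting an SSD and reading off HC5 takes only a few lines; a log-logistic SSD is a logistic distribution fitted to log-transformed concentrations, and the LC50 values below are made up for illustration:

      # Sketch: log-logistic species sensitivity distribution, HC5 and PAF.
      import numpy as np
      from scipy import stats

      lc50 = np.array([12., 30., 45., 80., 150., 220., 400., 900.])  # ug/L, hypothetical
      loc, scale = stats.logistic.fit(np.log10(lc50))

      hc5 = 10 ** stats.logistic.ppf(0.05, loc, scale)
      print(f"HC5 = {hc5:.1f} ug/L")        # concentration protecting 95% of species

      paf = stats.logistic.cdf(np.log10(50.0), loc, scale)
      print(f"PAF at 50 ug/L = {100 * paf:.0f}% of species")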

  9. A novel optimal sensitivity design scheme for yarn tension sensor using surface acoustic wave device.

    PubMed

    Lei, Bingbing; Lu, Wenke; Zhu, Changchun; Liu, Qinghong; Zhang, Haoxin

    2014-08-01

    In this paper, we propose a novel optimal sensitivity design scheme for a yarn tension sensor using a surface acoustic wave (SAW) device. In order to obtain the best sensitivity, a regression model between the size of the SAW yarn tension sensor substrate and the sensitivity of the sensor was established using the least squares method, and the model was validated. By analyzing the correspondence between the monotonicity of the regression function and the sign of its partial derivatives, the effect of the substrate size on the sensitivity of the sensor was investigated. Based on the regression model, a linear programming model was established to obtain the optimal sensitivity of the sensor. The linear programming result shows that, within a fixed interval of substrate sizes, the maximum sensitivity is achieved when the substrate length is 15 mm and its width is 3 mm. An experiment with a SAW yarn tension sensor about 15 mm long and 3 mm wide is presented. The experimental results show that a maximum sensitivity of 1982.39 Hz/g was achieved, which confirms that the optimal sensitivity design scheme is useful and effective.
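
    A sketch of the two-step scheme under stated assumptions (hypothetical calibration data and a purely linear fitted model, simpler than the regression form used in the paper):

      # Sketch: least-squares fit of sensitivity vs substrate size, then a
      # linear program to pick the size maximizing the fitted sensitivity.
      import numpy as np
      from scipy.optimize import linprog

      sizes = np.array([[10, 3], [10, 5], [12, 4], [14, 3], [15, 5], [15, 3]])
      sens = np.array([1500., 1350., 1600., 1820., 1790., 1975.])  # Hz/g, made up

      A = np.column_stack([np.ones(len(sizes)), sizes])
      beta = np.linalg.lstsq(A, sens, rcond=None)[0]       # [b0, b_length, b_width]

      res = linprog(c=-beta[1:], bounds=[(10, 15), (3, 5)])  # maximize => negate
      print("optimal (L, W) =", res.x,
            "predicted sensitivity =", beta[0] + beta[1:] @ res.x)

    With a linear fit the optimum necessarily sits on the boundary of the size box, which is consistent with the reported optimum at the interval endpoints.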

  10. Planar Inlet Design and Analysis Process (PINDAP)

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.

  11. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.

    PubMed

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-21

    Multi-resolution multi-sensitivity (MRMS) collimator offering adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based on, for the first time, parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain similar point response characteristics to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at a half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq(-1), while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq(-1). Further comparisons of the states on image qualities are conducted through Monte Carlo simulation of a hot-spot phantom which contains five hot spots with varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs by using the larger sensitivity states, and the smaller spots prefer the higher resolution states. In conclusion, the proposed idea can be an effective approach for MRMS design for parallel-hole SPECT collimators. PMID:27359049

  12. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators

    NASA Astrophysics Data System (ADS)

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-01

    Multi-resolution multi-sensitivity (MRMS) collimator offering adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based on, for the first time, parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator’s inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain similar point response characteristics to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at a half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq(-1), while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq(-1). Further comparisons of the states on image qualities are conducted through Monte Carlo simulation of a hot-spot phantom which contains five hot spots with varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs by using the larger sensitivity states, and the smaller spots prefer the higher resolution states. In conclusion, the proposed idea can be an effective approach for MRMS design for parallel-hole SPECT collimators.

  13. Sensitivity-analysis techniques: self-teaching curriculum

    SciTech Connect

    Iman, R.L.; Conover, W.J.

    1982-06-01

    This self-teaching curriculum on sensitivity analysis techniques consists of three parts: (1) use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) use of the Stepwise Regression Program (Iman et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
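
    The core of the first module, Latin hypercube sampling, can be written in a few lines (a generic illustration, not the SAND79-1473 program itself): each parameter range is divided into n equal-probability strata and each stratum is sampled exactly once.

      # Sketch: a minimal Latin hypercube sampler on the unit hypercube.
      import numpy as np

      def latin_hypercube(n, d, rng):
          u = (rng.random((n, d)) + np.arange(n)[:, None]) / n  # one point per stratum
          for j in range(d):
              u[:, j] = rng.permutation(u[:, j])                # decouple the columns
          return u

      rng = np.random.default_rng(7)
      X = latin_hypercube(8, 2, rng)
      print(np.sort(X[:, 0]))  # exactly one value per interval [k/8, (k+1)/8)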

  15. Analysis of frequency characteristics and sensitivity of compliant mechanisms

    NASA Astrophysics Data System (ADS)

    Liu, Shanzeng; Dai, Jiansheng; Li, Aimin; Sun, Zhaopeng; Feng, Shizhe; Cao, Guohua

    2016-07-01

    Based on a modified pseudo-rigid-body model, the frequency characteristics and sensitivity of the large-deformation compliant mechanism are studied. Firstly, the pseudo-rigid-body model under the static and kinetic conditions is modified to enable the modified pseudo-rigid-body model to be more suitable for the dynamic analysis of the compliant mechanism. Subsequently, based on the modified pseudo-rigid-body model, the dynamic equations of the ordinary compliant four-bar mechanism are established using the analytical mechanics. Finally, in combination with the finite element analysis software ANSYS, the frequency characteristics and sensitivity of the compliant mechanism are analyzed by taking the compliant parallel-guiding mechanism and the compliant bistable mechanism as examples. From the simulation results, the dynamic characteristics of compliant mechanism are relatively sensitive to the structure size, section parameter, and characteristic parameter of material on mechanisms. The results could provide great theoretical significance and application values for the structural optimization of compliant mechanisms, the improvement of their dynamic properties and the expansion of their application range.

  16. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
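    As a hedged analogue of this workflow, the sketch below integrates the classic Robertson stiff kinetics problem with SciPy's BDF method (playing the role LSODE plays in LSENS) and estimates the sensitivity of the final state to one rate coefficient by central finite differences; LSENS itself uses the decoupled direct method rather than finite differences.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k1, k2, k3):
    """Robertson problem: a standard stiff chemical kinetics test case."""
    a, b, c = y
    return [-k1 * a + k3 * b * c,
            k1 * a - k2 * b**2 - k3 * b * c,
            k2 * b**2]

def final_state(k1, k2=3e7, k3=1e4):
    sol = solve_ivp(rhs, (0.0, 100.0), [1.0, 0.0, 0.0], method="BDF",
                    args=(k1, k2, k3), rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

k1, dk = 0.04, 0.04 * 1e-4
sens = (final_state(k1 + dk) - final_state(k1 - dk)) / (2.0 * dk)
print(sens)   # d(concentrations)/d(k1) at t = 100
```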

  17. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  18. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    SciTech Connect

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  19. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  20. High derivatives for fast sensitivity analysis in linear magnetodynamics

    SciTech Connect

    Petin, P.; Coulomb, J.L.; Conraux, P.

    1997-03-01

    In this article, the authors present a method of sensitivity analysis using high-order derivatives and Taylor expansion. The principle is to find a polynomial approximation of the finite element solution with respect to the sensitivity parameters. While presenting the method, they explain why it is applicable only to certain parameters. They applied it to a magnetodynamic problem simple enough for the analytical solution to be found with a formal calculus tool. They then present the implementation and the good results obtained with the polynomial, first by comparing the derivatives themselves, then by comparing the approximate solution with the theoretical one. After this validation, the authors present results on a real 2D application and underline the possibilities of reuse in other fields of physics.

  1. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C.; Edwards, T.; Wilmarth, B.

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the predictions of the model over intervals of values for the influential factors affecting the model was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. Thus, these factor settings are considered as those that yield the ''worst-case'' scenario for TPB degradation rate for Tank 50 aggregation, and, as such they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.

  2. Molecular-beacon-based array for sensitive DNA analysis.

    PubMed

    Yao, Gang; Tan, Weihong

    2004-08-15

    Molecular beacon (MB) DNA probes provide a new way for sensitive label-free DNA/protein detection in homogeneous solution and biosensor development. However, a relatively low fluorescence enhancement after the hybridization of the surface-immobilized MB hinders its effective biotechnological applications. We have designed new molecular beacon probes to enable a larger separation between the surface and the surface-bound MBs. Using these MB probes, we have developed a DNA array on avidin-coated cover slips and have improved analytical sensitivity. A home-built wide-field optical setup was used for imaging the array. Our results show that linker length, pH, and ionic strength have obvious effects on the performance of the surface-bound MBs. The fluorescence enhancement of the new MBs after hybridization has been increased from 2 to 5.5. The MB-based DNA array could be used for DNA detection with high sensitivity, enabling simultaneous multiple-target bioanalysis in a variety of biotechnological applications.

  3. Ion thruster design and analysis

    NASA Technical Reports Server (NTRS)

    Kami, S.; Schnelker, D. E.

    1976-01-01

    Questions concerning the mechanical design of a thruster are considered, taking into account differences in the design of an 8-cm and a 30-cm model. The components of a thruster include the thruster shell assembly, the ion extraction electrode assembly, the cathode isolator vaporizer assembly, the neutralizer isolator vaporizer assembly, ground screen and mask, and the main isolator vaporizer assembly. Attention is given to the materials used in thruster fabrication, the advanced manufacturing methods used, details of thruster performance, an evaluation of thruster life, structural and thermal design considerations, and questions of reliability and quality assurance.

  4. Reliability sensitivity-based correlation coefficient calculation in structural reliability analysis

    NASA Astrophysics Data System (ADS)

    Yang, Zhou; Zhang, Yimin; Zhang, Xufang; Huang, Xianzhen

    2012-05-01

    The correlation coefficients of random variables of mechanical structures are generally chosen by experience or even ignored, which cannot actually reflect the effects of parameter uncertainties on reliability. To discuss the selection of the correlation coefficients from the reliability-based sensitivity point of view, the theoretical principle of the problem is established based on the results of the reliability sensitivity, and the criterion of correlation among random variables is given. The values of the correlation coefficients are obtained according to the proposed principle and the reliability sensitivity problem is discussed. Numerical studies have shown the following results: (1) If the sensitivity value of the correlation coefficient ρ is sufficiently small (on the order of 0.00001), the correlation can be ignored, which simplifies the procedure without introducing additional error. (2) If the difference between ρ_s, the value most sensitive to the reliability, and ρ_R, the value giving the smallest reliability, is less than 0.001, ρ_s is suggested for modeling the dependency of the random variables; this ensures the robustness of the system without loss of the safety requirement. (3) If |E_abs| > 0.001 and also |E_rel| > 0.001, ρ_R should be employed to quantify the correlation among random variables in order to ensure the accuracy of the reliability analysis. Application of the proposed approach can provide a practical routine for mechanical design and manufacturing to study the reliability and reliability-based sensitivity of basic design variables in mechanical reliability analysis and design.
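    The three-branch criterion can be stated compactly in code. The sketch below is our reading of the abstract, with hypothetical variable names and the thresholds quoted there; it is not the authors' implementation.

```python
def choose_correlation(rho_s, rho_r, sens_rho, e_abs, e_rel):
    """Select a correlation coefficient per the criterion above.
    rho_s: value most sensitive to reliability; rho_r: value giving
    the smallest reliability; sens_rho: sensitivity of rho itself."""
    if abs(sens_rho) < 1e-5:            # (1) negligible sensitivity
        return 0.0                      # ignore the correlation
    if abs(rho_s - rho_r) < 1e-3:       # (2) candidates nearly agree
        return rho_s                    # robust choice
    if abs(e_abs) > 1e-3 and abs(e_rel) > 1e-3:
        return rho_r                    # (3) conservative choice
    return rho_s

print(choose_correlation(0.30, 0.25, 2e-3, 5e-3, 4e-3))  # -> 0.25
```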

  5. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    SciTech Connect

    M. Wasiolek

    2004-10-15

    This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  6. Design of Os(II)-based sensitizers for dye-sensitized solar cells: influence of heterocyclic ancillaries.

    PubMed

    Hu, Fa-Chun; Wang, Sheng-Wei; Planells, Miquel; Robertson, Neil; Padhy, Harihara; Du, Bo-Sian; Chi, Yun; Yang, Po-Fan; Lin, Hao-Wu; Lee, Gene-Hsiang; Chou, Pi-Tai

    2013-08-01

    A series of Os(II) sensitizers (TFOS-x, in which x = 1, 2, or 3) with a single 4,4'-dicarboxy-2,2'-dipyridine (H2dcbpy) anchor and two chelating 2-pyridyl (or 2-pyrimidyl) triazolate ancillaries was successfully prepared. Single-crystal X-ray structural analysis showed that the core geometry of the Os(II)-based sensitizers consisted of one H2dcbpy unit and two eclipsed cis-triazolate fragments; this was notably different from the Ru(II)-based counterparts, in which the azolate (both pyrazolate and triazolate) fragments are located at the mutual trans-positions. The basic properties were extensively probed by using spectroscopic and electrochemical methods as well as time-dependent density functional theory (TD-DFT) calculations. Fabrication of dye-sensitized solar cells (DSCs) was then attempted by using the I(-)/I3(-)-based electrolyte solution. One such DSC device, which utilized TFOS-2 as the sensitizer, showed promising performance characteristics with a short-circuit current density (JSC) of 15.7 mA cm(-2), an open-circuit voltage of 610 mV, a fill factor of 0.63, and a power conversion efficiency of 6.08% under AM 1.5G simulated one-sun irradiation. Importantly, adequate incident photon-to-current conversion efficiency performances were observed for all TFOS derivatives over the wide spectral region of 450 to 950 nm, showing a panchromatic light harvesting capability that extended into the near-infrared regime. Our results underlined a feasible strategy for maximizing JSC and increasing the efficiency of DSCs. PMID:23843354

  7. Design, validation, and absolute sensitivity of a novel test for the molecular detection of avian pneumovirus.

    PubMed

    Cecchinato, Mattia; Catelli, Elena; Savage, Carol E; Jones, Richard C; Naylor, Clive J

    2004-11-01

    This study describes attempts to increase and measure sensitivity of molecular tests to detect avian pneumovirus (APV). Polymerase chain reaction (PCR) diagnostic tests were designed for the detection of nucleic acid from an A-type APV genome. The objective was selection of PCR oligonucleotide combinations, which would provide the greatest test sensitivity and thereby enable optimal detection when used for later testing of field materials. Relative and absolute test sensitivities could be determined because of laboratory access to known quantities of purified full-length DNA copies of APV genome derived from the same A-type virus. Four new nested PCR tests were designed in the fusion (F) protein (2 tests), small hydrophobic (SH) protein (1 test), and nucleocapsid (N) protein (1 test) genes and compared with an established test in the attachment (G) protein gene. Known amounts of full-length APV genome were serially diluted 10-fold, and these dilutions were used as templates for the different tests. Sensitivities were found to differ between the tests, the most sensitive being the established G test, which proved able to detect 6,000 copies of the G gene. The G test contained predominantly pyrimidine residues at its 3' termini, and because of this, oligonucleotides for the most sensitive F test were modified to incorporate the same residue types at their 3' termini. This was found to increase sensitivity, so that after full 3' pyrimidine substitutions, the F test became able to detect 600 copies of the F gene.

  8. Sensitivity Analysis of OECD Benchmark Tests in BISON

    SciTech Connect

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
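    For illustration, the correlation part of such a study reduces to computing Pearson and Spearman coefficients between each sampled input and a response. The sketch below uses the same sample and input counts as the study but a made-up response; the real responses came from BISON via Dakota.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
X = rng.random((300, 17))                         # 300 samples, 17 inputs
y = X[:, 0]**2 + 0.1 * rng.standard_normal(300)   # toy response

for i in range(X.shape[1]):
    r, _ = pearsonr(X[:, i], y)
    rho, _ = spearmanr(X[:, i], y)
    print(f"input {i:2d}: Pearson {r:+.2f}, Spearman {rho:+.2f}")
```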

  9. Distributed Design and Analysis of Computer Experiments

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation
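    As a hedged illustration of the response-surface step, an ordinary least-squares polynomial fit can stand in for the MARS spline surface that DDACE wraps; the data and basis below are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 50)                       # e.g. a temperature input
y = np.sin(2.0 * np.pi * x) + 0.1 * rng.standard_normal(50)

A = np.vander(x, 4)                                 # cubic polynomial basis
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rms = np.sqrt(np.mean((y - A @ coef) ** 2))
print("RMS residual of the fitted surface:", round(rms, 3))
```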

  10. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  11. Path-sensitive analysis for reducing rollback overheads

    DOEpatents

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.
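    One plausible toy reading of the per-block bookkeeping, not the patented algorithm: take a block's restore set to be the registers the block writes, since those are the values a rollback through that block would have to restore.

```python
def restore_sets(blocks):
    """blocks: name -> list of instructions (dest_reg, src_regs).
    Returns name -> set of registers written in that block."""
    return {name: {dest for dest, _srcs in instrs}
            for name, instrs in blocks.items()}

blocks = {"B0": [("r1", ["r2"]), ("r3", ["r1", "r4"])],
          "B1": [("r2", ["r3"])]}
print(restore_sets(blocks))   # {'B0': {'r1', 'r3'}, 'B1': {'r2'}}
```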

  12. Integrated Design and Analysis for Heterogeneous Objects

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoping; Yang, Pinghai

    2008-02-01

    The recent advancement of solid freeform fabrication, design techniques and fundamental understanding of material properties in functionally graded material objects has made it possible to design and fabricate multifunctional heterogeneous objects. In this paper, we present an integrated design and analysis approach for heterogeneous object realization, which employs a unified design and analysis model based on B-splines and allows for direct interaction between the design and analysis model without a laborious meshing operation. In the design module, a new approach for intuitively modeling multi-material objects, termed heterogeneous lofting, is presented. In the analysis module, a novel graded B-spline finite element solution procedure is described, which gives orders of magnitude better convergence rate in comparison with current methods, as demonstrated in several case studies. Further advantages of this approach include simplified mesh construction, exact geometry/material composition representation and easy extraction of iso-material surface for manufacturing process planning.

  13. Structural Analysis in a Conceptual Design Framework

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.

    2012-01-01

    Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.

  14. Blade design and analysis using a modified Euler solver

    NASA Technical Reports Server (NTRS)

    Leonard, O.; Vandenbraembussche, R. A.

    1991-01-01

    An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades providing shock-free transonic flows. The method converges rapidly and indicates how sensitive the flow is to small modifications of the blade geometry, which the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for the off-design calculations. Examples illustrate the difficulty of designing blade shapes with optimal performance outside the design point as well.

  15. Experiment Design and Analysis Guide - Neutronics & Physics

    SciTech Connect

    Misti A Lillo

    2014-06-01

    The purpose of this guide is to provide a consistent, standardized approach to performing neutronics/physics analysis for experiments inserted into the Advanced Test Reactor (ATR). This document provides neutronics/physics analysis guidance to support experiment design and analysis needs for experiments irradiated in the ATR. This guide addresses neutronics/physics analysis in support of experiment design, experiment safety, and experiment program objectives and goals. The intent of this guide is to provide a standardized approach for performing typical neutronics/physics analyses. Deviation from this guide is allowed provided that neutronics/physics analysis details are properly documented in an analysis report.

  16. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the issue of the valuation of water by the agricultural sector in particular. To this end, a model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic and hydraulic data and simulates the agricultural gross margin across the area under changing public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the model results, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, among the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence they are: i) the coefficient of crop yield response to water, ii) the average daily weight gain of livestock, iii) the rate of livestock reproduction, iv) the maximum yield of crops, v) the supply of irrigation water and vi) precipitation. These 6 parameters have sensitivity indexes ranging between 0.22 and 1.28. These results indicate high uncertainty in these parameters, which can dramatically skew the results of the model, and the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
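    The sensitivity index described, the ratio of the relative output change to the relative input change, is simple to compute. A One-Factor-At-A-Time sketch with a toy gross-margin function (the real model is far richer):

```python
def oat_sensitivity(model, params, i, delta=0.01):
    """Relative output change divided by relative change of input i."""
    base = model(params)
    bumped = list(params)
    bumped[i] *= 1.0 + delta
    return ((model(bumped) - base) / base) / delta

# Toy stand-in: margin = yield_coefficient * water_supply - fixed_cost
margin = lambda p: p[0] * p[1] - p[2]
for i in range(3):
    print(i, round(oat_sensitivity(margin, [1.2, 100.0, 30.0], i), 3))
```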

  17. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    SciTech Connect

    HOBBS,MICHAEL L.; ROBINSON,DAVID G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
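    The variance computation described, first-order propagation of input standard deviations through numerical derivatives, can be sketched generically. The toy burn-velocity function below is ours and sidesteps the numerical-noise issue the authors faced with their finite-element response.

```python
import numpy as np

def propagate(f, x0, sigmas, h=1e-6):
    """sigma_y^2 = sum_i (df/dx_i)^2 sigma_i^2 with central differences.
    Also returns each input's variance share (the 'primary effect')."""
    x0 = np.asarray(x0, dtype=float)
    grad = np.empty(x0.size)
    for i in range(x0.size):
        e = np.zeros(x0.size)
        e[i] = h * max(1.0, abs(x0[i]))
        grad[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * e[i])
    contrib = (grad * np.asarray(sigmas)) ** 2
    return np.sqrt(contrib.sum()), contrib / contrib.sum()

# Toy stand-in: burn velocity from emissivity and conductivity.
v = lambda p: 0.8 * p[0] ** 2 + 0.05 * p[1]
sigma_v, shares = propagate(v, [0.9, 0.2], [0.05, 0.02])
print(sigma_v, shares)     # shares flag emissivity as dominant
```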

  18. Sensitivity analysis for texture models applied to rust steel classification

    NASA Astrophysics Data System (ADS)

    Trujillo, Maite; Sadki, Mustapha

    2004-05-01

    The exposure of metallic structures to rust degradation during their operational life is a known problem affecting storage tanks, steel bridges, ships, etc. In order to prevent this degradation and the potential related catastrophes, the surfaces have to be assessed, and the appropriate surface treatment and coating need to be applied according to the corrosion time of the steel. We previously investigated the potential of image processing techniques to tackle this problem, analyzing and evaluating several mathematical methods on a database of 500 images. In this paper, we extend our previous research and provide a further analysis of textural mathematical methods for automatic detection of steel rust time. Statistical descriptors are provided to evaluate the sensitivity of the results as well as the advantages and limitations of the different methods. Finally, a selector of the classifier algorithms is introduced, and the ratio between sensitivity of the results and time response (execution time) is analyzed to balance good classification results (high sensitivity) against acceptable time response for the automation of the system.

  19. Sensitivity Analysis of a Pharmacokinetic Model of Vaginal Anti-HIV Microbicide Drug Delivery.

    PubMed

    Jarrett, Angela M; Gao, Yajing; Hussaini, M Yousuff; Cogan, Nicholas G; Katz, David F

    2016-05-01

    Uncertainties in parameter values in microbicide pharmacokinetics (PK) models confound the models' use in understanding the determinants of drug delivery and in designing and interpreting dosing and sampling in PK studies. A global sensitivity analysis (Sobol' indices) was performed for a compartmental model of the pharmacokinetics of gel delivery of tenofovir to the vaginal mucosa. The model's parameter space was explored to quantify model output sensitivities to parameters characterizing properties for the gel-drug product (volume, drug transport, initial loading) and host environment (thicknesses of the mucosal epithelium and stroma and the role of ambient vaginal fluid in diluting gel). Greatest sensitivities overall were to the initial drug concentration in gel, gel-epithelium partition coefficient for drug, and rate constant for gel dilution by vaginal fluid. Sensitivities for 3 PK measures of drug concentration values were somewhat different than those for the kinetic PK measure. Sensitivities in the stromal compartment (where tenofovir acts against host cells) and a simulated biopsy also depended on thicknesses of epithelium and stroma. This methodology and results here contribute an approach to help interpret uncertainties in measures of vaginal microbicide gel properties and their host environment. In turn, this will inform rational gel design and optimization. PMID:27012224
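    Sobol' indices like those used here can be estimated with the standard pick-and-freeze Monte Carlo scheme. A self-contained sketch on a toy additive model follows; the study evaluated its compartmental PK model instead.

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """First-order Sobol' indices on [0,1]^d via the A/B-matrix
    (pick-and-freeze) estimator of Saltelli et al."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.concatenate([yA, yB]).var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # freeze column i from B
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

model = lambda X: X[:, 0] + 2.0 * X[:, 1]      # variance shares 0.2 / 0.8
print(sobol_first_order(model, d=3).round(2))  # ~ [0.2, 0.8, 0.0]
```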

  20. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5x5 arc minute resolution, and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for the other crops.

  1. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control the errors and improve the accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the end-effector accuracy. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. The allocation of the tolerances on each component is finally determined using a genetic algorithm. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  2. Shipping Cask Design Review Analysis.

    1998-01-04

    Version 01 SCANS (Shipping Cask ANalysis System) is a microcomputer-based system of computer programs and databases for evaluating safety analysis reports on spent fuel shipping casks. SCANS calculates the global response to impact loads, pressure loads, and thermal conditions, providing reviewers with an independent check on analyses submitted by licensees. Analysis options are based on regulatory cases described in the Code of Federal Regulations (1983) and Regulatory Guides published by the NRC in 1977 and 1978. The system is composed of a series of menu-driven input entry, cask analysis, and output display programs. An analysis is performed by preparing the necessary input data and then selecting the appropriate analysis: impact, thermal (heat transfer), thermally-induced stress, or pressure-induced stress. All data are entered through input screens with descriptive data requests, and, where possible, default values are provided. Output (i.e., impact force, moment and shear time histories; impact animation; thermal/stress geometry and thermal/stress element outlines; temperature distributions as isocontours or profiles; and temperature time histories) is displayed graphically and can also be printed.

  3. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominant dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  4. Design Through Analysis (DTA) roadmap vision.

    SciTech Connect

    Blacker, Teddy Dean; Adams, Charles R.; Hoffman, Edward L.; White, David Roger; Sjaardema, Gregory D.

    2004-10-01

    The Design through Analysis Realization Team (DART) will provide analysts with a complete toolset that reduces the time to create, generate, analyze, and manage the data generated in a computational analysis. The toolset will be both easy to learn and easy to use. The DART Roadmap Vision provides for progressive improvements that will reduce the Design through Analysis (DTA) cycle time by 90-percent over a three-year period while improving both the quality and accountability of the analyses.

  5. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2005-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.
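    The cost argument, one adjoint solve followed by a cheap product per design variable, is easiest to see on a toy linear problem. A hedged numerical sketch with J = g(u), A u = b(p), and A independent of p:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
db_dp = np.array([[1.0, 0.0, 2.0],     # db/dp for three design variables
                  [0.0, 1.0, 1.0]])

p0 = np.array([1.0, 2.0, 0.5])
u = np.linalg.solve(A, db_dp @ p0)     # state solve (J is linear, so u
                                       # is not needed for the gradient)
dg_du = np.array([1.0, 1.0])           # J = u[0] + u[1]

lam = np.linalg.solve(A.T, dg_du)      # single adjoint solve
dJ_dp = lam @ db_dp                    # one cheap product per variable
print(dJ_dp)
```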

  6. Using an Adjoint Approach to Eliminate Mesh Sensitivities in Computational Design

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Park, Michael A.

    2006-01-01

    An algorithm for efficiently incorporating the effects of mesh sensitivities in a computational design framework is introduced. The method is based on an adjoint approach and eliminates the need for explicit linearizations of the mesh movement scheme with respect to the geometric parameterization variables, an expense that has hindered practical large-scale design optimization using discrete adjoint methods. The effects of the mesh sensitivities can be accounted for through the solution of an adjoint problem equivalent in cost to a single mesh movement computation, followed by an explicit matrix-vector product scaling with the number of design variables and the resolution of the parameterized surface grid. The accuracy of the implementation is established and dramatic computational savings obtained using the new approach are demonstrated using several test cases. Sample design optimizations are also shown.

  7. Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2007-01-01

    We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution and count rate. The results show that, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.

  8. Comparative Analysis of State Fish Consumption Advisories Targeting Sensitive Populations

    PubMed Central

    Scherer, Alison C.; Tsuchiya, Ami; Younglove, Lisa R.; Burbacher, Thomas M.; Faustman, Elaine M.

    2008-01-01

    Objective Fish consumption advisories are issued to warn the public of possible toxicological threats from consuming certain fish species. Although developing fetuses and children are particularly susceptible to toxicants in fish, fish also contain valuable nutrients. Hence, formulating advice for sensitive populations poses challenges. We conducted a comparative analysis of advisory Web sites issued by states to assess health messages that sensitive populations might access. Data sources We evaluated state advisories accessed via the National Listing of Fish Advisories issued by the U.S. Environmental Protection Agency. Data extraction We created criteria to evaluate advisory attributes such as risk and benefit message clarity. Data synthesis All 48 state advisories issued at the time of this analysis targeted children, 90% (43) targeted pregnant women, and 58% (28) targeted women of childbearing age. Only six advisories addressed single contaminants, while the remainder based advice on 2–12 contaminants. Results revealed that advisories associated a dozen contaminants with specific adverse health effects. Beneficial health effects of any kind were specifically associated only with omega-3 fatty acids found in fish. Conclusions These findings highlight the complexity of assessing and communicating information about multiple contaminant exposure from fish consumption. Communication regarding potential health benefits conferred by specific fish nutrients was minimal and focused primarily on omega-3 fatty acids. This overview suggests some lessons learned and highlights a lack of both clarity and consistency in providing the breadth of information that sensitive populations such as pregnant women need to make public health decisions about fish consumption during pregnancy. PMID:19079708

  9. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g. touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
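    A minimal sketch of one idea described above: estimate requirement-success probability as a function of a single dispersed input by binning Monte Carlo runs; a large spread across bins flags the input as a driving factor. All data and thresholds below are synthetic, not Orion results.

```python
import numpy as np

def success_prob_by_factor(x, ok, bins=10):
    """P(requirement met) within quantile bins of one dispersed input."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    return np.array([ok[idx == b].mean() for b in range(bins)])

rng = np.random.default_rng(3)
mass = rng.normal(0.0, 1.0, 5000)                 # a dispersed input
ok = mass + rng.normal(0.0, 0.5, 5000) < 1.0      # toy pass/fail criterion
p = success_prob_by_factor(mass, ok)
print(p.round(2), "spread:", round(float(p.max() - p.min()), 2))
```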

  10. Trends in sensitivity analysis practice in the last decade.

    PubMed

    Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano

    2016-10-15

    The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis of Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Even if OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with preference for regression-based and variance-based techniques respectively. Even after adjusting for the growth of publications in the sole modelling field, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the sole articles published in chemical modelling, a field historically proficient in the use of SA methods. PMID:26934843

  11. Trends in sensitivity analysis practice in the last decade.

    PubMed

    Ferretti, Federico; Saltelli, Andrea; Tarantola, Stefano

    2016-10-15

    The majority of published sensitivity analyses (SAs) are either local or one-factor-at-a-time (OAT) analyses, relying on unjustified assumptions of model linearity and additivity. Global approaches to sensitivity analysis (GSA), which would obviate these shortcomings, are applied by a minority of researchers. By reviewing the academic literature on SA, we here present a bibliometric analysis of the trends of different SA practices in the last decade. The review has been conducted both on some top-ranking journals (Nature and Science) and through an extended analysis of Elsevier's Scopus database of scientific publications. After correcting for the global growth in publications, the number of papers performing a generic SA has notably increased over the last decade. Even if OAT is still the most widely used technique in SA, there is a clear increase in the use of GSA, with preference for regression-based and variance-based techniques respectively. Even after adjusting for the growth of publications in the sole modelling field, to which SA and GSA normally apply, the trend is confirmed. Data about regions of origin and discipline are also briefly discussed. The results above are confirmed when zooming in on the sole articles published in chemical modelling, a field historically proficient in the use of SA methods.

  12. Sensitivity analysis of infectious disease models: methods, advances and their application.

    PubMed

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V

    2013-09-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods-scatter plots, the Morris and Sobol' methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method-and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
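    Of the methods surveyed, LHS-PRCC is straightforward to sketch: rank-transform inputs and output, regress the other inputs out of both, and correlate the residuals. A small self-contained example with synthetic data:

```python
import numpy as np
from scipy.stats import pearsonr, rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y."""
    Xr = np.column_stack([rankdata(c) for c in X.T])
    yr = rankdata(y)
    out = []
    for i in range(Xr.shape[1]):
        Z = np.column_stack([np.ones(len(yr)), np.delete(Xr, i, axis=1)])
        rx = Xr[:, i] - Z @ np.linalg.lstsq(Z, Xr[:, i], rcond=None)[0]
        ry = yr - Z @ np.linalg.lstsq(Z, yr, rcond=None)[0]
        out.append(pearsonr(rx, ry)[0])
    return np.array(out)

rng = np.random.default_rng(7)
X = rng.random((500, 4))                       # e.g. LHS-sampled parameters
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.standard_normal(500)
print(prcc(X, y).round(2))                     # strong +, ~0, strong -, ~0
```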

  13. Design optimization of high pressure and high temperature piezoresistive pressure sensor for high sensitivity

    NASA Astrophysics Data System (ADS)

    Niu, Zhe; Zhao, Yulong; Tian, Bian

    2014-01-01

    This paper describes a design method for optimizing the sensitivity of a piezoresistive pressure sensor in high-pressure and high-temperature environments. To prove the method, a piezoresistive pressure sensor (HPTSS) is designed. With the purpose of increasing sensitivity and improving the measurement range, the piezoresistive sensor adopts a rectangular membrane and a thick film structure. The configuration of the piezoresistors is arranged according to the characteristics of the rectangular membrane. The structure and configuration of the sensor chip are analyzed theoretically and simulated by the finite element method. This design enables the sensor chip to operate in high-pressure conditions (such as 150 MPa) with high sensitivity and accuracy. A silicon-on-insulator wafer is selected to guarantee the thermal stability of the sensor chip. In order to optimize the fabrication and improve the production yield, an electric conduction step is devised. A series of experiments demonstrates a favorable linearity of 0.13% and a high accuracy of 0.48%, and the sensitivity of the HPTSS is about six times as high as that of a conventional square-membrane sensor chip. Compared with the square-membrane pressure sensor and current production, the strength of the HPTSS lies in sensitivity and measurement range. The performance of the HPTSS indicates that it could be an ideal candidate for high-pressure and high-temperature sensing in real applications.

  14. Design optimization of high pressure and high temperature piezoresistive pressure sensor for high sensitivity.

    PubMed

    Niu, Zhe; Zhao, Yulong; Tian, Bian

    2014-01-01

    This paper describes a design method for optimizing the sensitivity of a piezoresistive pressure sensor in high-pressure and high-temperature environments. To prove the method, a piezoresistive pressure sensor (HPTSS) is designed. With the purpose of increasing sensitivity and improving the measurement range, the piezoresistive sensor adopts a rectangular membrane and a thick film structure. The configuration of the piezoresistors is arranged according to the characteristics of the rectangular membrane. The structure and configuration of the sensor chip are analyzed theoretically and simulated by the finite element method. This design enables the sensor chip to operate in high-pressure conditions (such as 150 MPa) with high sensitivity and accuracy. A silicon-on-insulator wafer is selected to guarantee the thermal stability of the sensor chip. In order to optimize the fabrication and improve the production yield, an electric conduction step is devised. A series of experiments demonstrates a favorable linearity of 0.13% and a high accuracy of 0.48%, and the sensitivity of the HPTSS is about six times as high as that of a conventional square-membrane sensor chip. Compared with the square-membrane pressure sensor and current production, the strength of the HPTSS lies in sensitivity and measurement range. The performance of the HPTSS indicates that it could be an ideal candidate for high-pressure and high-temperature sensing in real applications.

  15. Design of a high-sensitivity classifier based on a genetic algorithm: application to computer-aided diagnosis

    NASA Astrophysics Data System (ADS)

    Sahiner, Berkman; Chan, Heang-Ping; Petrick, Nicholas; Helvie, Mark A.; Goodsitt, Mitchell M.

    1998-10-01

    A genetic algorithm (GA) based feature selection method was developed for the design of high-sensitivity classifiers, which were tailored to yield high sensitivity with high specificity. The fitness function of the GA was based on the receiver operating characteristic (ROC) partial area index, which is defined as the average specificity above a given sensitivity threshold. The designed GA evolved towards the selection of feature combinations which yielded high specificity in the high-sensitivity region of the ROC curve, regardless of the performance at low sensitivity. This is a desirable quality of a classifier used for breast lesion characterization, since the focus in breast lesion characterization is to diagnose correctly as many benign lesions as possible without missing malignancies. The high-sensitivity classifier, formulated as Fisher's linear discriminant using GA-selected feature variables, was employed to classify 255 biopsy-proven mammographic masses as malignant or benign. The mammograms were digitized at a pixel size of mm, and regions of interest (ROIs) containing the biopsied masses were extracted by an experienced radiologist. A recently developed image transformation technique, referred to as the rubber-band straightening transform, was applied to the ROIs. Texture features extracted from the spatial grey-level dependence and run-length statistics matrices of the transformed ROIs were used to distinguish malignant and benign masses. The classification accuracy of the high-sensitivity classifier was compared with that of linear discriminant analysis with stepwise feature selection. With proper GA training, the ROC partial area of the high-sensitivity classifier above a true-positive fraction of 0.95 was significantly larger than that of the classifier based on stepwise feature selection.
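    The fitness function described, the ROC partial area index (average specificity above a sensitivity threshold), can be computed as below with synthetic scores; in the paper's setting a GA would evaluate this metric for each candidate feature subset.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_area_index(y_true, scores, tpf_min=0.95):
    """Average specificity over the region TPF >= tpf_min."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    grid = np.linspace(tpf_min, 1.0, 101)
    fpr_at = np.interp(grid, tpr, fpr)         # tpr is nondecreasing
    return np.trapz(1.0 - fpr_at, grid) / (1.0 - tpf_min)

rng = np.random.default_rng(5)
y = np.r_[np.zeros(100), np.ones(100)]         # benign = 0, malignant = 1
s = np.r_[rng.normal(0.0, 1.0, 100), rng.normal(1.5, 1.0, 100)]
print(round(partial_area_index(y, s), 3))
```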

  16. Highly sensitive Raman system for dissolved gas analysis in water.

    PubMed

    Yang, Dewang; Guo, Jinjia; Liu, Qingsheng; Luo, Zhao; Yan, Jingwen; Zheng, Ronger

    2016-09-20

    The detection of dissolved gases in seawater plays an important role in ocean observation and exploration. As a potential technique for oceanic applications, Raman spectroscopy has already proved its advantages in the simultaneous detection of multiple species during previous deep-sea explorations. Due to the low sensitivity of conventional Raman measurements, there have been many reports of Raman applications on direct seawater detection in high-concentration areas, but few on undersea dissolved gas detection. In this work, we present a highly sensitive Raman spectroscopy (HSRS) system with a specially designed gas chamber for extracting small amounts of gas underwater. Systematic experiments were carried out for system evaluation, and the results show that the Raman signals obtained with the novel near-concentric cavity were about 21 times stronger than those of conventional side-scattering Raman measurements. Based on this system, we achieved limits of detection of 2.32 and 0.44 μmol/L for CO2 and CH4, respectively, in the lab. A trial experiment was also carried out with a gas-liquid separator coupled to the Raman system, and signals of O2 and CO2 were detected after 1 h of degasification. This system shows potential for gas detection in water, and further work will be devoted to improving in situ detection. PMID:27661606

  17. Design and Synthesis of an MOF Thermometer with High Sensitivity in the Physiological Temperature Range.

    PubMed

    Zhao, Dian; Rao, Xingtang; Yu, Jiancan; Cui, Yuanjing; Yang, Yu; Qian, Guodong

    2015-12-01

    An important result of research on mixed-lanthanide metal-organic frameworks (M'LnMOFs) is the realization of highly sensitive ratiometric luminescent thermometers. Here, we report the design and synthesis of the new M'LnMOF Tb0.80Eu0.20BPDA with high relative sensitivity in the physiological temperature regime (298-318 K). The emission intensity and luminescence lifetime were investigated and compared to those of existing materials. It was found that the temperature-dependent luminescence properties of Tb0.80Eu0.20BPDA are strongly associated with the distribution of the energy levels of the ligand. Such a property can be useful in the design of highly sensitive M'LnMOF thermometers.

  18. Post-optimality analysis in aerospace vehicle design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  19. Design of a portable fluoroquinolone analyzer based on terbium-sensitized luminescence

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A portable fluoroquinolone (FQ) analyzer is developed in this laboratory based on terbium-sensitized luminescence (TSL). The optical, hardware and software design aspects are described in detail. A 327-nm light emitting diode (LED) is used in pulsed mode as the excitation source; and a photomultip...

  1. Radiometer Design Analysis Based Upon Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Racette, Paul E.; Lang, Roger H.

    2004-01-01

    This paper introduces a method for predicting the performance of a radiometer design based on calculating the measurement uncertainty. The variety of radiometer designs and the demand for improved radiometric measurements justify the need for a more general and comprehensive method to assess system performance. Radiometric resolution, or sensitivity, is a figure of merit that has been commonly used to characterize the performance of a radiometer. However, when evaluating the performance of a calibration design for a radiometer, radiometric resolution has limited applicability. These limitations are overcome by considering instead the measurement uncertainty. A method for calculating measurement uncertainty for a generic radiometer design, including its calibration algorithm, is presented. The result is a generalized technique by which system calibration architectures and design parameters can be studied to optimize instrument performance for given requirements and constraints. Example applications demonstrate the utility of using measurement uncertainty as a figure of merit.

  2. TASK ANALYSIS AND TRAINING DESIGN.

    ERIC Educational Resources Information Center

    ANNETT, J.; DUNCAN, K.D.

    PERHAPS THE MAJOR PROBLEM IN TASK ANALYSIS FOR INDUSTRIAL TRAINING IS TO DETERMINE WHAT TO DESCRIBE AND ON WHAT LEVEL OF DETAIL. MANY DIFFERENT LEVELS OF DESCRIPTION MAY BE NEEDED TO ESTIMATE THE COST OF INADEQUATE PERFORMANCE TO A SYSTEM AND THE PROBABILITY OF ADEQUATE PERFORMANCE WITHOUT TRAINING--THE PROBLEM OF IDENTIFYING DIFFICULT COMPONENTS…

  3. GPU-based Integration with Application in Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is Monte Carlo based, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, and (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling of the Sobol sequence through permutation in a manner that improves the convergence rates. Scrambling is necessary not only for error analysis but for parallel implementations. Good scrambling is
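
    Scrambled Sobol' sequences of the kind discussed above are available in standard libraries. As a CPU-side illustration only (not the paper's GPU algorithm), the following Python sketch uses SciPy's scipy.stats.qmc module to estimate a toy integral with an Owen-scrambled Sobol' sequence; the integrand is illustrative.

      import numpy as np
      from scipy.stats import qmc

      # Owen-scrambled Sobol' points for quasi-Monte Carlo integration.
      sampler = qmc.Sobol(d=2, scramble=True, seed=42)
      x = sampler.random_base2(m=14)            # 2^14 points in [0, 1)^2

      # Toy integrand with a known integral over the unit square.
      f = np.sin(np.pi * x[:, 0]) * np.sin(np.pi * x[:, 1])
      print(f.mean())                           # exact value: (2/pi)^2 ~ 0.405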

  4. Sensitivity analysis of an urban stormwater microorganism model.

    PubMed

    McCarthy, D T; Deletic, A; Mitchell, V G; Diaper, C

    2010-01-01

    This paper presents the sensitivity analysis of a newly developed model which predicts microorganism concentrations in urban stormwater (MOPUS--MicroOrganism Prediction in Urban Stormwater). The analysis used Escherichia coli data collected from four urban catchments in Melbourne, Australia. The MICA program (Model Independent Markov Chain Monte Carlo Analysis), used to conduct this analysis, applies a carefully constructed Markov Chain Monte Carlo procedure, based on the Metropolis-Hastings algorithm, to explore the model's posterior parameter distribution. It was determined that the majority of parameters in the MOPUS model were well defined, with the data from the MCMC procedure indicating that the parameters were largely independent. However, a sporadic correlation found between two parameters indicates that some improvements may be possible in the MOPUS model. This paper identifies the parameters which are the most important during model calibration; it was shown, for example, that parameters associated with the deposition of microorganisms in the catchment were more influential than those related to microorganism survival processes. These findings will help users calibrate the MOPUS model, and will help the model developer to improve the model, with efforts currently being made to reduce the number of model parameters, whilst also reducing the slight interaction identified.
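
    The Metropolis-Hastings exploration of a posterior parameter distribution that MICA performs can be illustrated with a minimal random-walk sampler. The sketch below is a generic illustration (all names and the toy Gaussian target are hypothetical, not the MICA implementation) showing the accept/reject core of the algorithm; the off-diagonal terms of the chain's correlation matrix are where parameter interdependence, such as the sporadic correlation reported above, would show up.

      import numpy as np

      def metropolis_hastings(log_posterior, theta0, n_steps=10000, step=0.5, seed=0):
          # Random-walk Metropolis-Hastings over a model's parameter space.
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          logp = log_posterior(theta)
          chain = []
          for _ in range(n_steps):
              proposal = theta + step * rng.standard_normal(theta.shape)
              logp_new = log_posterior(proposal)
              # Accept with probability min(1, posterior ratio).
              if np.log(rng.uniform()) < logp_new - logp:
                  theta, logp = proposal, logp_new
              chain.append(theta.copy())
          return np.array(chain)

      # Toy target: standard bivariate Gaussian "posterior".
      chain = metropolis_hastings(lambda t: -0.5 * np.sum(t ** 2), np.zeros(2))
      print(chain.mean(axis=0), np.corrcoef(chain.T)[0, 1])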

  5. Design-for-analysis or the unintended role of analysis in the design of piping systems

    SciTech Connect

    Antaki, G.A.

    1991-12-31

    The paper discusses the evolution of piping design in the nuclear industry with its increasing reliance on dynamic analysis. While it is well recognized that the practice has evolved from "design-by-rule" to "design-by-analysis," examples are provided of cases where the choice of analysis technique has determined the hardware configuration, which could be called "design-for-analysis." The paper presents practical solutions to some of these cases and summarizes the important recent industry and regulatory developments which, if successful, will reverse the trend towards "design-for-analysis." 14 refs.

  6. Design-for-analysis or the unintended role of analysis in the design of piping systems

    SciTech Connect

    Antaki, G.A.

    1991-01-01

    The paper discusses the evolution of piping design in the nuclear industry with its increasing reliance on dynamic analysis. While it is well recognized that the practice has evolved from "design-by-rule" to "design-by-analysis," examples are provided of cases where the choice of analysis technique has determined the hardware configuration, which could be called "design-for-analysis." The paper presents practical solutions to some of these cases and summarizes the important recent industry and regulatory developments which, if successful, will reverse the trend towards "design-for-analysis." 14 refs.

  7. A preliminary sensitivity analysis of the Generalized Escape System Simulation (GESS) computer program

    SciTech Connect

    Holdeman, J.T.; Liepins, G.E.; Murphy, B.D.; Ohr, S.Y.; Sworski, T.J.; Warner, G.E.

    1989-06-01

    The Generalized Escape System Simulation (GESS) program is a computerized mathematical model for dynamically simulating the performance of existing or developmental aircraft ejection seat systems. The program generates trajectory predictions with 6 degrees of freedom for the aircraft, seat/occupant, occupant alone, and seat alone systems by calculating the forces and torques imposed on these elements by seat catapults, rails, rockets, stabilization and recovery systems included in most escape system configurations. User options are provided to simulate the performance of all conventional escape system designs under most environmental conditions and aircraft attitudes or trajectories. The concept of sensitivity analysis is discussed, as is the usefulness of GESS for retrospective studies, whereby one attempts to determine the aircraft configuration at ejection from the ejection outcome. A very limited and preliminary sensitivity analysis has been done with GESS to study the way the performance of the ejection system changes with certain user-specified options or parameters. A more complete analysis would study correlations, where simultaneous correlated variations of several parameters might affect performance to an extent not predictable from the individual sensitivities. Uncertainty analysis is discussed. Even with this limited analysis, a difficulty with some simulations involving a rolling aircraft has been discovered; the code produces inconsistent trajectories. One explanation is that the integration routine is not able to deal with the stiff differential equations involved. Another possible explanation is that the coding of the coordinate transformations is faulty when large angles are involved. 7 refs., 7 tabs.

  8. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol') and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualization tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide a fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/. Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.
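
    As an illustration of the screening approach behind the method of Morris mentioned above, the sketch below computes elementary effects on the unit hypercube in plain NumPy. It is a simplified one-at-a-time version under stated assumptions (uniform factors on [0, 1], four grid levels), not the SAFE(R) code; the function names and toy model are illustrative.

      import numpy as np

      def morris_elementary_effects(model, n_params, n_traj=20, n_levels=4, seed=0):
          # Mean absolute value (mu*) and std of elementary effects per factor.
          rng = np.random.default_rng(seed)
          delta = n_levels / (2.0 * (n_levels - 1))
          starts = np.arange(n_levels // 2) / (n_levels - 1)  # valid start levels
          effects = [[] for _ in range(n_params)]
          for _ in range(n_traj):
              x = rng.choice(starts, size=n_params)
              for i in rng.permutation(n_params):   # one-at-a-time perturbations
                  x_new = x.copy()
                  x_new[i] += delta
                  effects[i].append((model(x_new) - model(x)) / delta)
                  x = x_new
          return (np.array([np.mean(np.abs(e)) for e in effects]),
                  np.array([np.std(e) for e in effects]))

      # Toy model: factor 0 dominates, factor 2 is inactive.
      mu_star, sigma = morris_elementary_effects(
          lambda x: 4 * x[0] + x[1] ** 2 + 0 * x[2], n_params=3)
      print(mu_star, sigma)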

  9. Design of a gaze-sensitive virtual social interactive system for children with autism.

    PubMed

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2011-08-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions, and thus monitoring eye-gaze can be valuable for designing intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child's dynamic gaze patterns during VR-based interaction. Results are presented from a usability study with six adolescents with ASD that examined the acceptability and usefulness of VIGART. The results, in terms of improvement in behavioral viewing and changes in relevant eye-physiological indexes of participants while interacting with VIGART, indicate the potential of this novel technology. PMID:21609889

  10. Integrated reflector antenna design and analysis

    NASA Technical Reports Server (NTRS)

    Zimmerman, M. L.; Lee, S. W.; Ni, S.; Christensen, M.; Wang, Y. M.

    1993-01-01

    Reflector antenna design is a mature field and most of its aspects have been studied. However, most previous work is narrow in scope, analyzing only a particular problem under certain conditions. Methods of analysis of this type are not useful for real-life problems, since they cannot handle the many and varied perturbations of a basic antenna design. The idea of integrated design and analysis is therefore proposed: by broadening the scope of the analysis, it becomes possible to deal with the intricacies attendant on modern reflector antenna design problems. A number of electromagnetic problems related to reflector antenna design are investigated, some of which show how tools for reflector antenna design are created. In particular, a method for estimating spillover loss for open-ended waveguide feeds is examined. The problem of calculating and optimizing beam efficiency (an important figure of merit in radiometry applications) is also solved. Other chapters deal with applications of this general analysis. The wide-angle scan abilities of reflector antennas are examined, and a design is proposed for the ATDRSS triband reflector antenna. The development of a general phased-array pattern computation program is discussed, and it is shown how the concept of integrated design can be extended to other types of antennas. The conclusions are contained in the final chapter.

  11. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns in areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in the model input factors. In this paper, we explore the possibility of using the global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainties in model inputs on the zero order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions, Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more nonlinearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Also, parameter interactions increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity, with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed SM) and the SMEX02 fields to be an energy-rich environment (due to lower SM and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change between water-rich and energy-rich environments.

  12. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.

  13. Sensitivity analysis for computer model projections of hurricane losses.

    PubMed

    Iman, Ronald L; Johnson, Mark E; Watson, Charles C

    2005-10-01

    Projecting losses associated with hurricanes is a complex and difficult undertaking that is fraught with uncertainties. Hurricane Charley, which struck southwest Florida on August 13, 2004, illustrates the uncertainty of forecasting damages from these storms. Due to shifts in the track and the rapid intensification of the storm, real-time estimates grew from $2 billion to $3 billion in losses late on the 12th to a peak of $50 billion for a brief time as the storm appeared to be headed for the Tampa Bay area. The storm struck the resort areas of Charlotte Harbor and moved across the densely populated central part of the state, with early poststorm estimates in the $28 to $31 billion range, and final estimates converging at $15 billion as the actual intensity at landfall became apparent. The Florida Commission on Hurricane Loss Projection Methodology (FCHLPM) has a great appreciation for the role of computer models in projecting losses from hurricanes. The FCHLPM contracts with a professional team to perform onsite (confidential) audits of computer models developed by several different companies in the United States that seek to have their models approved for use in insurance rate filings in Florida. The team's members represent the fields of actuarial science, computer science, meteorology, statistics, and wind and structural engineering. An important part of the auditing process requires uncertainty and sensitivity analyses to be performed with the applicant's proprietary model. To influence future such analyses, an uncertainty and sensitivity analysis has been completed for loss projections arising from use of a sophisticated computer model based on the Holland wind field. Sensitivity analyses presented in this article utilize standardized regression coefficients to quantify the contribution of the computer input variables to the magnitude of the wind speed.
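
    Standardized regression coefficients of the kind used in the article can be obtained by regressing the standardized output on standardized inputs; their magnitudes rank each input's contribution to the output variance. The sketch below is a toy illustration (made-up data, not the hurricane model).

      import numpy as np

      def standardized_regression_coefficients(X, y):
          # OLS coefficients after scaling inputs and output to zero mean, unit variance.
          Xs = (X - X.mean(axis=0)) / X.std(axis=0)
          ys = (y - y.mean()) / y.std()
          coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
          return coef

      # Toy surrogate: input 0 dominates the response.
      rng = np.random.default_rng(1)
      X = rng.normal(size=(1000, 2))
      y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=1000)
      print(standardized_regression_coefficients(X, y))   # ~ [0.98, 0.16]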

  14. Sensitivity Analysis of a Wireless Power Transfer (WPT) System for Electric Vehicle Application

    SciTech Connect

    Chinthavali, Madhu Sudhan

    2016-01-01

    This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key quantities for sensitivity analysis of a series-parallel (SP) WPT system are first derived based on an analytical modeling approach, including the equivalent input impedance, active/reactive power, and DC voltage gain. Based on this derivation, the impact of the primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing only the frequency or the voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.

  15. Sensitivity analysis for high accuracy proximity effect correction

    NASA Astrophysics Data System (ADS)

    Thrun, Xaver; Browning, Clyde; Choi, Kang-Hoon; Figueiro, Thiago; Hohle, Christoph; Saib, Mohamed; Schiavone, Patrick; Bartha, Johann W.

    2015-10-01

    A sensitivity analysis (SA) algorithm was developed and tested to understand the influence of different test pattern sets on the calibration of a point spread function (PSF) model with complementary approaches. Variance-based SA is the method of choice. It allows attributing the variance of the output of a model to the sum of the variances of each input of the model and their correlated factors [1]. The objective of this development is to increase the accuracy of the resolved PSF model in the complementary technique through the optimization of test pattern sets. Inscale® from Aselta Nanographics is used to prepare the various pattern sets and to check the consequences of the development. Fraunhofer IPMS-CNT exposed the prepared data and observed the results to visualize the link between the sensitivities of the PSF parameters and the test patterns. First, the SA can assess the influence of test pattern sets on the determination of PSF parameters, i.e., which PSF parameter is affected by the use of a certain pattern. Second, throughout the evaluation, the SA enhances the precision of the PSF through the optimization of test patterns. Finally, the developed algorithm is able to appraise which ranges of proximity effect correction are crucial for which portions of a real application pattern in electron beam exposure.

  16. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
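
    The NRMSE metric referred to above can be computed as in the sketch below; the normalization by the observed range is an assumption (normalization by plant capacity or by the mean is also common), and the numbers are illustrative.

      import numpy as np

      def nrmse(forecast, observed):
          # Root mean squared error normalized by the observed range.
          rmse = np.sqrt(np.mean((forecast - observed) ** 2))
          return rmse / (observed.max() - observed.min())

      # Toy example: a biased irradiance forecast (W/m^2).
      obs = np.array([0.0, 150.0, 420.0, 610.0, 540.0, 300.0, 80.0])
      fcs = 0.9 * obs + 20.0
      print(f"NRMSE = {nrmse(fcs, obs):.3%}")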

  17. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that errors in the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.

  18. Flat heat pipe design, construction, and analysis

    SciTech Connect

    Voegler, G.; Boughey, B.; Cerza, M.; Lindler, K.W.

    1999-08-02

    This paper details the design, construction and partial analysis of a low temperature flat heat pipe in order to determine the feasibility of implementing flat heat pipes into thermophotovoltaic (TPV) energy conversion systems.

  19. Neutron activation analysis; A sensitive test for trace elements

    SciTech Connect

    Hossain, T.Z. (Ward Lab.)

    1992-01-01

    This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10^12 neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10^-8) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.

  20. Sensitivity analysis and optimization of thin-film thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Harsha Choday, Sri; Roy, Kaushik

    2013-06-01

    The cooling performance of a thermoelectric (TE) material depends on the figure of merit (ZT = S²σT/κ), where S is the Seebeck coefficient, and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter for the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than the one that yields the highest ZT. We also establish the level of contact parasitics below which their impact on TE cooling is negligible.
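
    The relative weighting noted above follows directly from the figure of merit: since ZT = S²σT/κ, the logarithmic sensitivities ∂ln(ZT)/∂ln(p) are +2 for S, +1 for σ, and -1 for κ, so ZT is twice as sensitive to the Seebeck coefficient as to either conductivity. The sketch below verifies this numerically with illustrative Bi2Te3-like values (not taken from the paper).

      import numpy as np

      # Illustrative bulk values: S in V/K, sigma in S/m, kappa in W/(m K), T in K.
      params = {"S": 200e-6, "sigma": 1.0e5, "kappa": 1.5}
      T = 300.0

      def zt(p):
          return p["S"] ** 2 * p["sigma"] * T / p["kappa"]

      print(f"ZT = {zt(params):.2f}")           # 0.80 for these values

      # Relative (logarithmic) sensitivities d ln(ZT) / d ln(p): expect +2, +1, -1.
      for name in params:
          perturbed = dict(params)
          perturbed[name] *= 1.0 + 1e-6
          print(name, (zt(perturbed) / zt(params) - 1.0) / 1e-6)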

  1. Sensitivity analysis for causal inference using inverse probability weighting.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C

    2011-09-01

    Evaluation of impact of potential uncontrolled confounding is an important component for causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score that can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
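
    The inverse probability weighting underlying this framework can be sketched as follows: each unit is weighted by the reciprocal of the estimated probability of receiving the treatment it actually received. The code below is a plain IPW effect estimate on toy data (not the authors' sensitivity procedure, in which the estimated propensity scores would additionally be perturbed by multiplicative errors governed by the two sensitivity parameters); all names are illustrative.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ipw_ate(X, treated, y):
          # Propensity scores from a logistic model of treatment assignment.
          ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]
          w = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))
          # Weighted outcome means estimate E[Y(1)] and E[Y(0)].
          return (np.average(y[treated == 1], weights=w[treated == 1])
                  - np.average(y[treated == 0], weights=w[treated == 0]))

      # Toy data with confounding: X raises both treatment odds and the outcome.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(5000, 1))
      treated = (rng.uniform(size=5000) < 1.0 / (1.0 + np.exp(-X[:, 0]))).astype(int)
      y = 2.0 * treated + 1.5 * X[:, 0] + rng.normal(size=5000)
      print(ipw_ate(X, treated, y))   # close to the true effect of 2.0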

  2. Apparatus and Method for Ultra-Sensitive trace Analysis

    SciTech Connect

    Lu, Zhengtian; Bailey, Kevin G.; Chen, Chun Yen; Li, Yimin; O'Connor, Thomas P.; Young, Linda

    2000-01-03

    An apparatus and method for conducting ultra-sensitive trace element and isotope analysis. The apparatus injects a sample through a fine nozzle to form an atomic beam. A DC discharge is used to elevate select atoms to a metastable energy level. These atoms are then acted on by a laser oriented orthogonally to the beam path to reduce the transverse velocity and decrease the divergence angle of the beam. The beam then enters a Zeeman slower, where a counter-propagating laser beam slows the atoms down. Selected atoms are then captured in a magneto-optical trap, where they fluoresce. A portion of the scattered photons is imaged onto a photo-detector, and the results are analyzed to detect the presence of single atoms of the specific trace elements.

  3. NDARC - NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2015-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. Specific rotorcraft configurations considered are single-main-rotor and tail

  4. NDARC NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne R.

    2009-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. Specific rotorcraft configurations considered are single main-rotor and

  5. Design and Comparative Evaluation of In-vitro Drug Release, Pharmacokinetics and Gamma Scintigraphic Analysis of Controlled Release Tablets Using Novel pH Sensitive Starch and Modified Starch- acrylate Graft Copolymer Matrices

    PubMed Central

    Kumar, Pankaj; Ganure, Ashok Laxmanrao; Subudhi, Bharat Bhushan; Shukla, Shubhanjali

    2015-01-01

    The present investigation deals with the development of controlled release tablets of salbutamol sulphate using graft copolymers (St-g-PMMA and Ast-g-PMMA) of starch and acetylated starch. Drug-excipient compatibility was analyzed spectroscopically via FT-IR, which confirmed no interaction between the drug and the other excipients. Formulations were evaluated for physical characteristics such as hardness, friability, weight variation, drug release and drug content, and satisfied all the pharmacopoeial requirements for the tablet dosage form. The release rate of the model drug from the formulated matrix tablets was studied spectrophotometrically at two different pH values, namely 1.2 and 6.8. Drug release from the tablets of the graft copolymer matrices is profoundly pH-dependent and showed a reduced release rate under acidic conditions compared to alkaline conditions. Study of the release mechanism with Korsmeyer's model, with n values between 0.61 and 0.67, showed that release was governed by both diffusion and erosion. In comparison to the starch and acetylated starch matrix formulations, the pharmacokinetic parameters of the graft copolymer matrix formulations showed a significant decrease in Cmax with an increase in tmax, indicating that the effect of the dosage form would last for a longer duration. The gastrointestinal transit behavior of the formulation was determined by gamma scintigraphy, using 99mTc as a marker in healthy rabbits. The amount of radioactive tracer released from the labelled tablets was minimal while the tablets were in the stomach, and increased as the tablets reached the intestine. Thus, in-vitro and in-vivo drug release studies of the starch-acrylate graft copolymers proved their controlled release behavior, with preferential delivery into an alkaline pH environment. PMID:26330856
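
    Korsmeyer's model, Mt/M∞ = k·t^n, is conventionally fitted by linear regression in log-log space over the early portion of the release profile (commonly Mt/M∞ ≤ 0.6); for cylindrical tablets, an exponent n between 0.45 and 0.89 is read as anomalous transport, i.e., combined diffusion and erosion. The sketch below uses made-up dissolution data, not the paper's measurements.

      import numpy as np

      def fit_korsmeyer_peppas(t, frac_released):
          # Fit log(Mt/Minf) = log(k) + n*log(t) on the early-release points.
          mask = (frac_released > 0) & (frac_released <= 0.6)
          n, log_k = np.polyfit(np.log(t[mask]), np.log(frac_released[mask]), 1)
          return np.exp(log_k), n

      # Toy dissolution profile: time in hours, fraction of drug released.
      t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
      frac = np.array([0.12, 0.19, 0.29, 0.37, 0.44, 0.55, 0.65])
      k, n = fit_korsmeyer_peppas(t, frac)
      print(f"k = {k:.3f}, n = {n:.2f}")   # n ~ 0.6: diffusion plus erosion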

  6. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method, where the analysis is repeated for perturbed designs. The second is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
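
    The first, overall finite difference technique amounts to rerunning the full analysis at perturbed designs and differencing the responses. The sketch below illustrates the idea with central differences on a toy one-spring "analysis" (the function names and numbers are illustrative, not from the thesis).

      import numpy as np

      def overall_finite_difference(response, design, rel_step=1e-3):
          # Repeat the analysis at perturbed designs; central differences.
          design = np.asarray(design, dtype=float)
          grads = np.zeros_like(design)
          for i in range(design.size):
              h = rel_step * max(abs(design[i]), 1.0)
              dp, dm = design.copy(), design.copy()
              dp[i] += h
              dm[i] -= h
              grads[i] = (response(dp) - response(dm)) / (2.0 * h)
          return grads

      # Toy 'analysis': static displacement u = F/k of a spring, F = 10.
      # Exact design sensitivity du/dk = -F/k**2 = -2.5 at k = 2.
      print(overall_finite_difference(lambda d: 10.0 / d[0], [2.0]))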

  7. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water provisioning module is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each one of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. From their analysis, the study shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.

  8. A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity

    NASA Astrophysics Data System (ADS)

    Tierney, G.; Posselt, D. J.; Booth, J. F.

    2015-12-01

    The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds) as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in control parameters space is performed via an ensemble of WRF runs coupled with

  9. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprised of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
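
    The residual-resampling scheme described above, drawing from each model's empirical residual distribution and pushing samples through the model chain, can be sketched as follows. The two-step chain and the residual samples below are made up for illustration; they are not the report's models or data.

      import numpy as np

      def propagate_chain(x0, models, residuals, n_draws=10000, seed=0):
          # After each modeling step, add a residual drawn from that model's
          # empirical residual distribution.
          rng = np.random.default_rng(seed)
          out = np.full(n_draws, x0, dtype=float)
          for model, resid in zip(models, residuals):
              out = model(out) + rng.choice(resid, size=n_draws)
          return out

      # Toy chain: global irradiance -> POA irradiance -> DC power.
      rng = np.random.default_rng(1)
      poa_resid = rng.normal(0.0, 25.0, size=500)    # W/m^2
      pwr_resid = rng.normal(0.0, 3.0, size=500)     # W
      chain = [lambda ghi: 1.15 * ghi,               # transposition model
               lambda poa: 0.18 * poa]               # module performance model
      dc = propagate_chain(800.0, chain, [poa_resid, pwr_resid])
      print(dc.mean(), np.percentile(dc, [2.5, 97.5]))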

  10. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  11. Design and Analysis Tools for Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Folk, Thomas C.

    2009-01-01

    Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.

  12. DESIGN PACKAGE 1D SYSTEM SAFETY ANALYSIS

    SciTech Connect

    L.R. Eisler

    1995-02-02

    The purpose of this analysis is to systematically identify and evaluate hazards related to the Yucca Mountain Project Exploratory Studies Facility (ESF) Design Package 1D, Surface Facilities (for a list of design items included in the package 1D system safety analysis, see section 3). This process is an integral part of the systems engineering process, whereby safety is considered during planning, design, testing, and construction. A largely qualitative approach was used since a radiological system safety analysis is not required. The risk assessment in this analysis characterizes the accident scenarios associated with the Design Package 1D structures/systems/components in terms of relative risk and includes recommendations for mitigating all identified risks. The priority for recommending and implementing mitigation control features is: (1) incorporate measures to reduce risks and hazards into the structure/system/component (S/S/C) design, (2) add safety devices and capabilities to the designs that reduce risk, (3) provide devices that detect and warn personnel of hazardous conditions, and (4) develop procedures and conduct training to increase worker awareness of potential hazards, of methods to reduce exposure to hazards, and of the actions required to avoid accidents or correct hazardous conditions. The scope of this analysis is limited to the Design Package 1D structures/systems/components (S/S/Cs) during normal operations, excluding hazards occurring during maintenance and "off normal" operations.

  13. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-01

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB
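
    The variance reduction that coupled estimators deliver over independent sampling can be seen even in the simplest setting. The sketch below contrasts an independent-sample finite-difference estimator with a Common Random Number coupling, the baseline the paper compares against (not its goal-oriented coupling), on a toy stochastic observable; all names and values are illustrative.

      import numpy as np

      def fd_sensitivity(theta, eps, n, coupled, seed=0):
          # Finite-difference estimate of d/dtheta E[f(theta, Z)], Z ~ N(0, 1).
          rng = np.random.default_rng(seed)
          z1 = rng.standard_normal(n)
          z2 = z1 if coupled else rng.standard_normal(n)   # couple the noise
          f = lambda th, z: (th + z) ** 2                  # toy observable
          return (f(theta + eps, z1) - f(theta, z2)) / eps

      theta, eps, n = 1.0, 0.01, 100000
      for coupled in (False, True):
          est = fd_sensitivity(theta, eps, n, coupled)
          print(f"coupled={coupled}: mean={est.mean():.3f}, var={est.var():.2e}")
      # Exact sensitivity: d/dtheta E[(theta + Z)^2] = 2*theta = 2. The coupled
      # (CRN) samples have a per-sample variance smaller by orders of magnitude.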

  14. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization of a functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.

  15. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by constructing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled continuous-time Markov chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization of a functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.

  16. New design of a passive type RADFET reader for enhanced sensitivity

    NASA Astrophysics Data System (ADS)

    Lee, Dae-Hee

    2016-07-01

    We present a new design of a passive-type RADFET reader with enhanced radiation sensitivity. Using an electrostatic plate, we applied a static electric field to the gate, which imposes a positive bias on the p-type MOSFET. As a result, the radiation sensitivity increased by a factor of 1.8 when we measured the threshold voltage shift of a RADFET exposed to 30 krad of irradiation. We further discuss the characteristic changes of a RADFET as a function of the positive bias on the gate voltage.

  17. Strain response of stretchable micro-electrodes: Controlling sensitivity with serpentine designs and encapsulation

    SciTech Connect

    Gutruf, Philipp; Walia, Sumeet; Nur Ali, Md; Sriram, Sharath; Bhaskaran, Madhu

    2014-01-13

    The functionality of flexible electronics relies on stable performance of thin film micro-electrodes. This letter investigates the behavior of gold thin films on polyimide, a prevalent combination in flexible devices. The dynamic behavior of gold micro-electrodes has been studied by subjecting them to stress while monitoring their resistance in situ. The shape of the electrodes was systematically varied to examine resistive strain sensitivity, while an additional encapsulation was applied to characterize multilayer behavior. The realized designs show remarkable tolerance to repetitive strain, demonstrating that curvature and encapsulation are excellent approaches for minimizing resistive strain sensitivity to enable durable flexible electronics.

  18. Design and characterization of LC strain sensors with novel inductor for sensitivity enhancement

    NASA Astrophysics Data System (ADS)

    Wu, Sung-Yueh; Hsu, Wensyang

    2013-10-01

    This paper presents an LC strain sensor with a novel encapsulated serpentine helical inductor. The helical coil of the inductor is formed from serpentine wire to reduce its radial rigidity, and the inductor is encapsulated in a material with a high Poisson's ratio. When an axial deformation is applied to this encapsulated inductor, the cross-sectional area of the helical coil changes more markedly owing to the lower radial rigidity and the encapsulation. The variation of inductance, and hence of the LC resonant frequency, is therefore enhanced, providing better sensitivity for the LC strain sensor. Using PDMS as the encapsulation material, it is shown that the sensitivity of the conventional helical inductor is about 73.0 kHz/0.01ε both with and without encapsulation, which means that encapsulating the conventional helical inductor does not improve the sensitivity because of the high radial rigidity of the conventional helical coil. It is also found that the encapsulated serpentine helical inductor has better sensitivity (121.9 kHz/0.01ε) than the serpentine helical inductor without encapsulation (62.7 kHz/0.01ε), which verifies the sensitivity-enhancing capability of the proposed encapsulated serpentine helical inductor design. The error between simulation and measurement results for the sensitivity of the LC strain sensor with the encapsulated serpentine inductor is about 5.57%, which verifies the accuracy of the simulation model. The wireless sensing capability is also successfully demonstrated.
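
    The sensing mechanism rests on the standard LC resonance relation f = 1/(2π√(LC)): a strain-induced change in coil inductance shifts the readout frequency. The sketch below illustrates the magnitude of that shift; the component values and the assumed inductance change per strain are illustrative, not taken from the paper.

        import numpy as np

        def resonant_freq(L, C):
            """Resonant frequency of an LC tank: f = 1 / (2*pi*sqrt(L*C))."""
            return 1.0 / (2.0 * np.pi * np.sqrt(L * C))

        L0, C0 = 1e-6, 100e-12        # 1 uH coil, 100 pF capacitor (illustrative)
        f0 = resonant_freq(L0, C0)

        dL = -0.005 * L0              # assume 1% strain reduces inductance by 0.5%
        f1 = resonant_freq(L0 + dL, C0)
        print(f"f0 = {f0 / 1e6:.3f} MHz, shift = {(f1 - f0) / 1e3:+.1f} kHz per 1% strain")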

  19. Computational design of a pH-sensitive IgG binding protein

    PubMed Central

    Strauch, Eva-Maria; Fleishman, Sarel J.; Baker, David

    2014-01-01

    Computational design provides the opportunity to program protein–protein interactions for desired applications. We used de novo protein interface design to generate a pH-dependent Fc domain binding protein that buries immunoglobulin G (IgG) His-433. Using next-generation sequencing of naïve and selected pools of a library of design variants, we generated a molecular footprint of the designed binding surface, confirming the binding mode and guiding further optimization of the balance between affinity and pH sensitivity. In biolayer interferometry experiments, the optimized design binds IgG with a Kd of ∼4 nM at pH 8.2, and approximately 500-fold more weakly at pH 5.5. The protein is extremely stable, heat-resistant and highly expressed in bacteria, and allows pH-based control of binding for IgG affinity purification and diagnostic devices. PMID:24381156
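
    The reported ~500-fold Kd shift between pH 8.2 and pH 5.5 translates directly into pH-controlled capture and release under a simple 1:1 equilibrium binding model, as the sketch below shows. The binder concentration is assumed for illustration; only the Kd values come from the abstract.

        def fraction_bound(conc_nm, kd_nm):
            """Equilibrium fraction of target bound in a simple 1:1 binding model."""
            return conc_nm / (conc_nm + kd_nm)

        # Kd values from the abstract; 100 nM binder concentration is assumed.
        for ph, kd in ((8.2, 4.0), (5.5, 4.0 * 500.0)):
            print(f"pH {ph}: {fraction_bound(100.0, kd):.1%} of IgG bound")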

  20. Sensitivity analysis for Probabilistic Tsunami Hazard Assessment (PTHA)

    NASA Astrophysics Data System (ADS)

    Spada, M.; Basili, R.; Selva, J.; Lorito, S.; Sorensen, M. B.; Zonker, J.; Babeyko, A. Y.; Romano, F.; Piatanesi, A.; Tiberti, M.

    2012-12-01

    In modern societies, probabilistic hazard assessment of natural disasters is commonly used by decision makers for designing regulatory standards and, more generally, for prioritizing risk mitigation efforts. Systematic formalization of Probabilistic Tsunami Hazard Assessment (PTHA) has started only in recent years, mainly following the giant Sumatra tsunami disaster of 2004. Typically, PTHA for earthquake sources exploits the long-standing practices developed in probabilistic seismic hazard assessment (PSHA), even though important differences are evident. In PTHA, for example, it is known that far-field sources are more important, and physical models are needed for the highly non-isotropic propagation of tsunami waves. However, considering the high impact that PTHA may have on societies, an important effort should be made to quantify the effect of specific assumptions. Indeed, specific standard hypotheses made in PSHA may prove inappropriate for PTHA, since tsunami waves are sensitive to different aspects of the sources (e.g., fault geometry, scaling laws, slip distribution) and propagate differently. In addition, the necessity of running an explicit calculation of wave propagation for every possible event (tsunami scenario) forces analysts to find strategies for diminishing the computational burden. In this work, we test the sensitivity of hazard results with respect to several assumptions that are peculiar to PTHA and others that are commonly accepted in PSHA. Our case study is located in the central Mediterranean Sea and considers the Western Hellenic Arc as the earthquake source, with Crete and Eastern Sicily as near-field and far-field target coasts, respectively. Our suite of sensitivity tests includes: a) comparison of random seismicity distribution within area sources as opposed to systematically distributed ruptures on fault sources; b) effects of statistical and physical parameters (a- and b-value, Mc, Mmax, scaling laws

  1. Analysis of case-cohort designs.

    PubMed

    Barlow, W E; Ichikawa, L; Rosner, D; Izumi, S

    1999-12-01

    The case-cohort design is most useful in analyzing time to failure in a large cohort in which failure is rare. Covariate information is collected from all failures and a representative sample of censored observations. Sampling is done without respect to time or disease status, and, therefore, the design is more flexible than a nested case-control design. Despite the efficiency of the methods, case-cohort designs are not often used because of perceived analytic complexity. In this article, we illustrate computation of a simple variance estimator and discuss model fitting techniques in SAS. Three different weighting methods are considered. Model fitting is demonstrated in an occupational exposure study of nickel refinery workers. The design is compared to a nested case-control design with respect to analysis and efficiency in a small simulation. In this example, case-cohort sampling from the full cohort was more efficient than using a comparable nested case-control design. PMID:10580779
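
    The article demonstrates the model fitting in SAS; a rough equivalent of one of the weighting approaches (a Barlow-style inverse-sampling-fraction scheme) can be sketched in Python with the lifelines package. The file name, column names, and sampling fraction below are hypothetical, and this is only one of several possible weightings.

        import pandas as pd
        from lifelines import CoxPHFitter

        # Hypothetical case-cohort data: 'subcohort' flags the random sample;
        # 'event' flags failures, which are collected from the full cohort.
        df = pd.read_csv("case_cohort.csv")   # columns: time, event, exposure, subcohort
        f = 0.15                              # subcohort sampling fraction (assumed known)

        # Keep only sampled records; cases count fully, subcohort non-cases are
        # up-weighted by the inverse of the sampling fraction.
        df = df[(df["event"] == 1) | (df["subcohort"] == 1)].copy()
        df["w"] = 1.0
        df.loc[(df["event"] == 0) & (df["subcohort"] == 1), "w"] = 1.0 / f

        cph = CoxPHFitter()
        # robust=True requests a sandwich-type variance, in the spirit of the
        # simple variance estimator discussed in the article.
        cph.fit(df[["time", "event", "exposure", "w"]],
                duration_col="time", event_col="event",
                weights_col="w", robust=True)
        cph.print_summary()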

  2. Neural networks in structural analysis and design - An overview

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Berke, L.

    1992-01-01

    The present paper provides an overview of the state-of-the-art in the application of neural networks in problems of structural analysis and design, including a survey of published applications in structural engineering. Such applications have included, among others, the use of neural networks in modeling nonlinear analysis of structures, as a rapid reanalysis capability in optimal design, and in developing problem parameter sensitivity of optimal solutions for use in multilevel decomposition based design. While most of the applications reported in the literature have been restricted to the use of the multilayer perceptron architecture and minor variations thereof, other network architectures have also been successfully explored, including the ART network, the counterpropagation network and the Hopfield-Tank model.
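
    As a minimal illustration of the "rapid reanalysis" use mentioned above, the sketch below trains a small multilayer perceptron as a surrogate for a structural response; the analytic cantilever tip-deflection formula stands in for an expensive analysis, and all parameter values are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Train a neural surrogate of an "expensive" structural analysis, stood
        # in for here by the analytic tip deflection of a cantilever beam.
        rng = np.random.default_rng(1)
        X = rng.uniform([0.01, 1.0], [0.10, 5.0], size=(500, 2))  # thickness t, length L
        P, E, b = 1000.0, 2.1e11, 0.05                            # load, modulus, width
        I = b * X[:, 0] ** 3 / 12.0                               # second moment of area
        y = P * X[:, 1] ** 3 / (3.0 * E * I)                      # tip deflection

        surrogate = make_pipeline(StandardScaler(),
                                  MLPRegressor(hidden_layer_sizes=(32, 32),
                                               max_iter=5000, random_state=0))
        surrogate.fit(X, y)
        print(surrogate.predict([[0.05, 3.0]]))   # near-instant "reanalysis"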

  3. Sensitivity Analysis and Uncertainty Propagation in a General-Purpose Thermal Analysis Code

    SciTech Connect

    Blackwell, Bennie F.; Dowding, Kevin J.

    1999-08-04

    Methods are discussed for computing the sensitivity of field variables to changes in material properties and initial/boundary condition parameters for heat transfer problems. The method we focus on is termed the "Sensitivity Equation Method" (SEM). It involves deriving field equations for the sensitivity coefficients by differentiating the original field equations with respect to the parameters of interest and numerically solving the resulting sensitivity field equations. Uncertainty in the model parameters is then propagated through the computational model using results derived from first-order perturbation theory; this technique is identical to the methodology typically used to propagate experimental uncertainty. Numerical results are presented for the design of an experiment to estimate the thermal conductivity of stainless steel using transient temperature measurements made on prototypical hardware of a companion contact conductance experiment. Comments are made relative to extending the SEM to conjugate heat transfer problems.
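
    A minimal sketch of the SEM idea, using a lumped-parameter cooling model rather than the reference's field equations: differentiating dT/dt = -k(T - T_env) with respect to k yields a companion equation for the sensitivity s = dT/dk, which is integrated alongside the state and then used for first-order uncertainty propagation. All parameter values are illustrative.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Lumped cooling model dT/dt = -k (T - T_env).  Differentiating with
        # respect to k gives the sensitivity equation for s = dT/dk:
        #     ds/dt = -(T - T_env) - k s
        k, T_env, T0 = 0.3, 20.0, 100.0   # illustrative parameter values

        def rhs(t, z):
            T, s = z
            return [-k * (T - T_env), -(T - T_env) - k * s]

        sol = solve_ivp(rhs, (0.0, 10.0), [T0, 0.0])
        T_end, s_end = sol.y[:, -1]

        # First-order uncertainty propagation: sigma_T ~ |dT/dk| * sigma_k
        sigma_k = 0.05
        print(f"T(10) = {T_end:.2f}, dT/dk = {s_end:.2f}, "
              f"sigma_T ~ {abs(s_end) * sigma_k:.2f}")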

  4. Spatial risk assessment for critical network infrastructure using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Möderl, Michael; Rauch, Wolfgang

    2011-12-01

    The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration, or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate the performance decrease under the investigated threat scenarios; parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data for the same threat scenario, derived from structured interviews and cluster analysis of past events. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is likewise applicable to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.
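
    The map-merging step can be sketched as a cell-wise combination of co-registered rasters. The file names are hypothetical, and the simple multiplicative merge is an assumption for illustration; the paper's actual merging rule may differ.

        import numpy as np

        # Hypothetical rasters on a common grid, both normalized to [0, 1].
        vulnerability = np.load("vulnerability.npy")   # from spatial sensitivity analysis
        hazard = np.load("hazard.npy")                 # from interviews/cluster analysis

        risk = vulnerability * hazard                  # cell-wise relative risk (assumed)
        hotspots = np.argwhere(risk > np.quantile(risk, 0.95))
        print(f"{len(hotspots)} grid cells flagged for preventive measures")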

  5. Robust and sensitive video motion detection for sleep analysis.

    PubMed

    Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard

    2014-05-01

    In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by factors of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with only a slight (<1 s) temporal misalignment of the starting time of one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.
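
    For reference, the Matthews correlation coefficient used in the per-frame evaluation is computed from the four confusion-matrix counts; the counts below are made up purely for illustration.

        import numpy as np

        def matthews_cc(tp, tn, fp, fn):
            """Matthews correlation coefficient from confusion-matrix counts."""
            num = tp * tn - fp * fn
            den = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
            return num / den if den else 0.0

        print(matthews_cc(tp=850, tn=9000, fp=120, fn=30))   # made-up frame counts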

  6. Fault sensitivity and wear-out analysis of VLSI systems

    NASA Astrophysics Data System (ADS)

    Choi, Gwan Seung

    1994-07-01

    This thesis describes simulation approaches for conducting fault sensitivity and wear-out failure analysis of VLSI systems. A fault-injection approach to study transient fault impact in VLSI systems is developed. Through simulated fault injection at the device level and subsequent fault propagation at the gate, functional, and software levels, it is possible to identify critical dependability bottlenecks. Techniques to speed up the fault simulation and to perform statistical analysis of fault impact are developed. A wear-out simulation environment is also developed to closely mimic the dynamic sequence of wear-out events in a device through time, to localize weak locations/aspects of the target chip, and to generate the time-to-failure (TTF) distribution of the VLSI chip as a whole. First, an accurate simulation of a target chip and its application code is performed to acquire trace data (real workload) on switch activity. Then, using this switch-activity information, the wear-out of each component in the entire chip is simulated using Monte Carlo techniques.
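
    A toy version of the wear-out Monte Carlo can be sketched as follows. The Weibull parameters, component count, and the mapping from switching activity to wear acceleration are all assumptions for illustration, and the chip is treated as a simple series system.

        import numpy as np

        rng = np.random.default_rng(7)

        # Hypothetical per-component Weibull wear-out, with the scale parameter
        # accelerated by each component's switching activity (from trace data).
        n_components = 200
        activity = rng.uniform(0.1, 1.0, size=n_components)   # relative switch activity
        shape, base_scale = 2.0, 1e5                          # Weibull beta, eta (hours)
        scales = base_scale / activity                        # busier parts wear faster

        # Series-system assumption: the chip fails at the first component failure.
        samples = rng.weibull(shape, size=(10_000, n_components)) * scales
        chip_ttf = samples.min(axis=1)
        print(f"median chip TTF ~ {np.median(chip_ttf):.0f} hours")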

  7. Sensitivity analysis and model reduction of nonlinear differential-algebraic systems. Final progress report

    SciTech Connect

    Petzold, L.R.; Rosen, J.B.

    1997-12-30

    Differential-algebraic equations arise in a wide variety of engineering and scientific problems. Relatively little work has been done regarding sensitivity analysis and model reduction for this class of problems. Efficient methods for sensitivity analysis are required in model development and as an intermediate step in design optimization of engineering processes. Reduced order models are needed for modelling complex physical phenomena like turbulent reacting flows, where it is not feasible to use a fully-detailed model. The objective of this work has been to develop numerical methods and software for sensitivity analysis and model reduction of nonlinear differential-algebraic systems, including large-scale systems. In collaboration with Peter Brown and Alan Hindmarsh of LLNL, the authors developed an algorithm for finding consistent initial conditions for several widely occurring classes of differential-algebraic equations (DAEs). The new algorithm is much more robust than the previous algorithm. It is also very easy to use, having been designed to require almost no information about the differential equation, Jacobian matrix, etc. in addition to what is already needed to take the subsequent time steps. The new algorithm has been implemented in a version of the software for solution of large-scale DAEs, DASPK, which has been made available on the internet. The new methods and software have been used to solve a Tokamak edge plasma problem at LLNL which could not be solved with the previous methods and software because of difficulties in finding consistent initial conditions. The capability of finding consistent initial values is also needed for the sensitivity and optimization efforts described in this paper.
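
    For the simplest case, a semi-explicit index-1 DAE x' = f(x, y), 0 = g(x, y), consistent initialization amounts to solving the algebraic constraint for y0 given x0. The constraint below is illustrative; this Newton-type solve only hints at the more general initialization problem that the DASPK work addresses.

        from scipy.optimize import fsolve

        # Semi-explicit index-1 DAE:  x' = f(x, y),  0 = g(x, y).
        def g(y, x):
            return x**2 + y**3 - 2.0          # illustrative algebraic constraint

        x_init = 1.0
        y_init = fsolve(g, 0.5, args=(x_init,))[0]   # solve g(x_init, y) = 0 for y
        print(f"consistent y0 = {y_init:.6f}, residual = {g(y_init, x_init):.2e}")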

  8. SIRTF primary mirror design, analysis, and testing

    NASA Technical Reports Server (NTRS)

    Sarver, George L., III; Maa, Scott; Chang, LI

    1990-01-01

    The primary mirror assembly (PMA) requirements and concepts for the Space Infrared Telescope Facility (SIRTF) program are discussed. The PMA studies at NASA/ARC resulted in the design of two engineering test articles, the development of a mirror mount cryogenic static load testing system, and the procurement and partial testing of a full scale spherical mirror mounting system. Preliminary analysis and testing of the single arch mirror with conical mount design and the structured mirror with the spherical mount design indicate that the designs will meet all figure and environmental requirements of the SIRTF program.

  9. Neutral density filters with Risley prisms: analysis and design.

    PubMed

    Duma, Virgil-Florin; Nicolov, Mirela

    2009-05-10

    We present the analysis and design of optical attenuators based on double-prism neutral density filters. A comparative study is performed on three possible device configurations; only two have been presented in the literature, and those without their design calculus. The characteristic parameters of this optical attenuator with translating Risley prisms are defined for each of the three setups and their analytical expressions are derived: adjustment scale (attenuation range) and interval, minimum transmission coefficient, and sensitivity. The setups are compared to select the optimal device, and, from this study, the best solution for double-prism neutral density filters, from both a mechanical and an optical point of view, is found to be two identical, symmetrically movable, non-contacting prisms. The design calculus for this optimal device is developed in its essential steps. The parameters of the prisms, particularly their angles, are studied to improve the design, and we demonstrate the maximum attenuation range that this type of attenuator can provide.

  10. Structural analysis at aircraft conceptual design stage

    NASA Astrophysics Data System (ADS)

    Mansouri, Reza

    In the past 50 years, computers have augmented human effort at a tremendous pace, and the aircraft industry is no exception. The industry is more than ever dependent on computing because of its high level of complexity and the increasing need for excellence to survive a highly competitive marketplace. Designers use computers to perform almost every analysis task. In doing so, however, effective, accurate, and easy-to-use classical analytical methods are often forgotten, even though they can be very useful, especially in the early phases of aircraft design, where concept generation and evaluation demand physical visibility of design parameters to support decision making [39, 2004]. Structural analysis methods have been used since the earliest civilizations. Centuries before computers were invented, the pyramids were designed and constructed by the Egyptians around 2000 B.C., the Parthenon was built by the Greeks around 440 B.C., and Dujiangyan was built by the Chinese around 256 B.C. Persepolis, the Hagia Sophia, the Taj Mahal, and the Eiffel Tower are only a few more examples of historical buildings, bridges, and monuments constructed before any advances were made in computer-aided engineering. In the first half of the 20th century, engineers used classical methods to design civil transport aircraft such as the Ford Tri-Motor (1926), Lockheed Vega (1927), Lockheed Model 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938), and Boeing 314 Clipper (1939), all of which became airborne without difficulty. Thus, while advanced numerical methods such as finite element analysis are among the most effective structural analysis methods, classical structural analysis methods can be just as useful, especially during the early phase of fixed-wing aircraft design, where major decisions are made and concept generation and evaluation demand physical visibility of design parameters.

  11. New synthetic routes towards soluble and dissymmetric triphenodioxazine dyes designed for dye-sensitized solar cells.

    PubMed

    Nicolas, Yohann; Allama, Fouzia; Lepeltier, Marc; Massin, Julien; Castet, Frédéric; Ducasse, Laurent; Hirsch, Lionel; Boubegtiten, Zahia; Jonusauskas, Gediminas; Olivier, Céline; Toupance, Thierry

    2014-03-24

    New π-conjugated structures are a constant subject of research in the dye and pigment industry and in organic electronics. In this context, the triphenodioxazine (TPDO) core has often been used as an efficient photostable pigment and has been integrated into air-stable n-type organic field-effect transistors (OFETs). However, little attention has been paid to the TPDO core as a soluble material for optoelectronic devices, possibly because of the harsh synthetic conditions and the insolubility of many of its compounds. To exploit the photostability of TPDO in dye-sensitized solar cells (DSCs), an original synthetic pathway has been established that provides soluble, dissymmetric molecules with a design suitable for DSC sensitizers. The study was pursued through theoretical modeling of the opto-electronic properties, optical and electronic characterization of the dyes, and the fabrication of efficient devices. The discovery of new synthetic pathways opens the way to innovative TPDO designs for materials used in organic electronics.

  12. Analysis of the measurement sensitivity of multidimensional vibrating microprobes

    NASA Astrophysics Data System (ADS)

    van Riel, M. C. J. M.; Bos, E. J. C.; Homburg, F. G. A.

    2014-07-01

    A comparison is made between tactile and vibrating microprobes regarding the measurement of typical high-aspect-ratio microfeatures. It is found that vibrating probes enable the use of styli with higher aspect ratios than tactile probes while still measuring with high sensitivity. In addition to the one-dimensional sensitivity, the directional measurement sensitivity of a vibrating probe is investigated. A vibrating microprobe can perform measurements with high sensitivity in the space spanned by its mode shapes. If the natural frequencies that correspond to these mode shapes differ, the probe shows anisotropic and sub-optimal measurement sensitivity. It is shown that the closer the natural frequencies of the probe are, the better its performance with regard to optimal and isotropic measurement sensitivity. A novel proof-of-principle setup of a vibrating probe with two nearly equal natural frequencies is realized. This system is able to perform measurements with high and isotropic sensitivity.

  13. Accuracy of the domain method for the material derivative approach to shape design sensitivities

    NASA Technical Reports Server (NTRS)

    Yang, R. J.; Botkin, M. E.

    1987-01-01

    Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.

  14. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and subsystem reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and we present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem, and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
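
    To illustrate how a few of these drivers interact, here is a hedged sketch (not the authors' model) of stage reliability with engine-out capability and a catastrophic fraction, assuming independent, identical engines; all numerical values are illustrative.

        from math import comb

        def stage_reliability(n, r, engine_out=0, catastrophic_fraction=0.0):
            """P(stage success) with n engines of reliability r, tolerating up to
            `engine_out` benign shutdowns; a failed engine is catastrophic with
            probability `catastrophic_fraction` (simplified independence model)."""
            p_benign = (1.0 - r) * (1.0 - catastrophic_fraction)  # benign shutdown
            total = 0.0
            for k in range(engine_out + 1):   # exactly k benign shutdowns, rest working
                total += comb(n, k) * (p_benign ** k) * (r ** (n - k))
            return total  # any catastrophic failure is excluded from the sum

        for m in (0, 1):
            print(f"4 engines, engine-out={m}: "
                  f"{stage_reliability(4, 0.995, m, 0.2):.5f}")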

  15. Design Rules for High-Efficiency Quantum-Dot-Sensitized Solar Cells: A Multilayer Approach.

    PubMed

    Shalom, Menny; Buhbut, Sophia; Tirosh, Shay; Zaban, Arie

    2012-09-01

    The effect of multilayer sensitization in quantum-dot (QD)-sensitized solar cells is reported. A series of electrodes consisting of multilayer CdSe QDs was assembled on a compact TiO2 layer. Photocurrent measurements along with internal quantum efficiency calculations reveal similar electron collection efficiency up to a 100 nm thickness of the QD layers. Moreover, the optical density and internal quantum efficiency measurements reveal that the desired surface area of the TiO2 electrode need be increased only by a factor of 17 compared with a compact electrode. We show that sensitizing a low-surface-area TiO2 electrode with QD layers increases the performance of the solar cell, resulting in 3.86% efficiency. These results demonstrate a conceptual difference between the QD-sensitized solar cell and the dye-based system, in which a dye multilayer decreases the cell performance. The utilization of multilayer QDs opens new opportunities for a significant improvement of quantum-dot-sensitized solar cells via innovative cell design.

  16. The Design and Optimization of a Highly Sensitive and Overload-Resistant Piezoresistive Pressure Sensor.

    PubMed

    Meng, Xiawei; Zhao, Yulong

    2016-03-09

    A piezoresistive pressure sensor with a beam-membrane-dual-island structure is developed for micro-pressure monitoring in the field of aviation, which requires high sensitivity and overload resistance. The design, fabrication, and testing of the sensor are presented in this paper. By analyzing the stress distribution of the sensitive elements using the finite element method, a novel structure incorporating sensitive beams with a traditional bossed diaphragm is built up. The proposed structure proves advantageous in terms of high sensitivity and high overload resistance compared with the conventional bossed-diaphragm and flat-diaphragm structures. Curve fittings of surface stress and deflection based on ANSYS simulation results are performed to establish the sensor equations. Fabricated on an n-type single-crystal silicon wafer, the sensor chips are wire-bonded to a printed circuit board (PCB) and packaged for experiments. The static and dynamic characteristics are tested and discussed. Experimental results show that the sensor has a sensitivity as high as 17.339 μV/V/Pa in the range of 500 Pa at room temperature, and a high overload resistance of 200 times overpressure. Owing to this performance, the sensor can be applied to measuring micro-pressures below 500 Pa.

  17. The Design and Optimization of a Highly Sensitive and Overload-Resistant Piezoresistive Pressure Sensor

    PubMed Central

    Meng, Xiawei; Zhao, Yulong

    2016-01-01

    A piezoresistive pressure sensor with a beam-membrane-dual-island structure is developed for micro-pressure monitoring in the field of aviation, which requires high sensitivity and overload resistance. The design, fabrication, and testing of the sensor are presented in this paper. By analyzing the stress distribution of the sensitive elements using the finite element method, a novel structure incorporating sensitive beams with a traditional bossed diaphragm is built up. The proposed structure proves advantageous in terms of high sensitivity and high overload resistance compared with the conventional bossed-diaphragm and flat-diaphragm structures. Curve fittings of surface stress and deflection based on ANSYS simulation results are performed to establish the sensor equations. Fabricated on an n-type single-crystal silicon wafer, the sensor chips are wire-bonded to a printed circuit board (PCB) and packaged for experiments. The static and dynamic characteristics are tested and discussed. Experimental results show that the sensor has a sensitivity as high as 17.339 μV/V/Pa in the range of 500 Pa at room temperature, and a high overload resistance of 200 times overpressure. Owing to this performance, the sensor can be applied to measuring micro-pressures below 500 Pa. PMID:27005627

  18. DARHT: Integration of shielding design and analysis with facility design

    SciTech Connect

    Boudrie, R. L.; Brown, T. H.; Gilmore, W. E.; Downing, J. N. , Jr.; Hack, Alan; McClure, D. A.; Nelson, C. A.; Wadlinger, E. Alan; Zumbro, M. V.

    2002-01-01

    The design of the interior portions of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility incorporated shielding and controls from the beginning of the installation of the accelerators. The purpose of the design and analysis was to demonstrate the adequacy of the shielding or to determine the need for additional shielding or controls. Two classes of events were considered: (1) routine operation, defined as the annual production of 10,000 2000-ns pulses of electrons at a nominal energy of 20 MeV, some of which are converted to the x-ray imaging beam consisting of four nominal 60-ns pulses over the 2000-ns time frame, and (2) the accident case, defined as up to 100 2000-ns pulses of electrons accidentally impinging on a metallic surface, thereby producing x rays. Several locations for both classes of events were considered inside and outside of the accelerator hall buildings. The analysis method consisted of the definition of a source term for each case studied and the definition of a model of the shielding and equipment present between the source and the dose areas. A minimal model of the fixed existing or proposed shielding and equipment structures was used for a first approximation. If the resulting dose from the first approximation was below the design goal (1 rem/yr for routine operations, 5 rem for accident cases), then no further investigations were performed. If the result of the first approximation was above our design goals, the model was refined to include existing or proposed shielding and equipment. In some cases existing shielding and equipment were adequate to meet our goals, and in some cases additional shielding was added or administrative controls were imposed to protect the workers. It is expected that the radiation shielding design, exclusion area designations, and access control features will result in low doses to personnel at the DARHT Facility.

  19. Microstructure design of nanoporous TiO2 photoelectrodes for dye-sensitized solar cell modules.

    PubMed

    Hu, Linhua; Dai, Songyuan; Weng, Jian; Xiao, Shangfeng; Sui, Yifeng; Huang, Yang; Chen, Shuanghong; Kong, Fantai; Pan, Xu; Liang, Linyun; Wang, Kongjia

    2007-01-18

    The optimization of dye-sensitized solar cells, and especially the design of the nanoporous TiO2 film microstructure, is a pressing problem for achieving high efficiency and future commercial applications. Up to now, however, little attention has been focused on the design of the nanoporous TiO2 microstructure for high-efficiency dye-sensitized solar cell modules. The optimization and design of the TiO2 photoelectrode microstructure are discussed in this paper. TiO2 photoelectrodes with three different layers, comprising a small-pore-size film, a larger-pore-size film, and light-scattering particles on the conducting glass with the desired thickness, were designed and investigated. The photovoltaic properties showed that the porosity, pore size distribution, and BET surface area of each layer have a dramatic influence on the short-circuit current, open-circuit voltage, and fill factor of the modules. The optimization and design of the TiO2 photoelectrode microstructure contribute to a high efficiency of DSC modules. A photoelectric conversion efficiency of around 6% has been achieved by our group with 15 x 20 cm2 modules under illumination of simulated AM1.5 sunlight (100 mW/cm2), and 40 x 60 cm2 panels tested outdoors show the same performance.

  20. Highly sensitive index of sympathetic activity based on time-frequency spectral analysis of electrodermal activity.

    PubMed

    Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H

    2016-09-01

    Time-domain indices of electrodermal activity (EDA) have been used as a marker of sympathetic tone. However, they often show high variation between subjects and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments (43). We found a significant increase in the spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA was found to have lower variation and more sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity under certain sympathetic tone-inducing conditions with time-invariant spectral analysis of EDA may lie in its inability to characterize the time-varying dynamics of the sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone based on time-frequency analysis of EDA signals. Its efficacy was tested using experiments designed to elicit sympathetic dynamics. Twelve subjects underwent four tests known to elicit sympathetic tone arousal: cold pressor, tilt table, stand test, and the Stroop task. We hypothesize that a more sensitive measure of sympathetic control can be developed using time-varying spectral analysis. Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain the spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation. Spectral power at frequencies higher than 0.24 Hz was determined to be unrelated to sympathetic dynamics because it comprised less than 5% of the total power. The mean value of time
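
    The fixed-band spectral index that motivates this work (power in roughly 0.045-0.25 Hz) can be sketched with a standard Welch PSD. This is only the time-invariant precursor, not the VFCDM time-frequency index itself; the sampling rate and file name are assumptions.

        import numpy as np
        from scipy.signal import welch
        from scipy.integrate import trapezoid

        fs = 8.0                            # EDA sampling rate in Hz (assumed)
        eda = np.load("eda_trace.npy")      # hypothetical skin-conductance recording

        f, pxx = welch(eda, fs=fs, nperseg=int(120 * fs))   # 2-minute segments
        band = (f >= 0.045) & (f <= 0.25)                   # sympathetic band above
        print(f"EDA sympathetic band power: {trapezoid(pxx[band], f[band]):.4g}")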

  1. Highly sensitive index of sympathetic activity based on time-frequency spectral analysis of electrodermal activity.

    PubMed

    Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H

    2016-09-01

    Time-domain indices of electrodermal activity (EDA) have been used as a marker of sympathetic tone. However, they often show high variation between subjects and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments (43). We found a significant increase in the spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA was found to have lower variation and more sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity under certain sympathetic tone-inducing conditions with time-invariant spectral analysis of EDA may lie in its inability to characterize the time-varying dynamics of the sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone based on time-frequency analysis of EDA signals. Its efficacy was tested using experiments designed to elicit sympathetic dynamics. Twelve subjects underwent four tests known to elicit sympathetic tone arousal: cold pressor, tilt table, stand test, and the Stroop task. We hypothesize that a more sensitive measure of sympathetic control can be developed using time-varying spectral analysis. Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain the spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation. Spectral power at frequencies higher than 0.24 Hz was determined to be unrelated to sympathetic dynamics because it comprised less than 5% of the total power. The mean value of time

  2. DESIGN PACKAGE 1E SYSTEM SAFETY ANALYSIS

    SciTech Connect

    M. Salem

    1995-06-23

    The purpose of this analysis is to systematically identify and evaluate hazards related to the Yucca Mountain Project Exploratory Studies Facility (ESF) Design Package 1E, Surface Facilities (for a list of design items included in the package 1E system safety analysis, see section 3). This process is an integral part of the systems engineering process, whereby safety is considered during planning, design, testing, and construction. A largely qualitative approach was used since a radiological System Safety Analysis is not required. The risk assessment in this analysis characterizes the accident scenarios associated with the Design Package 1E structures/systems/components (S/S/Cs) in terms of relative risk and includes recommendations for mitigating all identified risks. The priority for recommending and implementing mitigation control features is: (1) Incorporate measures to reduce risks and hazards into the structure/system/component design, (2) add safety devices and capabilities to the designs that reduce risk, (3) provide devices that detect and warn personnel of hazardous conditions, and (4) develop procedures and conduct training to increase worker awareness of potential hazards, of methods to reduce exposure to hazards, and of the actions required to avoid accidents or correct hazardous conditions.

  3. Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis

    NASA Technical Reports Server (NTRS)

    Kallman, T.

    2006-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest, the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum-synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be nonlinear and, if so, cannot easily be derived analytically. This talk describes simple numerical experiments designed to examine some of these issues.

  4. CFD-based surrogate modeling of liquid rocket engine components via design space refinement and sensitivity assessment

    NASA Astrophysics Data System (ADS)

    Mack, Yolanda

    Computational fluid dynamics (CFD) can be used to improve the design and optimization of rocket engine components that traditionally rely on empirical calculations and limited experimentation. CFD-based design optimization can be made computationally affordable through the use of surrogate modeling, which can then facilitate additional parameter sensitivity assessments. The present study investigates surrogate-based adaptive design space refinement (DSR) using estimates of surrogate uncertainty to probe the CFD analyses and to perform sensitivity assessments for the complex fluid physics associated with liquid rocket engine components. Three studies were conducted. First, a surrogate-based preliminary design optimization was conducted to improve the efficiency of a compact radial turbine for an expander-cycle rocket engine while maintaining low weight. Design space refinement was used to identify function constraints and to obtain a high-accuracy surrogate model in the region of interest. A merit function formulation for multi-objective design point selection reduced the number of design points by an order of magnitude while maintaining good surrogate accuracy among the best trade-off points. Second, bluff-body-induced flow was investigated to identify the physics and surrogate modeling issues related to the flow's mixing dynamics. Multiple surrogates and DSR were instrumental in identifying designs for which the CFD model was deficient and in helping to pinpoint the nature of the deficiency. Next, a three-dimensional computational model was developed to explore the wall heat transfer of a GO2/GH2 shear coaxial single-element injector. The interactions between turbulent recirculating flow structures, chemical kinetics, and heat transfer are highlighted. Finally, a simplified computational model of multi-element injector flows was constructed to explore the sensitivity of wall heating and combustion efficiency to injector element spacing. Design space refinement
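
    The refinement loop can be sketched generically: fit a surrogate, query its predictive uncertainty, and add the design point where the surrogate is least sure. The Gaussian-process surrogate and one-dimensional toy response below are illustrative stand-ins, not the study's merit-function formulation.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import Matern

        def expensive_cfd(x):
            """Stand-in for a CFD response (e.g., efficiency vs. one design knob)."""
            return np.sin(3 * x) + 0.5 * x

        X = np.array([[0.1], [0.4], [0.9]])      # initial design points
        y = expensive_cfd(X).ravel()

        for _ in range(5):                        # adaptive design space refinement
            gp = GaussianProcessRegressor(kernel=Matern(nu=2.5)).fit(X, y)
            grid = np.linspace(0, 1, 201).reshape(-1, 1)
            _, sd = gp.predict(grid, return_std=True)
            x_new = grid[np.argmax(sd)]           # sample where the surrogate is least sure
            X = np.vstack([X, x_new.reshape(1, 1)])
            y = np.append(y, expensive_cfd(x_new))

        print(f"{len(X)} design points after refinement")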

  5. High-Sensitivity Low-Noise Miniature Fluxgate Magnetometers Using a Flip Chip Conceptual Design

    PubMed Central

    Lu, Chih-Cheng; Huang, Jeff; Chiu, Po-Kai; Chiu, Shih-Liang; Jeng, Jen-Tzong

    2014-01-01

    This paper presents a novel class of miniature fluxgate magnetometers fabricated on a printed circuit board (PCB) substrate and electrically interconnected in a manner similar to the current "flip chip" concept in semiconductor packaging. The sensor is soldered together by flipping a 5 cm × 3 cm PCB substrate onto an identical one; the assembly includes dual magnetic cores, planar pick-up coils, and 3-D excitation coils constructed from planar Cu interconnections patterned on the PCB substrates. The principles and analysis of the fluxgate sensor are introduced first, followed by FEA electromagnetic modeling and simulation of the proposed sensor. Comprehensive characterization experiments on the miniature fluxgate device exhibit favorable results in terms of sensitivity (or "responsivity" for magnetometers) and field noise spectrum. The sensor is driven and characterized by employing the improved second-harmonic detection technique, which enables linear V-B correlation and responsivity verification. In addition, a doubling of the responsivity measured under very-low-frequency (1 Hz) magnetic fields is experimentally demonstrated. The maximum responsivity of 593 V/T occurs at 50 kHz of excitation frequency with the second-harmonic wave of excitation, while the minimum magnetic field noise is found to be 0.05 nT/Hz^1/2 at 1 Hz under the same excitation. In comparison with other miniature planar fluxgates published to date, the fluxgate magnetic sensor with the flip-chip configuration offers advances in both device functionality and fabrication simplicity. More importantly, the novel design can be further extended to a silicon-based micro-fluxgate chip manufactured by emerging CMOS-MEMS technologies, thus enriching its potential range of applications in modern engineering and the consumer electronics market. PMID:25196107

  6. High-sensitivity low-noise miniature fluxgate magnetometers using a flip chip conceptual design.

    PubMed

    Lu, Chih-Cheng; Huang, Jeff; Chiu, Po-Kai; Chiu, Shih-Liang; Jeng, Jen-Tzong

    2014-01-01

    This paper presents a novel class of miniature fluxgate magnetometers fabricated on a printed circuit board (PCB) substrate and electrically interconnected in a manner similar to the current "flip chip" concept in semiconductor packaging. The sensor is soldered together by flipping a 5 cm × 3 cm PCB substrate onto an identical one; the assembly includes dual magnetic cores, planar pick-up coils, and 3-D excitation coils constructed from planar Cu interconnections patterned on the PCB substrates. The principles and analysis of the fluxgate sensor are introduced first, followed by FEA electromagnetic modeling and simulation of the proposed sensor. Comprehensive characterization experiments on the miniature fluxgate device exhibit favorable results in terms of sensitivity (or "responsivity" for magnetometers) and field noise spectrum. The sensor is driven and characterized by employing the improved second-harmonic detection technique, which enables linear V-B correlation and responsivity verification. In addition, a doubling of the responsivity measured under very-low-frequency (1 Hz) magnetic fields is experimentally demonstrated. The maximum responsivity of 593 V/T occurs at 50 kHz of excitation frequency with the second-harmonic wave of excitation, while the minimum magnetic field noise is found to be 0.05 nT/Hz^1/2 at 1 Hz under the same excitation. In comparison with other miniature planar fluxgates published to date, the fluxgate magnetic sensor with the flip-chip configuration offers advances in both device functionality and fabrication simplicity. More importantly, the novel design can be further extended to a silicon-based micro-fluxgate chip manufactured by emerging CMOS-MEMS technologies, thus enriching its potential range of applications in modern engineering and the consumer electronics market. PMID:25196107

  7. High-sensitivity low-noise miniature fluxgate magnetometers using a flip chip conceptual design.

    PubMed

    Lu, Chih-Cheng; Huang, Jeff; Chiu, Po-Kai; Chiu, Shih-Liang; Jeng, Jen-Tzong

    2014-01-01

    This paper presents a novel class of miniature fluxgate magnetometers fabricated on a printed circuit board (PCB) substrate and electrically interconnected in a manner similar to the current "flip chip" concept in semiconductor packaging. The sensor is soldered together by flipping a 5 cm × 3 cm PCB substrate onto an identical one; the assembly includes dual magnetic cores, planar pick-up coils, and 3-D excitation coils constructed from planar Cu interconnections patterned on the PCB substrates. The principles and analysis of the fluxgate sensor are introduced first, followed by FEA electromagnetic modeling and simulation of the proposed sensor. Comprehensive characterization experiments on the miniature fluxgate device exhibit favorable results in terms of sensitivity (or "responsivity" for magnetometers) and field noise spectrum. The sensor is driven and characterized by employing the improved second-harmonic detection technique, which enables linear V-B correlation and responsivity verification. In addition, a doubling of the responsivity measured under very-low-frequency (1 Hz) magnetic fields is experimentally demonstrated. The maximum responsivity of 593 V/T occurs at 50 kHz of excitation frequency with the second-harmonic wave of excitation, while the minimum magnetic field noise is found to be 0.05 nT/Hz^1/2 at 1 Hz under the same excitation. In comparison with other miniature planar fluxgates published to date, the fluxgate magnetic sensor with the flip-chip configuration offers advances in both device functionality and fabrication simplicity. More importantly, the novel design can be further extended to a silicon-based micro-fluxgate chip manufactured by emerging CMOS-MEMS technologies, thus enriching its potential range of applications in modern engineering and the consumer electronics market.
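
    The second-harmonic detection step can be sketched as a digital lock-in: correlate the digitized pick-up signal with quadrature references at twice the excitation frequency and invert the responsivity to get a field estimate. The sample rate, excitation frequency, and file name are assumptions; only the 593 V/T responsivity comes from the records above.

        import numpy as np

        fs, f_exc = 1_000_000, 50_000              # sample rate and excitation (Hz), assumed
        signal = np.load("pickup_coil.npy")        # hypothetical digitized pick-up voltage
        t = np.arange(signal.size) / fs

        # Second-harmonic lock-in: correlate with quadrature references at 2*f_exc.
        ref_i = np.cos(2 * np.pi * 2 * f_exc * t)
        ref_q = np.sin(2 * np.pi * 2 * f_exc * t)
        amp_2f = 2.0 * np.hypot(np.mean(signal * ref_i), np.mean(signal * ref_q))

        b_field = amp_2f / 593.0                   # invert the 593 V/T responsivity
        print(f"estimated field: {b_field * 1e9:.1f} nT")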

  8. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
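
    For concreteness, a variance-based (Sobol-type) analysis of the classic Ishigami test function can be run with the SALib package (assumed installed). This illustrates the first-order versus total-order distinction such approaches characterize; it is not the new approach the abstract proposes.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 3,
                   "names": ["x1", "x2", "x3"],
                   "bounds": [[-np.pi, np.pi]] * 3}

        X = saltelli.sample(problem, 1024)   # Saltelli sampling scheme
        Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
             + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))   # Ishigami function

        Si = sobol.analyze(problem, Y)       # first-order and total indices
        print(dict(zip(problem["names"], np.round(Si["S1"], 2))))
        print(dict(zip(problem["names"], np.round(Si["ST"], 2))))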

  9. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  10. Sensitivity analysis of a two-dimensional probabilistic risk assessment model using analysis of variance.

    PubMed

    Mokhtari, Amirhossein; Frey, H Christopher

    2005-12-01

    This article demonstrates the application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among the many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding the controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in the ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. The sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
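
    The core ANOVA device can be sketched on a toy model: bin each input into levels, group the output by level, and compare one-way F values across inputs. This is a minimal illustration of the idea, not the article's two-dimensional framework; the toy model and bin count are assumptions.

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(3)

        # Toy risk model: output depends strongly on x1, weakly on x2.
        x1, x2 = rng.uniform(size=5000), rng.uniform(size=5000)
        risk = 3.0 * x1 + 0.2 * x2 + rng.normal(scale=0.3, size=5000)

        def anova_f(x, y, bins=5):
            """One-way ANOVA F value with the input binned into equal-width levels."""
            levels = np.digitize(x, np.linspace(0, 1, bins + 1)[1:-1])
            groups = [y[levels == k] for k in range(bins)]
            return f_oneway(*groups).statistic

        print(f"F(x1) = {anova_f(x1, risk):.0f}, F(x2) = {anova_f(x2, risk):.1f}")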

  11. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
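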

  12. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGESBeta

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
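
    A simplified sketch of the elementary-effects idea behind MOAT screening follows (a radial one-at-a-time variant on an invented three-factor toy function, not the SAC-SMA model itself):

```python
import numpy as np

# Invented toy function: x1 and x2 are influential, x3 is deliberately inactive.
def model(x):
    return x[0] + 2.0 * x[1] ** 2

rng = np.random.default_rng(2)
d, r, delta = 3, 40, 0.2            # factors, trajectories, step in the unit cube
ee = np.zeros((r, d))               # elementary effects
for t in range(r):
    x = rng.uniform(0.0, 1.0 - delta, size=d)   # random base point
    y0 = model(x)
    for i in range(d):
        xp = x.copy()
        xp[i] += delta                           # move one factor at a time
        ee[t, i] = (model(xp) - y0) / delta

mu_star = np.abs(ee).mean(axis=0)   # mean |EE|: overall importance
sigma = ee.std(axis=0)              # spread of EE: nonlinearity/interactions
for i in range(d):
    print(f"x{i + 1}: mu* = {mu_star[i]:.2f}, sigma = {sigma[i]:.2f}")
```

    Each trajectory costs d + 1 model runs, so screening of this kind needs only a few hundred model evaluations, consistent with the sample counts quoted above.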

  13. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  14. Pressurized thermal shock probabilistic fracture mechanics sensitivity analysis for Yankee Rowe reactor pressure vessel

    SciTech Connect

    Dickson, T.L.; Cheverton, R.D.; Bryson, J.W.; Bass, B.R.; Shum, D.K.M.; Keeney, J.A.

    1993-08-01

    The Nuclear Regulatory Commission (NRC) requested Oak Ridge National Laboratory (ORNL) to perform a pressurized-thermal-shock (PTS) probabilistic fracture mechanics (PFM) sensitivity analysis for the Yankee Rowe reactor pressure vessel, for the fluences corresponding to the end of operating cycle 22, using a specific small-break loss-of-coolant transient as the loading condition. Regions of the vessel with distinguishing features were to be treated individually -- upper axial weld, lower axial weld, circumferential weld, upper plate spot welds, upper plate regions between the spot welds, lower plate spot welds, and the lower plate regions between the spot welds. The fracture analysis methods used in the analysis of through-clad surface flaws were those contained in the established OCA-P computer code, which was developed during the Integrated Pressurized Thermal Shock (IPTS) Program. The NRC request specified that the OCA-P code be enhanced for this study to also calculate the conditional probabilities of failure for subclad flaws and embedded flaws. The results of this sensitivity analysis provide the NRC with (1) data that could be used to assess the relative influence of a number of key input parameters in the Yankee Rowe PTS analysis and (2) data that can be used for readily determining the probability of vessel failure once a more accurate indication of vessel embrittlement becomes available. This report is designated as HSST report No. 117.

  15. Sensitivity analysis of a Vision 21 coal based zero emission power plant

    NASA Astrophysics Data System (ADS)

    Verma, A.; Rao, A. D.; Samuelsen, G. S.

    The goal of the U.S. Department of Energy's (DOE's) FutureGen project initiative is to develop and demonstrate technology for ultra clean 21st century energy plants that effectively remove environmental concerns associated with the use of fossil fuels for producing electricity, and simultaneously develop highly efficient and cost-effective power plants. The design optimization of an advanced FutureGen plant consisting of an advanced transport reactor (ATR) for coal gasification to generate syngas to fuel an integrated solid oxide fuel cell (SOFC) combined cycle is presented. The overall plant analysis of a baseline system design is performed by identifying the major factors affecting plant performance; these factors are identified through the application of design of experiments (DOEx). A steady state simulation tool is used to perform sensitivity analysis to verify the factors identified through DOEx, and then to perform parametric analysis to identify optimum values for maximum system efficiency. Modifications to the baseline system design are made to attain higher system efficiency and to lower the negative impact of reducing the SOFC operating pressure on system efficiency.
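
    As a sketch of how design of experiments (DOEx) flags the major factors, consider a two-level full factorial on an invented three-factor efficiency surrogate (the factor names and coefficients are purely illustrative, not the plant model used above):

```python
import itertools
import numpy as np

# Hypothetical surrogate for system efficiency (%) vs. coded factors in {-1, +1}:
# SOFC operating pressure, fuel utilization, gasifier temperature.
def efficiency(p, u, t):
    return 50.0 + 3.0 * p + 1.5 * u + 0.2 * t - 0.8 * p * u

levels = [-1, 1]
runs = list(itertools.product(levels, repeat=3))   # 2^3 full factorial design
y = np.array([efficiency(*run) for run in runs])
X = np.array(runs, dtype=float)

# Main effect of each factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(["pressure", "utilization", "temperature"]):
    effect = y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
    print(f"{name}: main effect = {effect:+.2f}")
```

    The small "temperature" effect would be screened out, and the surviving factors would then go to the parametric sensitivity analysis, in the spirit of the strategy described above.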

  16. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGESBeta

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  17. Nonparametric Bounds and Sensitivity Analysis of Treatment Effects

    PubMed Central

    Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.

    2015-01-01

    This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
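
    Approach (i) can be illustrated in its simplest form: worst-case nonparametric bounds on the average treatment effect for a binary outcome, obtained by filling in the unobserved potential outcomes with the extremes 0 and 1 (synthetic data; a sketch of the idea, not the paper's estimators):

```python
import numpy as np

# Worst-case (Manski-type) bounds on the average treatment effect (ATE) for a
# binary outcome: unobserved potential outcomes are set to the extremes 0 and 1.
rng = np.random.default_rng(3)
n = 10_000
t = rng.integers(0, 2, n)                        # observed treatment (not assumed random)
y = rng.binomial(1, np.where(t == 1, 0.6, 0.4))  # observed binary outcome (synthetic)

p_t = t.mean()
m1, m0 = y[t == 1].mean(), y[t == 0].mean()
e1_lo, e1_hi = m1 * p_t, m1 * p_t + (1.0 - p_t)          # bounds on E[Y(1)]
e0_lo, e0_hi = m0 * (1.0 - p_t), m0 * (1.0 - p_t) + p_t  # bounds on E[Y(0)]
print(f"ATE bounds: [{e1_lo - e0_hi:.3f}, {e1_hi - e0_lo:.3f}]  (width = 1)")
```

    These assumption-free bounds always have width one, which is what motivates approach (ii): invoking untestable assumptions to narrow them, and then varying those assumptions in a sensitivity analysis.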

  18. Sensitivity analysis of near-infrared functional lymphatic imaging

    NASA Astrophysics Data System (ADS)

    Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon

    2012-06-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, but the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor, glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.

  19. Sensitivity analysis and optimization of the nuclear fuel cycle

    SciTech Connect

    Passerini, S.; Kazimi, M. S.; Shwageraus, E.

    2012-07-01

    A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: Capacity Factors of the fuel cycle facilities, Spent Fuel Cooling Time, Thermal Reprocessing Introduction Date, and in core and Out-of-core TRU Inventory Requirements for recycling technology. An optimization scheme of the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single and multi-variable and single and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)

  20. Sensitivity analysis of surface runoff generation in urban flood forecasting.

    PubMed

    Simões, N E; Leitão, J P; Maksimović, C; Sá Marques, A; Pina, R

    2010-01-01

    Reliable flood forecasting requires hydraulic models capable of estimating pluvial flooding fast enough to enable successful operational responses. Increased computational speed can be achieved by using a 1D/1D model, since 2D models are too computationally demanding. Further gains can be made by simplifying 1D network models, removing or modifying some secondary elements. The Urban Water Research Group (UWRG) of Imperial College London developed a tool that automatically analyses, quantifies and generates the 1D overland flow network. The overland flow network features (ponds and flow pathways) generated by this methodology depend on the number of sewer network manholes and sewer inlets, as some of the overland flow pathways start at manhole (or sewer inlet) locations. Thus, if a simplified version of the sewer network has fewer manholes (or sewer inlets) than the original one, the overland flow network will consequently be different. This paper compares different overland flow networks generated with different levels of sewer network skeletonisation. Sensitivity analysis is carried out in one catchment area in Coimbra, Portugal, in order to evaluate overland flow network characteristics. PMID:20453333

  1. NASA Multidisciplinary Design and Analysis Fellowship Program

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Mavris, D. N.; Hale, M. A.; DeLaurentis, D.

    1999-01-01

    This report summarizes the results of a multi-year training grant for the development and implementation of a Multidisciplinary Design and Analysis (MDA) Fellowship Program at Georgia Tech. The Program funded the creation of graduate MS and PhD degree programs in aerospace systems design, analysis and integration. It also provided prestigious Fellowships with associated Industry Internships for outstanding engineering students. The graduate program has become the foundation for a vigorous and productive research effort and has produced: 20 MS degrees, 7 Ph.D. degrees, and has contributed to 9 ongoing Ph.D. students. The results of the research are documented in 32 publications (23 of which are included on a companion CDROM) and 4 annual student design reports (included on a companion CDROM). The legacy of this critical funding is the Center for Aerospace Systems Analysis at Georgia Tech which is continuing the graduate program, the research, and the industry internships established by this grant.

  2. Simultaneous analysis and design. [in structural engineering

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1985-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.

  3. Comparison of the sensitivity of mass spectrometry atmospheric pressure ionization techniques in the analysis of porphyrinoids.

    PubMed

    Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold

    2013-10-01

    The chemistry of porphyrinoids depends greatly on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques, electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization, was tested for several porphyrinoids and their metallocomplexes. Electrospray ionization was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly for corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply much relevant information on the methodology of porphyrinoid analysis carried out by mass spectrometry. The information can be useful in designing future MS or liquid chromatography-MS experiments.

  4. A numerical analysis of a deep Mediterranean lee cyclone: sensitivity to mesoscale potential vorticity anomalies

    NASA Astrophysics Data System (ADS)

    Horvath, K.; Ivančan-Picek, B.

    2009-03-01

    A 12-15 November 2004 cyclone on the lee side of the Atlas Mountains and the related occurrence of severe bora along the eastern Adriatic coast are numerically analyzed using the MM5 mesoscale model. Motivated by the fact that sub-synoptic scales are more sensitive to initialization errors and dominate forecast error growth, this study is designed to assess the sensitivity of the mesoscale forecast to the intensity of mesoscale potential vorticity (PV) anomalies. Five sensitivity simulations are performed after subtracting the selected anomalies from the initial conditions, allowing for the analysis of the cyclone intensity and track, and additionally, the associated severe bora in the Adriatic. The results of the ensemble show that the cyclone is highly sensitive to the exact details of the upper-level dynamic forcing. The spread of cyclone intensities is the greatest in the mature phase of the cyclone lifecycle, due to different cyclone advection speeds towards the Mediterranean. However, the diffluence of the cyclone tracks appears to be the greatest during the cyclone movement out of the Atlas lee, prior to the mature stage of cyclone development, most likely due to the predominant upper-level steering control and its influence on the thermal anomaly creation in the mountain lee. Furthermore, it is quantitatively shown that the southern Adriatic bora is more sensitive to cyclone presence in the Mediterranean than bora in the northern Adriatic, due to the unequal influence of the cyclone on the cross-mountain pressure gradient formation. The orographically induced pressure perturbation is strongly correlated with bora in the northern and to a lesser extent in the southern Adriatic, implying the existence of additional controlling mechanisms for bora in the southern part of the basin. In addition, it is shown that the bora intensity in the southern Adriatic is highly sensitive to the precise sub-synoptic pressure distribution in the cyclone itself, indicating a

  5. Sensitivity and uncertainty analysis applied to the JHR reactivity prediction

    SciTech Connect

    Leray, O.; Vaglio-Gaudard, C.; Hudelot, J. P.; Santamarina, A.; Noguere, G.; Di-Salvo, J.

    2012-07-01

    The on-going AMMON program in the EOLE reactor at CEA Cadarache (France) provides experimental results to qualify the HORUS-3D/N neutronics calculation scheme used for the design and safety studies of the new Material Testing Jules Horowitz Reactor (JHR). This paper presents the determination of technological and nuclear data uncertainties on the core reactivity and the propagation of the latter from the AMMON experiment to JHR. The technological uncertainty propagation was performed with a direct perturbation methodology using the 3D French stochastic code TRIPOLI4 and a statistical methodology using the 2D French deterministic code APOLLO2-MOC, which leads to a value of 289 pcm (1σ). The nuclear data uncertainty propagation relies on a sensitivity study on the main isotopes and the use of a retroactive marginalization method applied to the JEFF 3.1.1 ²⁷Al evaluation in order to obtain a realistic multi-group covariance matrix associated with the considered evaluation. This nuclear data uncertainty propagation leads to a K_eff uncertainty of 624 pcm for the JHR core and 684 pcm for the AMMON reference configuration core. Finally, transposition and reduction of the prior uncertainty were made using the representativity method, which demonstrates the similarity of the AMMON experiment with JHR (the representativity factor is 0.95). The final impact of JEFF 3.1.1 nuclear data on the Begin Of Life (BOL) JHR reactivity calculated by HORUS-3D/N V4.0 is a bias of +216 pcm with an associated posterior uncertainty of 304 pcm (1σ). (authors)

  6. Aviation System Analysis Capability Executive Assistant Design

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Villani, James A.; Osman, Mohammed; Godso, David; King, Brent; Ricciardi, Michael

    1998-01-01

    In this technical document, we describe the design developed for the Aviation System Analysis Capability (ASAC) Executive Assistant (EA) Proof of Concept (POC). We describe the genesis and role of the ASAC system, discuss the objectives of the ASAC system and provide an overview of components and models within the ASAC system, and describe the design process and the results of the ASAC EA POC system design. We also describe the evaluation process and results for applicable COTS software. The document has six chapters, a bibliography, three appendices and one attachment.

  7. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)

    Hampton, R. D.; Knospe, C. R.; Allaire, P. E.; Grodsinsky, C. M.

    1994-01-01

    Many acceleration-sensitive, microgravity science experiments will require active vibration isolation from the manned orbiters on which they will be mounted. The isolation problem, especially in the case of a tethered payload, is a complex three-dimensional one that is best suited to modern-control design methods. These methods, although more powerful than their classical counterparts, can nonetheless go only so far in meeting the design requirements for practical systems. Once a tentative controller design is available, it must still be evaluated to determine whether or not it is fully acceptable, and to compare it with other possible design candidates. Realistically, such evaluation will be an inherent part of a necessary iterative design process. In this paper, an approach is presented for applying complex mu-analysis methods to a closed-loop vibration isolation system (experiment plus controller). An analysis framework is presented for evaluating nominal stability, nominal performance, robust stability, and robust performance of active microgravity isolation systems, with emphasis on the effective use of mu-analysis methods.

  8. Economic impact analysis for global warming: Sensitivity analysis for cost and benefit estimates

    SciTech Connect

    Ierland, E.C. van; Derksen, L.

    1994-12-31

    Proper policies for the prevention or mitigation of the effects of global warming require profound analysis of the costs and benefits of alternative policy strategies. Given the uncertainty about the scientific aspects of the process of global warming, in this paper a sensitivity analysis for the impact of various estimates of costs and benefits of greenhouse gas reduction strategies is carried out to analyze the potential social and economic impacts of climate change.

  9. Use of Forward Sensitivity Analysis Method to Improve Code Scaling, Applicability, and Uncertainty (CSAU) Methodology

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau; Nam T. Dinh

    2010-10-01

    Code Scaling, Applicability, and Uncertainty (CSAU) methodology was developed in the late 1980s by the US NRC to systematically quantify reactor simulation uncertainty. Based on the CSAU methodology, Best Estimate Plus Uncertainty (BEPU) methods have been developed and widely used for new reactor designs and existing LWR power uprates. In spite of these successes, several aspects of CSAU have been criticized for further improvement: (1) subjective judgement in the PIRT process; (2) high cost due to heavy reliance on a large experimental database, many expert man-years of work, and very high computational overhead; (3) mixing numerical errors with other uncertainties; (4) grid dependence and use of the same numerical grids for both scaled experiments and real plant applications; (5) user effects. Although a large amount of effort has been devoted to improving the CSAU methodology, the above issues still exist. With the effort to develop next generation safety analysis codes, new opportunities appear to take advantage of new numerical methods, better physical models, and modern uncertainty quantification methods. Forward sensitivity analysis (FSA) directly solves the PDEs for parameter sensitivities (defined as the derivative of the physical solution with respect to any constant parameter). When the parameter sensitivities are available in a new advanced system analysis code, CSAU could be significantly improved: (1) Quantifying numerical errors: new codes which are totally implicit and with higher order accuracy can run much faster, with numerical errors quantified by FSA. (2) Quantitative PIRT (Q-PIRT) to reduce subjective judgement and improve efficiency: treat numerical errors as special sensitivities against other physical uncertainties; only parameters having large uncertainty effects on design criteria are considered. (3) Greatly reducing computational costs for uncertainty quantification by (a) choosing optimized time steps and spatial sizes; (b) using gradient information
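
    The core of FSA, integrating parameter sensitivities alongside the physical solution, can be sketched on a scalar ODE (a toy stand-in for the PDE setting described above; scipy is assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Forward sensitivity analysis (FSA) sketch for dy/dt = -p*y, y(0) = 1:
# augment the state with s = dy/dp, which obeys ds/dt = (df/dy)*s + df/dp.
p, T = 0.8, 5.0

def rhs(t, z):
    y, s = z
    return [-p * y, -p * s - y]   # physical equation, then sensitivity equation

sol = solve_ivp(rhs, (0.0, T), [1.0, 0.0], rtol=1e-8, atol=1e-10)
y_T, s_T = sol.y[:, -1]
print("FSA    dy/dp at T:", s_T)
print("exact -T*exp(-p*T):", -T * np.exp(-p * T))
```

    The sensitivity equation is advanced with the same discretization and step-size control as the physical solution, so solver settings show up directly in s, which is in the spirit of using FSA to quantify numerical error as described above.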

  10. High-sensitivity intravascular photoacoustic imaging of lipid–laden plaque with a collinear catheter design

    PubMed Central

    Cao, Yingchun; Hui, Jie; Kole, Ayeeshik; Wang, Pu; Yu, Qianhuan; Chen, Weibiao; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    A highly sensitive catheter probe is critical to catheter-based intravascular photoacoustic imaging. Here, we present a photoacoustic catheter probe design on the basis of collinear alignment of the incident optical wave and the photoacoustically generated sound wave within a miniature catheter housing for the first time. Such collinear catheter design with an outer diameter of 1.6 mm provided highly efficient overlap between optical and acoustic waves over an imaging depth of >6 mm in D2O medium. Intravascular photoacoustic imaging of lipid-laden atherosclerotic plaque and perivascular fat was demonstrated, where a lab-built 500 Hz optical parametric oscillator outputting nanosecond optical pulses at a wavelength of 1.7 μm was used for overtone excitation of C-H bonds. In addition to intravascular imaging, the presented catheter design will benefit other photoacoustic applications such as needle-based intramuscular imaging. PMID:27121894

  11. High-sensitivity intravascular photoacoustic imaging of lipid-laden plaque with a collinear catheter design.

    PubMed

    Cao, Yingchun; Hui, Jie; Kole, Ayeeshik; Wang, Pu; Yu, Qianhuan; Chen, Weibiao; Sturek, Michael; Cheng, Ji-Xin

    2016-01-01

    A highly sensitive catheter probe is critical to catheter-based intravascular photoacoustic imaging. Here, we present a photoacoustic catheter probe design on the basis of collinear alignment of the incident optical wave and the photoacoustically generated sound wave within a miniature catheter housing for the first time. Such collinear catheter design with an outer diameter of 1.6 mm provided highly efficient overlap between optical and acoustic waves over an imaging depth of >6 mm in D2O medium. Intravascular photoacoustic imaging of lipid-laden atherosclerotic plaque and perivascular fat was demonstrated, where a lab-built 500 Hz optical parametric oscillator outputting nanosecond optical pulses at a wavelength of 1.7 μm was used for overtone excitation of C-H bonds. In addition to intravascular imaging, the presented catheter design will benefit other photoacoustic applications such as needle-based intramuscular imaging. PMID:27121894

  12. DAKOTA Design Analysis Kit for Optimization and Terascale

    SciTech Connect

    Adams, Brian M.; Dalbey, Keith R.; Eldred, Michael S.; Gay, David M.; Swiler, Laura P.; Bohnhoff, William J.; Eddy, John P.; Haskell, Karen

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  13. DAKOTA Design Analysis Kit for Optimization and Terascale

    2010-02-24

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.

  14. Altair Lander Life Support: Design Analysis Cycles 4 and 5

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Curley, Su; Rotter, Henry; Stambaugh, Imelda; Yagoda, Evan

    2011-01-01

    Life support systems are a critical part of human exploration beyond low earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration. When the mission goals and architecture change several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system related analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.

  15. Altair Lander Life Support: Design Analysis Cycles 4 and 5

    NASA Technical Reports Server (NTRS)

    Anderson, Molly; Curley, Su; Rotter, Henry; Yagoda, Evan

    2010-01-01

    Life support systems are a critical part of human exploration beyond low earth orbit. NASA's Altair Lunar Lander team is pursuing efficient solutions to the technical challenges of human spaceflight. Life support design efforts up through Design Analysis Cycle (DAC) 4 focused on finding lightweight and reliable solutions for the Sortie and Outpost missions within the Constellation Program. In DAC-4 and later follow on work, changes were made to add functionality for new requirements accepted by the Altair project, and to update the design as knowledge about certain issues or hardware matured. In DAC-5, the Altair project began to consider mission architectures outside the Constellation baseline. Selecting the optimal life support system design is very sensitive to mission duration. When the mission goals and architecture change several trade studies must be conducted to determine the appropriate design. Finally, several areas of work developed through the Altair project may be applicable to other vehicle concepts for microgravity missions. Maturing the Altair life support system related analysis, design, and requirements can provide important information for developers of a wide range of other human vehicles.

  16. A common control group - optimising the experiment design to maximise sensitivity.

    PubMed

    Bate, Simon; Karp, Natasha A

    2014-01-01

    Methods for choosing an appropriate sample size in animal experiments have received much attention in the statistical and biological literature. Due to ethical constraints the number of animals used is always reduced where possible. However, as the number of animals decreases, the risk of obtaining inconclusive results increases. By using a more efficient experimental design we can, for a given number of animals, reduce this risk. In this paper two popular cases are considered: when planned comparisons are made to compare treatments back to a control, and when researchers plan to make all pairwise comparisons. By using theoretical and empirical techniques we show that for studies where all pairwise comparisons are made the traditional balanced design, as suggested in the literature, maximises sensitivity. For studies that involve planned comparisons of the treatment groups back to the control group, which are inherently more sensitive due to the reduced multiple testing burden, the sensitivity is maximised by increasing the number of animals in the control group while decreasing the number in the treated groups. PMID:25504147
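
    For treatments-versus-control designs, the allocation conclusion above is commonly summarized by the classical square-root rule (control group about sqrt(k) times the size of each of the k treated groups); a small sketch under that assumption, which is consistent with the paper's conclusion but is not its exact method:

```python
import math

# Square-root allocation rule for k treatments each compared back to control
# (a classical rule of thumb, assumed here for illustration).
def allocate(total_n, k):
    per_treated = total_n / (k + math.sqrt(k))
    n_control = math.sqrt(k) * per_treated     # enlarged control group
    return round(n_control), round(per_treated)

n_control, per_treated = allocate(60, 4)       # e.g. 60 animals, 4 treatments
print(f"control: {n_control}, each treated group: {per_treated}")  # 20 and 10
```

    For k = 4 the control group is twice the size of each treated group, whereas a balanced design would put 12 animals in every group.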

  17. Sensitivity comparison of real-time PCR probe designs on a model DNA plasmid.

    PubMed

    Wang, L; Blasic, J R; Holden, M J; Pires, R

    2005-09-15

    We investigated three probe design strategies used in quantitative polymerase chain reaction (PCR) for sensitivity in detection of the PCR amplicon. A plasmid with a 120-bp insert served as the DNA template. The probes were TaqMan, conventional molecular beacon (MB), and shared-stem molecular beacon (ATssMB and GCssMB). A shared-stem beacon probe combines the properties of a TaqMan probe and a conventional molecular beacon. It was found that the overall sensitivities for the four PCR probes are in the order of MB>ATssMB>GCssMB>TaqMan. The fluorescence quantum yield measurements indicate that incomplete or partial enzymatic cleavage catalyzed by Taq polymerase is the likely cause of the low sensitivities of two shared-stem beacons when compared with the conventional beacon probe. A high-fluorescence background associated with the current TaqMan probe sequence contributes to the relatively low detection sensitivity and signal-to-background ratio. The study points out that the nucleotide environment surrounding the reporting fluorophore can strongly affect the probe performance in real-time PCR.

  18. Recent advances in the sensitivity analysis for the thermomechanical postbuckling of composite panels

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.

    1995-01-01

    Three recent developments in the sensitivity analysis for thermomechanical postbuckling response of composite panels are reviewed. The three developments are: (1) effective computational procedure for evaluating hierarchical sensitivity coefficients of the various response quantities with respect to the different laminate, layer, and micromechanical characteristics; (2) application of reduction methods to the sensitivity analysis of the postbuckling response; and (3) accurate evaluation of the sensitivity coefficients to transverse shear stresses. Sample numerical results are presented to demonstrate the effectiveness of the computational procedures presented. Some of the future directions for research on sensitivity analysis for the thermomechanical postbuckling response of composite and smart structures are outlined.

  19. Recent advances in the sensitivity analysis for the thermomechanical postbuckling of composite panels

    NASA Astrophysics Data System (ADS)

    Noor, Ahmed K.

    1995-04-01

    Three recent developments in the sensitivity analysis for thermomechanical postbuckling response of composite panels are reviewed. The three developments are: (1) effective computational procedure for evaluating hierarchical sensitivity coefficients of the various response quantities with respect to the different laminate, layer, and micromechanical characteristics; (2) application of reduction methods to the sensitivity analysis of the postbuckling response; and (3) accurate evaluation of the sensitivity coefficients to transverse shear stresses. Sample numerical results are presented to demonstrate the effectiveness of the computational procedures presented. Some of the future directions for research on sensitivity analysis for the thermomechanical postbuckling response of composite and smart structures are outlined.

  20. Network interface unit design options performance analysis

    NASA Technical Reports Server (NTRS)

    Miller, Frank W.

    1991-01-01

    An analysis is presented of three design options for the Space Station Freedom (SSF) onboard Data Management System (DMS) Network Interface Unit (NIU). The NIU provides the interface from the Fiber Distributed Data Interface (FDDI) local area network (LAN) to the DMS processing elements. The FDDI LAN provides the primary means for command and control and low and medium rate telemetry data transfers on board the SSF. The results of this analysis provide the basis for the implementation of the NIU.

  1. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to the global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to the global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.
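
    The adjoint machinery that makes such wide scenario sweeps affordable can be summarized generically (a sketch, not the specific JPL formulation): for a discretized steady model constraint $A(p)\,x = b(p)$ and a scalar response $J(x)$, a single adjoint solve

    $$A^{T}\lambda = \frac{\partial J}{\partial x}, \qquad \frac{dJ}{dp} = \lambda^{T}\left(\frac{\partial b}{\partial p} - \frac{\partial A}{\partial p}\,x\right)$$

    yields the sensitivity of J (for example, a regional ozone metric) to every component of p (for example, gridded NOx emissions) at once, instead of one forward run per emission parameter.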

  2. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    SciTech Connect

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming; Carnes, Brian; Chen, Ken Shuang

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  3. Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Bischof, c. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.

    1994-01-01

    Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
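
    Forward-mode AD of the kind ADIFOR generates can be shown in miniature with dual numbers (an illustrative Python sketch of the propagation rule, not ADIFOR output):

```python
class Dual:
    """Minimal forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule propagates the derivative alongside the value.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

# d/dx of x*x + 3x at x = 2  ->  2x + 3 = 7
x = Dual(2.0, 1.0)            # seed the derivative of the design variable
y = x * x + 3 * x
print(y.val, y.dot)           # 10.0 7.0
```

    Seeding one design variable at a time corresponds to one derivative sweep per variable, which is why the cost grows with the number of design variables and why the vector and coarse-grained parallel implementations compared above trade off differently as that number grows.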

  4. An analytic formula for H-infinity norm sensitivity with applications to control system design

    NASA Technical Reports Server (NTRS)

    Giesy, Daniel P.; Lim, Kyong B.

    1992-01-01

    An analytic formula for the sensitivity of singular value peak variation with respect to parameter variation is derived. As a corollary, the derivative of the H-infinity norm of a stable transfer function with respect to a parameter is presented. It depends on some of the first two derivatives of the transfer function with respect to frequency and the parameter. For cases when the transfer function has a linear system realization whose matrices depend on the parameter, analytic formulas for these first two derivatives are derived, and an efficient algorithm for calculating them is discussed. Examples are given which provide numerical verification of the H-infinity norm sensitivity formula and which demonstrate its utility in designing control systems satisfying H-infinity norm constraints. In the appendix, derivative formulas for singular values are paraphrased.
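
    The core of such a formula is the standard first-order perturbation result for a simple largest singular value. In a sketch of the idea (u1 and v1 denote the left and right singular vectors of G(jω*, p) at the peak frequency ω*, and superscript H the conjugate transpose):

    $$\frac{\partial}{\partial p}\,\|G\|_{\infty} = \frac{\partial}{\partial p}\,\bar{\sigma}\big(G(j\omega^{*},p)\big) = \operatorname{Re}\left[u_{1}^{H}\,\frac{\partial G(j\omega^{*},p)}{\partial p}\,v_{1}\right]$$

    Stationarity of the peak in ω suppresses the ∂ω*/∂p contribution to first order (an envelope argument), which is consistent with the formula's dependence on first and second frequency derivatives when tracking how the peak itself moves.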

  5. SNS Emittance Scanner, Increasing Sensitivity and Performance through Noise Mitigation, Design, Implementation and Results

    NASA Astrophysics Data System (ADS)

    Pogge, J.

    2006-11-01

    The Spallation Neutron Source (SNS) accelerator systems will deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. The SNS MEBT Emittance Harp consists of 16 X and 16 Y wires, located in close proximity to the RFQ, Source, and MEBT Choppers. Beam studies for source and LINAC commissioning required an overall increase in sensitivity for halo monitoring and measurement, and at the same time several severe noise sources had to be effectively removed from the harp signals. This paper is an overview of the design approach and techniques used in increasing gain and sensitivity while maintaining a large signal-to-noise ratio for the emittance scanner device. It includes a brief discussion of the identification of the noise sources, the mechanisms of transmission and pickup, how the signals were improved, and a summary of results.

  6. SNS Emittance Scanner, Increasing Sensitivity and Performance through Noise Mitigation, Design, Implementation and Results

    SciTech Connect

    Pogge, J.

    2006-11-20

    The Spallation Neutron Source (SNS) accelerator systems will deliver a 1.0 GeV, 1.4 MW proton beam to a liquid mercury target for neutron scattering research. The SNS MEBT Emittance Harp consists of 16 X and 16 Y wires, located in close proximity to the RFQ, Source, and MEBT Choppers. Beam studies for source and LINAC commissioning required an overall increase in sensitivity for halo monitoring and measurement, and at the same time several severe noise sources had to be effectively removed from the harp signals. This paper is an overview of the design approach and techniques used in increasing gain and sensitivity while maintaining a large signal-to-noise ratio for the emittance scanner device. It includes a brief discussion of the identification of the noise sources, the mechanisms of transmission and pickup, how the signals were improved, and a summary of results.

  7. Sorption of redox-sensitive elements: critical analysis

    SciTech Connect

    Strickert, R.G.

    1980-12-01

    The redox-sensitive elements (Tc, U, Np, Pu) discussed in this report are of interest to nuclear waste management due to their long-lived isotopes which have a potential radiotoxic effect on man. In their lower oxidation states these elements have been shown to be highly adsorbed by geologic materials occurring under reducing conditions. Experimental research conducted in recent years, especially through the Waste Isolation Safety Assessment Program (WISAP) and Waste/Rock Interaction Technology (WRIT) program, has provided extensive information on the mechanisms of retardation. In general, ion-exchange probably plays a minor role in the sorption behavior of cations of the above three actinide elements. Formation of anionic complexes of the oxidized states with common ligands (OH⁻, CO₃²⁻) is expected to reduce adsorption by ion exchange further. Pertechnetate also exhibits little ion-exchange sorption by geologic media. In the reduced (IV) state, all of the elements are highly charged and it appears that they form a very insoluble compound (oxide, hydroxide, etc.) or undergo coprecipitation or are incorporated into minerals. The exact nature of the insoluble compounds and the effect of temperature, pH, pe, other chemical species, and other parameters are currently being investigated. Oxidation states other than Tc (IV,VII), U(IV,VI), Np(IV,V), and Pu(IV,V) are probably not important for the geologic repository environment expected, but should be considered especially when extreme conditions exist (radiation, temperature, etc.). Various experimental techniques such as oxidation-state analysis of tracer-level isotopes, redox potential measurement and control, pH measurement, and solid phase identification have been used to categorize the behavior of the various valence states.

  8. Orion Orbit Control Design and Analysis

    NASA Technical Reports Server (NTRS)

    Jackson, Mark; Gonzalez, Rodolfo; Sims, Christopher

    2007-01-01

    The analysis of candidate thruster configurations for the Crew Exploration Vehicle (CEV) is presented. Six candidate configurations were considered for the prime contractor baseline design. The analysis included analytical assessments of control authority, control precision, efficiency and robustness, as well as simulation assessments of control performance. The principles used in the analytic assessments of controllability, robustness and fuel performance are covered and results provided for the configurations assessed. Simulation analysis was conducted using a pulse width modulated, 6 DOF reaction system control law with a simplex-based thruster selection algorithm. Control laws were automatically derived from hardware configuration parameters including thruster locations, directions, magnitude and specific impulse, as well as vehicle mass properties. This parameterized controller allowed rapid assessment of multiple candidate layouts. Simulation results are presented for final phase rendezvous and docking, as well as low lunar orbit attitude hold. Finally, on-going analysis to consider alternate Service Module designs and to assess the pilot-ability of the baseline design are discussed to provide a status of orbit control design work to date.

  9. Analysis to Design: A Technical Training Submethodology.

    ERIC Educational Resources Information Center

    Garavaglia, Paul L.

    1993-01-01

    Describes a submethodology for using information from the analysis phase during the design phase when developing technical training. The development of instructional materials is discussed; Keller's ARCS (Attention, Relevance, Confidence, and Satisfaction) motivation model and Gagne's events of instruction are compared; and the development and use…

  10. Multifidelity Analysis and Optimization for Supersonic Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Willcox, Karen; March, Andrew; Haas, Alex; Rajnarayan, Dev; Kays, Cory

    2010-01-01

    Supersonic aircraft design is a computationally expensive optimization problem, and multifidelity approaches offer a significant opportunity to reduce design time and computational cost. This report presents tools developed to improve supersonic aircraft design capabilities, including: aerodynamic tools for supersonic aircraft configurations; a systematic way to manage model uncertainty; and multifidelity model management concepts that incorporate uncertainty. The aerodynamic analysis tools developed are appropriate for use in a multifidelity optimization framework, and include four analysis routines to estimate the lift and drag of a supersonic airfoil and a multifidelity supersonic drag code that estimates the drag of aircraft configurations with three different methods: an area rule method, a panel method, and an Euler solver. In addition, five multifidelity optimization methods are developed, which include local and global methods as well as gradient-based and gradient-free techniques.

  11. Boundary-element shape sensitivity analysis for thermal problems with nonlinear boundary conditions

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Wang, Hua

    1991-01-01

    Implicit differentiation of the discretized boundary integral equations governing the conduction of heat in solid objects subjected to nonlinear boundary conditions is shown to generate an accurate and economical approach for the computation of shape sensitivities for this class of problems. This approach involves the employment of analytical derivatives of boundary-element kernel functions with respect to shape design variables. A formulation is presented that can consistently account for both temperature-dependent convection and radiation boundary conditions. Several iterative strategies are presented for the solution of the resulting sets of nonlinear equations and the computational performances examined in detail. Multizone analysis and zone condensation strategies are demonstrated to provide substantive computational economies in this process for models with either localized nonlinear boundary conditions or regions of geometric insensitivity to design variables. A series of nonlinear example problems are presented that have closed-form solutions.
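
    The implicit-differentiation step can be written generically (a sketch using standard BEM notation, not necessarily the paper's): with H and G the boundary-element influence matrices, differentiating the discretized system with respect to a shape variable p gives

    $$H(p)\,u(p) = G(p)\,q(p) \quad\Longrightarrow\quad H\,\frac{\partial u}{\partial p} = \frac{\partial G}{\partial p}\,q + G\,\frac{\partial q}{\partial p} - \frac{\partial H}{\partial p}\,u$$

    so, once boundary conditions are applied and the system matrix is factored, each additional shape variable costs essentially one more right-hand-side solve; in the nonlinear case treated above, the same idea sits inside the iterative strategies, and the multizone condensation confines the nonlinear work to the affected zones.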

  12. Design analysis, robust methods, and stress classification

    SciTech Connect

    Bees, W.J.

    1993-01-01

    This special edition publication volume comprises papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25-29, 1993, in Denver, Colorado. The papers were prepared for presentation in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate databases.

  13. Active and passive shielding design optimization and technical solutions for deep sensitivity hard x-ray focusing telescopes

    NASA Astrophysics Data System (ADS)

    Malaguti, G.; Pareschi, G.; Ferrando, P.; Caroli, E.; Di Cocco, G.; Foschini, L.; Basso, S.; Del Sordo, S.; Fiore, F.; Bonati, A.; Lesci, G.; Poulsen, J. M.; Monzani, F.; Stevoli, A.; Negri, B.

    2005-08-01

    The 10-100 keV region of the electromagnetic spectrum holds the potential for a dramatic improvement in our understanding of a number of key problems in high-energy astrophysics. A deep inspection of the universe in this band is, however, still lacking because of the demanding sensitivity (a fraction of a μCrab in the 20-40 keV band for a 1 Ms integration time) and imaging (≈15" angular resolution) requirements. The mission ideas currently being proposed are based on long-focal-length, grazing-incidence, multi-layer optics coupled with focal-plane detectors offering spatial resolution of a few hundred μm. The required large focal lengths, ranging between 8 and 50 m, can be realized by means of extendable optical benches (as foreseen, e.g., for the HEXITSAT, NEXT and NuSTAR missions) or formation-flight scenarios (e.g., Simbol-X and XEUS). While the final telescope design will require a detailed trade-off analysis among all the relevant parameters (focal length, plate scale, angular resolution, field of view, detector size, and sensitivity degradation due to detector dead area and telescope vignetting), extreme attention must be dedicated to background minimization. Key issues in this respect are the passive baffling system, which for large focal lengths requires particular design assessments, and the active/passive shielding geometries and materials. In this work, the results of a study of the expected background for a hard X-ray telescope are presented, and their implications for the required sensitivity are discussed, together with possible implementation design concepts for active and passive shielding in the framework of future satellite missions.

  14. Using EIGER for Antenna Design and Analysis

    NASA Technical Reports Server (NTRS)

    Champagne, Nathan J.; Khayat, Michael; Kennedy, Timothy F.; Fink, Patrick W.

    2007-01-01

    EIGER (Electromagnetic Interactions GenERalized) is a frequency-domain electromagnetics software package built upon a flexible framework designed using object-oriented techniques. The analysis methods used include moment-method solutions of integral equations, finite-element solutions of partial differential equations, and combinations thereof. The framework design permits new analysis techniques (boundary conditions, Green's functions, etc.) to be added to the software suite with reasonable effort. The code has been designed to execute (in serial or parallel) on a wide variety of platforms, from Intel-based PCs to Unix-based workstations. Recently, new potential integration schemes that avoid singularity extraction techniques have been added for integral equation analysis. These new integration schemes are required to facilitate the use of higher-order elements and basis functions. Higher-order elements can model geometrical curvature with fewer elements than linear elements require. Higher-order basis functions are beneficial for simulating structures with rapidly varying fields or currents. Results presented here demonstrate current and future capabilities of EIGER for the analysis of installed antenna system performance in support of NASA's mission of exploration. Examples include antenna coupling within an enclosed environment and antenna analysis on electrically large manned space vehicles.
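
    For readers new to moment-method solutions of integral equations, the sketch below discretizes the classic electrostatic thin-wire problem with pulse basis functions and point matching. It illustrates only the generic technique; it does not use EIGER or its API, and the wire dimensions are arbitrary assumptions.

      import numpy as np

      eps0 = 8.854e-12
      L, a, V = 1.0, 1e-3, 1.0      # wire length (m), radius (m), potential (V)
      N = 40                        # number of pulse-basis segments
      dx = L / N
      x = (np.arange(N) + 0.5) * dx # segment midpoints double as match points

      # Z[m, n]: potential at match point m from unit charge density on
      # segment n, using the reduced thin-wire kernel (the radius a
      # regularizes the self term).
      Z = dx / (4 * np.pi * eps0
                * np.sqrt((x[:, None] - x[None, :])**2 + a**2))

      q = np.linalg.solve(Z, V * np.ones(N))  # charge density on each segment
      print("total charge (C):", q.sum() * dx)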

  15. The HSCT mission analysis of waverider designs

    NASA Technical Reports Server (NTRS)

    1993-01-01

    The grant provided partial support for an investigation of waverider design and analysis with application to High-Speed Civil Transport (HSCT) vehicles. The proposed work comprised the development of the necessary computational fluid dynamics (CFD) tools for the direct simulation of waverider vehicles, the development of two new waverider design methods intended to provide computational speeds and design flexibility never before achieved in waverider design studies, and finally the selection of a candidate waverider-based vehicle and its evaluation for a canonical HSCT mission scenario. This final report reiterates the proposed project objectives in moderate detail and outlines the state of completion of each portion of the study, with references to current and forthcoming publications that resulted from this work.

  16. Lakeside: Merging Urban Design with Scientific Analysis

    ScienceCinema

    Guzowski, Leah; Catlett, Charlie; Woodbury, Ed

    2016-07-12

    Researchers at the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago are developing tools that merge urban design with scientific analysis to improve the decision-making process associated with large-scale urban developments. One such tool, called LakeSim, has been prototyped with an initial focus on consumer-driven energy and transportation demand, through a partnership with the Chicago-based architectural and engineering design firm Skidmore, Owings & Merrill, Clean Energy Trust and developer McCaffery Interests. LakeSim began with the need to answer practical questions about urban design and planning, requiring a better understanding about the long-term impact of design decisions on energy and transportation demand for a 600-acre development project on Chicago's South Side - the Chicago Lakeside Development project.

  17. Lakeside: Merging Urban Design with Scientific Analysis

    SciTech Connect

    Guzowski, Leah; Catlett, Charlie; Woodbury, Ed

    2014-10-08

    Researchers at the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago are developing tools that merge urban design with scientific analysis to improve the decision-making process associated with large-scale urban developments. One such tool, called LakeSim, has been prototyped with an initial focus on consumer-driven energy and transportation demand, through a partnership with the Chicago-based architectural and engineering design firm Skidmore, Owings & Merrill, Clean Energy Trust and developer McCaffery Interests. LakeSim began with the need to answer practical questions about urban design and planning, requiring a better understanding about the long-term impact of design decisions on energy and transportation demand for a 600-acre development project on Chicago's South Side - the Chicago Lakeside Development project.

  18. Analysis of designed experiments with complex aliasing

    SciTech Connect

    Hamada, M.; Wu, C.F.J. )

    1992-07-01

    Traditionally, Plackett-Burman (PB) designs have been used in screening experiments for identifying important main effects. PB designs whose run sizes are not a power of two have been criticized for their complex aliasing patterns, which according to conventional wisdom give confusing results. This paper goes beyond the traditional approach by proposing an analysis strategy that entertains interactions in addition to main effects. Based on the precepts of effect sparsity and effect heredity, the proposed procedure exploits the designs' complex aliasing patterns, thereby turning their 'liability' into an advantage. Demonstration of the procedure on three real experiments shows the potential for extracting important information available in the data that has, until now, been missed. Some limitations are discussed, and extensions to overcome them are given. The proposed procedure also applies to more general mixed-level designs that have become increasingly popular. 16 refs.
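
    A minimal sketch of such a heredity-constrained analysis: build the 12-run PB design from its standard cyclic generator, screen main effects, then score only those two-factor interactions having at least one active parent. The simulated response and screening cutoff below are assumptions for illustration, not the paper's procedure verbatim.

      import numpy as np
      from itertools import combinations

      gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])  # PB12 generator row
      D = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

      rng = np.random.default_rng(0)
      y = 3.0 * D[:, 0] + 2.0 * D[:, 0] * D[:, 1] + rng.normal(0, 0.5, 12)

      # Effect sparsity: screen main effects by the size of their contrasts.
      main = D.T @ y / 12.0
      active = [j for j in range(11) if abs(main[j]) > 1.0]   # assumed cutoff

      # Effect heredity: only entertain interactions with an active parent.
      pairs = [(i, j) for i, j in combinations(range(11), 2)
               if i in active or j in active]
      score = {p: abs((D[:, p[0]] * D[:, p[1]]) @ y / 12.0) for p in pairs}
      print("active main effects:", active)
      print("top interactions:", sorted(score, key=score.get, reverse=True)[:3])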

  19. Shape sensitivity analysis of wing static aeroelastic characteristics

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Bergen, Fred D.

    1988-01-01

    A method is presented to calculate analytically the sensitivity derivatives of wing static aeroelastic characteristics with respect to wing shape parameters. The wing aerodynamic response under fixed total load is predicted with Weissinger's L-method; the structural response is obtained with Giles' equivalent plate method. The characteristics of interest include the spanwise distribution of lift, trim angle of attack, rolling and pitching moments, induced drag, and the divergence dynamic pressure. The shape parameters considered are the wing area, aspect ratio, taper ratio, sweep angle, and tip twist angle. Results of sensitivity studies indicate that: (1) approximations based on analytical sensitivity derivatives can be used over wide ranges of variation of the shape parameters considered, and (2) the analytical calculation of sensitivity derivatives is significantly less expensive than the conventional finite-difference alternative.
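
    The closing cost comparison can be illustrated with the textbook typical-section divergence formula q_D = K_theta / (e * c**2 * a): the closed-form derivative with respect to a shape parameter (here the chord c) is checked against a central finite difference. Parameter values are assumptions, and this toy stands in for the paper's Weissinger/Giles models.

      import numpy as np

      K_theta, e, a = 5000.0, 0.1, 2 * np.pi  # stiffness, axis offset, lift slope

      def q_div(c):                            # chord c as the "shape" parameter
          return K_theta / (e * c**2 * a)

      c0 = 1.5
      analytic = -2.0 * K_theta / (e * c0**3 * a)    # d(q_D)/dc in closed form

      h = 1e-6                                       # finite-difference step
      fd = (q_div(c0 + h) - q_div(c0 - h)) / (2 * h)
      print("analytic:", analytic, "finite difference:", fd)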

  20. Decoupled direct method for sensitivity analysis in combustion kinetics

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1987-01-01

    An efficient, decoupled direct method for calculating the first-order sensitivity coefficients of homogeneous, batch combustion kinetic rate equations is presented. In this method the ordinary differential equations for the sensitivity coefficients are solved separately from, but sequentially with, those describing the combustion chemistry. The ordinary differential equations for the thermochemical variables are solved using an efficient implicit method (LSODE) that automatically selects the steplength and order for each solution step. The solution procedure for the sensitivity coefficients maintains accuracy and stability by using exactly the same steplengths and numerical approximations. The method computes sensitivity coefficients with respect to any combination of the initial values of the thermochemical variables and the three rate-constant parameters of the chemical reactions. The method is illustrated by application to several simple problems and, where possible, comparisons are made with exact solutions and with results obtained by other techniques.
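
    The decoupling can be seen on a single reaction A -> B with rate constant k: advance the chemistry y' = -k*y one implicit step, then advance the sensitivity s = dy/dk, which obeys s' = J*s + df/dk with J = -k and df/dk = -y, using exactly the same steplength and Jacobian. The fixed implicit-Euler step below is a stand-in for LSODE's variable-step, variable-order machinery; parameter values are assumptions.

      import math

      y, s, k, t, dt = 1.0, 0.0, 2.0, 0.0, 0.01
      while t < 1.0:
          y = y / (1.0 + k * dt)             # implicit Euler step for the chemistry
          s = (s - dt * y) / (1.0 + k * dt)  # same step and Jacobian for dy/dk
          t += dt

      print("y:", y, "exact:", math.exp(-k * 1.0))
      print("s = dy/dk:", s, "exact:", -1.0 * math.exp(-k * 1.0))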