Science.gov

Sample records for design sensitivity analysis

  1. Iterative methods for design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, A. D.; Yoon, B. G.

    1989-01-01

    A numerical method is presented for design sensitivity analysis in which an iterative reanalysis of the structure, generated by a small perturbation in a design variable, is combined with a forward-difference scheme to obtain the approximate sensitivity. Algorithms are developed for displacement and stress sensitivities, as well as for eigenvalue and eigenvector sensitivities, and the iterative schemes are modified so that the coefficient matrices are constant and therefore decomposed only once.
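    The constant-coefficient reanalysis idea can be sketched as follows; the 2-DOF spring model, its numbers, and the `stiffness` helper are illustrative assumptions, not the paper's example:

```python
import numpy as np

# Hypothetical 2-DOF spring model: K depends on the design variable k1.
def stiffness(k1, k2=2.0):
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

f = np.array([0.0, 1.0])      # load vector
k1, h = 3.0, 1e-3             # nominal design and small perturbation

K0 = stiffness(k1)
u0 = np.linalg.solve(K0, f)   # nominal analysis (matrix decomposed once)

# Iterative reanalysis of the perturbed structure with a *constant*
# coefficient matrix:  (K0 + dK) u = f  =>  u <- K0^{-1} (f - dK u)
dK = stiffness(k1 + h) - K0
u = u0.copy()
for _ in range(50):
    u = np.linalg.solve(K0, f - dK @ u)

# Forward-difference approximation of the displacement sensitivity du/dk1.
du_dk1 = (u - u0) / h
```

In a production code the factorization of `K0` would be saved and reused for every iteration and every design variable, which is the source of the method's economy.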

  2. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best exploit the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable with the method. The results of design sensitivity analysis are then used to carry out design optimization of a built-up structure.

  3. Optimal control concepts in design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Belegundu, Ashok D.

    1987-01-01

    A close link is established between open loop optimal control theory and optimal design by noting certain similarities in the gradient calculations. The resulting benefits include a unified approach, together with physical insights in design sensitivity analysis, and an efficient approach for simultaneous optimal control and design. Both matrix displacement and matrix force methods are considered, and results are presented for dynamic systems, structures, and elasticity problems.

  4. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to condense a three-dimensional model exactly, and then iterate on the reduced model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents results from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.
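    The exact condensation step can be illustrated with a Schur complement on a partitioned linear system; the 3x3 matrix and the split into retained and condensed degrees of freedom below are illustrative assumptions, not BEA data:

```python
import numpy as np

# Toy system partitioned into "retained" (r) DOFs on the changing part of the
# design and "condensed" (c) DOFs on the fixed substructure.
A = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
b = np.array([1.0, 0.0, 0.5])
r = [0]          # retained DOFs (design region)
c = [1, 2]       # condensed substructure DOFs

Arr = A[np.ix_(r, r)]; Arc = A[np.ix_(r, c)]
Acr = A[np.ix_(c, r)]; Acc = A[np.ix_(c, c)]

# Exact condensation: Schur complement of the substructure block.
S   = Arr - Arc @ np.linalg.solve(Acc, Acr)
g   = b[r] - Arc @ np.linalg.solve(Acc, b[c])
x_r = np.linalg.solve(S, g)                    # cheap reduced-model solve
x_c = np.linalg.solve(Acc, b[c] - Acr @ x_r)   # recover condensed DOFs
```

Only the reduced system `S x_r = g` needs to be re-solved inside the optimization loop; the condensed block is factored once.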

  5. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It is shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or maintaining a separate database. Conventional (sizing) design parameters, such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components, are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  6. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment for performing design optimization in general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including database, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software packages have been integrated into the DSO to support the design procedures. Instead of parameterizing a finite element model, design parameters are defined on a geometric model associated with physical quantities, and continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  7. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than traditional methods.
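    The first-order "predicted flow" used during the one-dimensional search can be sketched with a cheap stand-in objective; the quadratic function and numbers below are assumptions, not the paper's scramjet case:

```python
import numpy as np

# Stand-in objective and its gradient (the "sensitivity coefficients").
def F(x):     return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2
def gradF(x): return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

x0 = np.array([0.0, 0.0])
d = -gradF(x0)                               # search (descent) direction

# First-order Taylor prediction along the line: no flow re-analysis needed.
F_pred = lambda a: F(x0) + a * (gradF(x0) @ d)

a = 0.1
approx, exact = F_pred(a), F(x0 + a * d)     # compare prediction vs full eval
```

Each trial step in the line search then costs one dot product instead of a full CFD solve, at the price of a first-order approximation error.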

  8. Aeroacoustic sensitivity analysis and optimal aeroacoustic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Hall, Kenneth C.

    1994-01-01

    During the first year of the project, we have developed a theoretical analysis - and wrote a computer code based on this analysis - to compute the sensitivity of unsteady aerodynamic loads acting on airfoils in cascades due to small changes in airfoil geometry. The steady and unsteady flow through a cascade of airfoils is computed using the full potential equation. Once the nominal solutions have been computed, one computes the sensitivity. The analysis takes advantage of the fact that LU decomposition is used to compute the nominal steady and unsteady flow fields. If the LU factors are saved, then the computer time required to compute the sensitivity of both the steady and unsteady flows to changes in airfoil geometry is quite small. The results to date are quite encouraging, and may be summarized as follows: (1) The sensitivity procedure has been validated by comparison with 'finite difference' results, that is, by computing the flow with the nominal flow solver for two slightly different airfoils and differencing the results. The 'analytic' solution computed using the method developed under this grant and the finite difference results are found to be in almost perfect agreement. (2) The present sensitivity analysis is computationally much more efficient than finite difference techniques. We found that, using a 129 by 33 node computational grid, the present sensitivity analysis can compute the steady flow sensitivity about ten times more efficiently than the finite difference approach. For the unsteady flow problem, the present sensitivity analysis is about two and one-half times as fast as the finite difference approach. We expect that the relative efficiencies will be even larger for the finer grids which will be used to compute high frequency aeroacoustic solutions. Computational results show that the sensitivity analysis is valid for small to moderate sized design perturbations. (3) We found that the sensitivity analysis provided important
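    The LU-reuse idea can be sketched on a small linear "flow" model: factor the operator once, then back-substitute cheaply for each geometry sensitivity. The Doolittle factorization, the 2x2 system, and the geometry dependence below are illustrative assumptions:

```python
import numpy as np

def lu_factor(A):
    """Doolittle LU without pivoting (fine for this diagonally dominant demo)."""
    n = len(A); L, U = np.eye(n), A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    y = np.zeros(len(b))
    for i in range(len(b)):                       # forward substitution
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(len(b))
    for i in reversed(range(len(b))):             # back substitution
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# Linear "flow" model R(q; g) = A q - f(g) = 0, with geometry parameters g.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = lambda g: np.array([g[0], g[0] * g[1]])
g0 = np.array([1.0, 2.0])

L, U = lu_factor(A)                  # factor once for the nominal solve...
q0 = lu_solve(L, U, f(g0))
# ...then reuse the saved factors for every sensitivity right-hand side:
dq_dg = [lu_solve(L, U, rhs) for rhs in
         (np.array([1.0, g0[1]]),    # df/dg0
          np.array([0.0, g0[0]]))]   # df/dg1
```

Each additional design variable costs only one forward/back substitution, which is why saved factors make the sensitivity step "quite small" in cost.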

  9. Spectrograph sensitivity analysis: an efficient tool for different design phases

    NASA Astrophysics Data System (ADS)

    Genoni, M.; Riva, M.; Pariani, G.; Aliverti, M.; Moschetti, M.

    2016-08-01

    In this paper we present an efficient tool developed to perform opto-mechanical tolerance and sensitivity analysis for both the preliminary and final design phases of a spectrograph. With this tool it is possible to evaluate the effect of a mechanical perturbation of each single spectrograph optical element in terms of image stability, i.e., the motion of the echellogram on the spectrograph focal plane, and of image quality, i.e., the spot size at the different echellogram wavelengths. We present the MATLAB-Zemax script architecture of the tool. In addition, we present detailed results concerning its application to the sensitivity analysis of the ESPRESSO spectrograph (the Echelle Spectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, which will soon be installed on ESO's Very Large Telescope) in the framework of the upcoming assembly, alignment and integration phases.

  10. Design Parameters Influencing Reliability of CCGA Assembly: A Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Tasooji, Amaneh; Ghaffarian, Reza; Rinaldi, Antonio

    2006-01-01

    Area array microelectronic packages with small pitch and large I/O counts are now widely used in microelectronics packaging. The impact of various package design and materials/process parameters on reliability has been studied through extensive literature review. The reliability of Ceramic Column Grid Array (CCGA) package assemblies has been evaluated using JPL thermal cycle test results (-50°/75°C, -55°/100°C, and -55°/125°C), as well as those reported by other investigators. A sensitivity analysis has been performed using the literature data to study the impact of design parameters and global/local stress conditions on assembly reliability. The applicability of various life-prediction models for CCGA designs has been investigated by comparing the models' predictions with the experimental thermal cycling data. Finite Element Method (FEM) analysis has been conducted to assess the state of stress/strain in CCGA assemblies under different thermal cycling conditions, and to explain the different failure modes and locations observed in JPL test assemblies.

  11. SENSIT: a cross-section and design sensitivity and uncertainty analysis code [in FORTRAN for CDC-7600, IBM 360]

    SciTech Connect

    Gerstl, S.A.W.

    1980-01-01

    SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
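    The uncertainty propagation step, folding a sensitivity profile with a cross-section covariance matrix, reduces to the "sandwich rule" of generalized perturbation theory. The three-group numbers below are illustrative, not SENSIT data:

```python
import numpy as np

# Sandwich rule: for an integral response R with sensitivity profile s
# (dR/R per unit relative cross-section change, by energy group) and
# relative covariance matrix C, the relative variance is  var(R) = s^T C s.
s = np.array([0.8, -0.3, 0.1])            # relative sensitivities, 3 groups
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])        # relative covariance matrix

var = s @ C @ s
std = np.sqrt(var)                        # estimated relative std. deviation
```

The off-diagonal covariance terms matter: here the negative group-2 sensitivity partially cancels against its correlation with group 1.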

  12. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  13. Sensitivity analysis

    MedlinePlus

    Sensitivity analysis determines the effectiveness of antibiotics against microorganisms (germs) ...

  14. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. 
To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide

  15. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  16. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis, using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
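    For the compliance functional the adjoint calculation indeed needs postprocessing data only, because compliance is self-adjoint: with K(t)u = f and C = f^T u, the sensitivity is dC/dt = -u^T (dK/dt) u. A minimal sketch with an assumed 2-DOF model (not IFAD output):

```python
import numpy as np

# Toy 2-DOF model whose stiffness scales linearly with a thickness t.
def K(t):
    return t * np.array([[3.0, -1.0], [-1.0, 1.0]])

f = np.array([0.0, 1.0])
t0 = 2.0
u = np.linalg.solve(K(t0), f)     # displacement from the existing FE analysis

# Adjoint sensitivity of compliance C = f^T u: no extra linear solve needed,
# since for compliance the adjoint variable equals u itself.
dK_dt = K(1.0)                    # dK/dt, exact because K is linear in t
dC_dt = -u @ dK_dt @ u
```

The only inputs are the displacement vector and the stiffness derivative, both available as postprocessing data outside the FE code, which is the point of the paper's approach.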

  17. Design tradeoff studies and sensitivity analysis, appendix B

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Further work was performed on the Near Term Hybrid Passenger Vehicle Development Program. Fuel economy on the order of 2 to 3 times that of a conventional vehicle, with a comparable life cycle cost, is possible. The two most significant factors in keeping the life cycle cost down are the retail price increment and the ratio of battery replacement cost to battery life. Both factors can be reduced by reducing the power rating of the electric drive portion of the system relative to the system power requirements. The type of battery most suitable for the hybrid, from the point of view of minimizing life cycle cost, is nickel-iron. In terms of the reduction in total fuel consumption and the resultant decrease in operating expense, the hybrid is much less sensitive than a conventional vehicle to reductions in vehicle weight, tire rolling resistance, etc., and to propulsion system and drivetrain improvements designed to improve the brake specific fuel consumption of the engine under low road load conditions. It is concluded that modifications to package the propulsion system and battery pack can be easily accommodated within the confines of a modified carryover body such as the Ford LTD.

  18. On 3-D modeling and automatic regridding in shape design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Yao, Tse-Min

    1987-01-01

    The material derivative idea of continuum mechanics and the adjoint variable method of design sensitivity analysis are used to obtain a computable expression for the effect of shape variations on measures of structural performance of three-dimensional elastic solids.

  19. Sensitivity analysis and multidisciplinary optimization for aircraft design - Recent advances and results

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    Optimization by decomposition, complex system sensitivity analysis, and the rapid growth of disciplinary sensitivity analysis are some of the recent developments that hold promise of a quantum jump in the support engineers receive from computers in the quantitative aspects of design. A review of the salient points of these techniques is given and illustrated by examples from aircraft design, viewed as a process that combines the best of human intellect and computer power to manipulate data.

  1. Design sensitivity analysis of dynamic responses for a BLDC motor with mechanical and electromagnetic interactions

    NASA Astrophysics Data System (ADS)

    Im, Hyungbin; Bae, Dae Sung; Chung, Jintai

    2012-04-01

    This paper presents a design sensitivity analysis of the dynamic responses of a BLDC motor with mechanical and electromagnetic interactions. Based on the equations of motion, which consider the mechanical and electromagnetic interactions of the motor, the sensitivity equations for the dynamic responses were derived by applying the direct differentiation method. From the sensitivity equations along with the equations of motion, the time responses for the sensitivity analysis were obtained using the Newmark time integration method. The sensitivities of the motor performances, such as the electromagnetic torque, rotating speed, and vibration level, were analyzed for the six design parameters of rotor mass, shaft/bearing stiffness, rotor eccentricity, winding resistance, coil turn number, and residual magnetic flux density. Furthermore, to achieve a higher torque, higher speed, and lower vibration level, a new BLDC motor was designed by applying the multi-objective function method. It was found that all three performances are sensitive to the design parameters in the order of coil turn number, magnetic flux density, rotor mass, winding resistance, rotor eccentricity, and stiffness. It was also found that the torque and vibration level are more sensitive to the parameters than the rotating speed. Finally, by applying the sensitivity analysis results, a new optimized design of the motor achieved better performance: the newly designed motor showed improved torque, rotating speed, and vibration level.

  2. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
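    The MPP search can be illustrated with the classic HL-RF fixed-point iteration in standard normal space; the linear limit state below is an assumed example, not the paper's:

```python
import math
import numpy as np

# Limit state in standard normal space; failure when g(u) <= 0.
# This linear g is an illustrative assumption, chosen so the answer is known.
def g(u):      return 3.0 - u[0] - u[1]
def grad_g(u): return np.array([-1.0, -1.0])

u = np.zeros(2)
for _ in range(20):                      # HL-RF fixed-point iteration
    gr = grad_g(u)
    u = (gr @ u - g(u)) / (gr @ gr) * gr # project onto linearized limit state

beta = np.linalg.norm(u)                 # reliability index = distance to MPP
pf = 0.5 * math.erfc(beta / math.sqrt(2.0))   # FORM estimate: Pf = Phi(-beta)
```

The minimum distance `beta` is the optimization result whose derivatives the paper uses for reliability sensitivity; for this linear limit state the iteration converges in one step.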

  3. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W.; Gumbert, Clyde R.; Newman, Perry A.

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The optimal solutions associated with the MPP provide measurements related to safety probability. This study focuses on two commonly used approximate probability integration methods, i.e., the Reliability Index Approach (RIA) and the Performance Measurement Approach (PMA). Their reliability sensitivity equations are first derived in this paper, based on the derivatives of their respective optimal solutions. Examples are then provided to demonstrate the use of these derivatives for better reliability analysis and Reliability-Based Design Optimization (RBDO).

  4. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
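    ADIFOR transforms Fortran source code; the underlying forward-mode idea can be mimicked in a few lines with operator-overloaded dual numbers (an analogy only, not how ADIFOR works internally):

```python
# Forward-mode automatic differentiation with dual numbers: each value carries
# its derivative, and arithmetic propagates both exactly (no finite-difference
# truncation error).
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.der * o.val + self.val * o.der)  # product rule
    __rmul__ = __mul__

def response(x):
    return 3 * x * x + 2 * x + 1      # stand-in structural response function

x = Dual(2.0, 1.0)                    # seed derivative dx/dx = 1
y = response(x)                       # y.val = f(2), y.der = f'(2)
```

The same unmodified `response` code yields both the value and its exact derivative, which is the property the paper exploits for shape and non-shape sensitivities.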

  5. Variational Methods in Design Optimization and Sensitivity Analysis for Two-Dimensional Euler Equations

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.

    1997-01-01

    Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations together with the converged solution of the costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods yield a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  6. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  7. Sensitivity analysis based preform die shape design using the finite element method

    NASA Astrophysics Data System (ADS)

    Zhao, G. Q.; Hufi, R.; Hutter, A.; Grandhi, R. V.

    1997-06-01

    This paper uses a finite element-based sensitivity analysis method to design the preform die shape for metal forming processes. The sensitivity analysis was developed using the rigid visco-plastic finite element method. The preform die shapes are represented by cubic B-spline curves. The control points or coefficients of the B-spline are used as the design variables. The optimization problem is to minimize the difference between the realized and the desired final forging shapes. The sensitivity analysis includes the sensitivities of the objective function, nodal coordinates, and nodal velocities with respect to the design variables. The remeshing procedure and the interpolation/transfer of the history-dependent parameters are considered. An adjustment of the volume loss resulting from the finite element analysis is used to make the workpiece volume consistent in each optimization iteration and improve the optimization convergence. In addition, a technique for dealing with fold-over defects during the forming simulation is employed in order to continue the optimization procedure of the preform die shape design. The method developed in this paper is used to design the preform die shape for both plane strain and axisymmetric deformations with shaped cavities. The analysis shows that satisfactory final forging shapes are obtained using the optimized preform die shapes.
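    Representing the die profile by a cubic B-spline makes the geometry linear in the control-point design variables, so the curve's sensitivity to each control point is simply a basis-function weight. A sketch with an assumed uniform parameterization and control polygon (not the paper's die):

```python
import numpy as np

# Uniform cubic B-spline segment in matrix form: S(t) = T(t) M P, t in [0,1].
M = np.array([[-1,  3, -3, 1],
              [ 3, -6,  3, 0],
              [-3,  0,  3, 0],
              [ 1,  4,  1, 0]]) / 6.0

def spline_segment(P, t):
    """Point on the segment defined by 4 consecutive control points."""
    T = np.array([t ** 3, t ** 2, t, 1.0])
    return T @ M @ P

# Control points of the die profile: these are the design variables.
P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.0], [3.0, 0.0]])
mid = spline_segment(P, 0.5)

# The curve is linear in P, so d(S)/d(P_i) is just the i-th basis weight:
basis = np.array([0.5 ** 3, 0.25, 0.5, 1.0]) @ M
```

This linearity is what makes the control points convenient design variables: shape derivatives with respect to them are exact and cheap.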

  8. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation, together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two-dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three-dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness, camber, and planform design variables.
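    The 'delta' or correction form amounts to driving the solution with the residual through an approximate operator that is cheap to invert. A minimal sketch, using a Jacobi-style diagonal splitting as the approximate operator (the paper's operator is the spatially split approximate factorization, not this):

```python
import numpy as np

def incremental_solve(A, b, M, tol=1e-10, maxit=500):
    """Iterate the correction form  M dx = b - A x,  x <- x + dx,
    where M is an easily inverted approximation of A."""
    x = np.zeros_like(b)
    for _ in range(maxit):
        r = b - A @ x                 # residual drives the correction
        if np.linalg.norm(r) < tol:
            break
        x = x + np.linalg.solve(M, r)
    return x

# Illustrative diagonally dominant system; M is the diagonal of A
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = incremental_solve(A, b, np.diag(np.diag(A)))
```

    The key point in the abstract carries over: the iteration converges on the residual even when the same splitting applied to the standard (non-incremental) form behaves poorly.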

  9. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty is studied. Based on the sensitivity analysis, an optimal design procedure is proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables, and a high tension level in particular can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on antenna performance.

  10. Value-Driven Design and Sensitivity Analysis of Hybrid Energy Systems using Surrogate Modeling

    SciTech Connect

    Wenbo Du; Humberto E. Garcia; William R. Binder; Christiaan J. J. Paredis

    2001-10-01

    A surrogate modeling and analysis methodology is applied to study dynamic hybrid energy systems (HES). The effect of battery size on the smoothing of variability in renewable energy generation is investigated. Global sensitivity indices calculated using surrogate models show the relative sensitivity of system variability to dynamic properties of key components. A value maximization approach is used to consider the tradeoff between system variability and required battery size. Results are found to be highly sensitive to the renewable power profile considered, demonstrating the importance of accurate renewable resource modeling and prediction. The documented computational framework and preliminary results represent an important step towards a comprehensive methodology for HES evaluation, design, and optimization.
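    Global sensitivity indices of the kind mentioned above can be estimated by a Saltelli-type pick-freeze Monte Carlo scheme once a cheap surrogate is available. The sketch below uses an invented two-input linear model in place of the HES surrogate; for it, the exact first-order indices are 0.8 and 0.2.

```python
import numpy as np

def sobol_first_order(f, d, n=20000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices (pick-freeze
    estimator) for a model f with d independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))     # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # swap only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Illustrative stand-in model: output variability dominated by input 0
S = sobol_first_order(lambda X: 2.0 * X[:, 0] + 1.0 * X[:, 1], d=2)
```

    The surrogate is what makes this affordable: each index needs an extra batch of model evaluations, which is cheap on a fitted surrogate but not on the dynamic HES simulation itself.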

  11. Adjoint design sensitivity analysis of reduced atomic systems using generalized Langevin equation for lattice structures

    SciTech Connect

    Kim, Min-Geun; Jang, Hong-Lae; Cho, Seonho

    2013-05-01

    An efficient adjoint design sensitivity analysis method is developed for reduced atomic systems. A reduced atomic system and the adjoint system are constructed in a locally confined region, utilizing generalized Langevin equation (GLE) for periodic lattice structures. Due to the translational symmetry of lattice structures, the size of time history kernel function that accounts for the boundary effects of the reduced atomic systems could be reduced to a single atom’s degrees of freedom. For the problems of highly nonlinear design variables, the finite difference method is impractical for its inefficiency and inaccuracy. However, the adjoint method is very efficient regardless of the number of design variables since one additional time integration is required for the adjoint GLE. Through numerical examples, the derived adjoint sensitivity turns out to be accurate and efficient through the comparison with finite difference sensitivity.

  12. Sensitivity Analysis of Design Variables to Optimize the Performance of the USV

    NASA Astrophysics Data System (ADS)

    Cao, Xue; Wei, Zifan; Yang, Songlin; Wen, Yiyan

    Optimization is an important part of the design of an Unmanned Surface Vehicle (USV). In this paper, considering the rapidity, maneuverability, seakeeping, and rollover resistance performance of the USV, the design variables of the optimization system are determined and a mathematical model for comprehensive optimization of the USV is established. Integrated multi-objective, multi-constraint optimization design is achieved by computer programs. However, each design variable influences the final optimization results to a different degree, so sensitivity studies of the design variables are crucial for determining the influence of each variable and identifying the key variables for further optimization analysis. To address this, a C++ program based on a genetic algorithm was written, and five discrete variables were selected to study the sensitivity of the optimization. The results show that different design variables have different effects on the optimization. The length of the ship and the propeller speed have the greatest effect on the total objective function. The propeller speed has a greater impact on both rapidity and seakeeping. The ship length L, molded breadth B, draft T, and design speed Vs show greater sensitivity for maneuverability. Also, the molded breadth B has the greatest effect on rollover resistance.
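    The kind of genetic-algorithm search used above can be sketched in a few lines. The authors' implementation was in C++; the version below is a bare-bones Python stand-in (tournament selection, arithmetic crossover, Gaussian mutation) on an invented two-variable objective, not the USV model.

```python
import random

def genetic_minimize(f, bounds, pop=40, gens=100, pmut=0.2, seed=1):
    """Bare-bones real-coded genetic algorithm: tournament selection,
    arithmetic crossover, clipped Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    dim = len(bounds)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(P, key=f)
        nxt = scored[:2]                       # elitism: keep the best two
        while len(nxt) < pop:
            a = min(rng.sample(scored, 3), key=f)   # tournament of 3
            b = min(rng.sample(scored, 3), key=f)
            w = rng.random()
            child = [w * u + (1 - w) * v for u, v in zip(a, b)]
            if rng.random() < pmut:            # mutate one gene, keep in bounds
                j = rng.randrange(dim)
                lo, hi = bounds[j]
                child[j] = min(hi, max(lo, child[j] + rng.gauss(0, 0.1 * (hi - lo))))
            nxt.append(child)
        P = nxt
    return min(P, key=f)

# Hypothetical objective with optimum at (1, -2)
best = genetic_minimize(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

    A sensitivity study then reruns such a search while sweeping one design variable at a time and records how the converged objective shifts.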

  13. Stratospheric Airship Design Sensitivity

    NASA Astrophysics Data System (ADS)

    Smith, Ira Steve; Fortenberry, Michael; Noll, James; Perry, William

    2012-07-01

    The concept of a stratospheric or high altitude powered platform has been around almost as long as stratospheric free balloons. Airships are defined as Lighter-Than-Air (LTA) vehicles with propulsion and steering systems. Over the past five years there has been increased interest by the U.S. Department of Defense as well as commercial enterprises in airships at all altitudes. One of these interests is in the area of stratospheric airships. Whereas the DoD is primarily interested in downward-looking applications, such vehicles also offer a platform for science applications, both downward and outward looking. Designing airships to operate in the stratosphere is very challenging due to the extreme high altitude environment; it differs significantly from low-altitude airship design as seen in the familiar advertising or tourism airships and blimps. The stratospheric airship design is very dependent on the specific application and the particular requirements levied on the vehicle, with mass and power limits. The design is a complex iterative process and is sensitive to many factors. In an effort to identify the key factors that have the greatest impacts on the design, a parametric analysis of a simplified airship design has been performed. The results of these studies are presented.
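    One reason the design iterates is that the envelope must lift its own mass, which grows with its surface area as the volume grows. A toy fixed-point sizing loop illustrates this coupling; every number below (densities, areal density, the sphere-like area scaling) is an illustrative assumption, not data from the paper.

```python
def required_volume(payload_kg, rho_air, rho_gas, areal_density=0.1,
                    tol=1e-6, maxit=200):
    """Fixed-point envelope sizing: buoyant lift must carry the payload
    plus an envelope whose mass scales with surface area (~ V^(2/3))."""
    dlift = rho_air - rho_gas            # net lift per cubic meter
    V = payload_kg / dlift               # first guess: ignore envelope mass
    for _ in range(maxit):
        area = 4.84 * V ** (2.0 / 3.0)   # sphere-like area-volume scaling
        V_new = (payload_kg + areal_density * area) / dlift
        if abs(V_new - V) < tol:
            return V_new
        V = V_new
    return V

# Thin stratospheric air means far more envelope per kilogram of payload
V_sl = required_volume(100.0, rho_air=1.225, rho_gas=0.169)  # near sea level
V_st = required_volume(100.0, rho_air=0.12, rho_gas=0.017)   # ~18 km altitude
```

    The ratio between the two volumes is one of the sensitivities a parametric study of this kind exposes.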

  14. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  15. Practical implementation of an accurate method for multilevel design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1987-01-01

    Solution techniques for handling large-scale engineering optimization problems are reviewed. Potentials for practical applications as well as their limited capabilities are discussed. A new solution algorithm for design sensitivity is proposed. The algorithm is based upon the multilevel substructuring concept, coupled with the adjoint method of sensitivity analysis. There are no approximations involved in the present algorithm except the usual approximations introduced by the discretization of the finite element model. Results from the six- and thirty-bar planar truss problems show that the proposed multilevel scheme for sensitivity analysis is more effective (in terms of in-core computer memory and total CPU time) than a conventional (one-level) scheme, even on small problems. The new algorithm is expected to perform better for larger problems, and its application on the new generation of computer hardware with 'parallel processing' capability is very promising.
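    The adjoint method at the core of the algorithm can be shown on a tiny linear static model: for K(p) u = f and a response g = cᵀu, one extra solve Kᵀλ = c yields dg/dpᵢ = -λᵀ(∂K/∂pᵢ)u for every design variable at once. The two-spring chain below is an invented example (not the paper's truss problems), chosen because its tip displacement 1/k₁ + 1/k₂ has known derivatives -1/k₁² and -1/k₂².

```python
import numpy as np

def adjoint_sensitivity(K, f, dK_dp_list, c):
    """Adjoint design sensitivity for K u = f, response g = c^T u:
    one adjoint solve gives dg/dp_i = -lam^T (dK/dp_i) u for all i."""
    u = np.linalg.solve(K, f)
    lam = np.linalg.solve(K.T, c)            # adjoint (single extra solve)
    return np.array([-lam @ (dK @ u) for dK in dK_dp_list]), u

# Two springs in series, stiffnesses k1, k2, unit load at the tip node
k1, k2 = 2.0, 3.0
K = np.array([[k1 + k2, -k2], [-k2, k2]])
dKs = [np.array([[1.0, 0.0], [0.0, 0.0]]),       # dK/dk1
       np.array([[1.0, -1.0], [-1.0, 1.0]])]     # dK/dk2
grads, u = adjoint_sensitivity(K, np.array([0.0, 1.0]),
                               dKs, np.array([0.0, 1.0]))
```

    The cost pattern matches the abstract's argument: the number of linear solves is independent of the number of design variables.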

  16. A sensitivity analysis of hazardous waste disposal site climatic and soil design parameters using HELP3

    SciTech Connect

    Adelman, D.D.; Stansbury, J.

    1997-12-31

    The Resource Conservation and Recovery Act (RCRA) Subtitle C, the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), and subsequent amendments have formed a comprehensive framework to deal with hazardous wastes on the national level. Key to this waste management is guidance on design (e.g., cover and bottom leachate control systems) of hazardous waste landfills. The objective of this research was to investigate the sensitivity of leachate volume at hazardous waste disposal sites to climatic, soil cover, and vegetative cover (Leaf Area Index) conditions. The computer model HELP3, which has the capability to simulate double bottom liner systems as called for in hazardous waste disposal sites, was used in the analysis. HELP3 was used to model 54 combinations of climatic conditions, disposal site soil surface curve numbers, and leaf area index values to investigate how sensitive disposal site leachate volume was to these three variables. Results showed that leachate volume from the bottom double liner system was not sensitive to these parameters. However, the cover liner system leachate volume was quite sensitive to climatic conditions and less sensitive to Leaf Area Index and curve number values. Since humid locations had considerably more cover liner system leachate volume than arid locations, different design standards may be appropriate for humid conditions than for arid conditions.

  17. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  18. Aerodynamic Shape Sensitivity Analysis and Design Optimization of Complex Configurations Using Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Newman, James C., III; Barnwell, Richard W.

    1997-01-01

    A three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed and is extended to model geometrically complex configurations. The advantage of unstructured grids (when compared with a structured-grid approach) is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional geometry and a Gauss-Seidel algorithm for the three-dimensional; similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory efficient methods to construct exact Jacobian matrix-vector products. Simple parameterization techniques are utilized for demonstrative purposes. Once the surface has been deformed, the unstructured grid is adapted by considering the mesh as a system of interconnected springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR (which is an advanced automatic-differentiation software tool). To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization has been performed for a two-dimensional high-lift multielement airfoil and for a three-dimensional Boeing 747-200 aircraft.

  19. Sensitivity Analysis of Wind Plant Performance to Key Turbine Design Parameters: A Systems Engineering Approach; Preprint

    SciTech Connect

    Dykes, K.; Ning, A.; King, R.; Graf, P.; Scott, G.; Veers, P.

    2014-02-01

    This paper introduces the development of a new software framework for research, design, and development of wind energy systems which is meant to 1) represent a full wind plant including all physical and nonphysical assets and associated costs up to the point of grid interconnection, 2) allow use of interchangeable models of varying fidelity for different aspects of the system, and 3) support system level multidisciplinary analyses and optimizations. This paper describes the design of the overall software capability and applies it to a global sensitivity analysis of wind turbine and plant performance and cost. The analysis was performed using three different model configurations involving different levels of fidelity, which illustrate how increasing fidelity can preserve important system interactions that build up to overall system performance and cost. Analyses were performed for a reference wind plant based on the National Renewable Energy Laboratory's 5-MW reference turbine at a mid-Atlantic offshore location within the United States.

  20. Mesoscale ensemble sensitivity analysis for predictability studies and observing network design in complex terrain

    NASA Astrophysics Data System (ADS)

    Hacker, Joshua

    2013-04-01

    Ensemble sensitivity analysis (ESA) is emerging as a viable alternative to adjoint sensitivity. Several open issues face ESA for forecasts dominated by mesoscale phenomena, including (1) sampling error arising from finite-sized ensembles causing over-estimated sensitivities, and (2) violation of linearity assumptions for strongly nonlinear flows. In an effort to use ESA for predictability studies and observing network design in complex terrain, we present results from experiments designed to address these open issues. Sampling error in ESA arises in two places. First, when hypothetical observations are introduced to test the sensitivity estimates for linearity. Here the same localization that was used in the filter itself can be simply applied. Second and more critical, localization should be considered within the sensitivity calculations. Sensitivity to hypothetical observations, estimated without re-running the ensemble, includes regression of a sample of a final-time (forecast) metric onto a sample of initial states. Derivation to include localization results in two localization coefficients (or factors) applied in separate regression steps. Because the forecast metric is usually a sum, and can also include a sum over a spatial region and multiple physical variables, a spatial localization function is difficult to specify. We present results from experiments to empirically estimate localization factors for ESA to test hypothetical observations for mesoscale data assimilation in complex terrain. Localization factors are first derived for an ensemble filter following the empirical localization methodology. Sensitivities for a fog event over Salt Lake City, and a Colorado downslope wind event, are tested for linearity by approximating assimilation of perfect observations at points of maximum sensitivity, both with and without localization. Observation sensitivity is then estimated, with and without localization, and tested for linearity. The validity of the
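    The regression step at the heart of ESA, with the localization factor that the abstract argues for, can be sketched directly. The synthetic ensemble, forecast metric, and exponential localization below are invented for illustration; they are not the paper's Salt Lake City fog or Colorado downslope cases.

```python
import numpy as np

def ensemble_sensitivity(X, J, loc=None):
    """Ensemble sensitivity dJ/dx_i = cov(J, x_i) / var(x_i): regress the
    forecast metric J onto each initial-state element across members.
    `loc` is an optional per-element localization factor that damps
    sensitivities created purely by sampling error."""
    Xa = X - X.mean(axis=0)
    Ja = J - J.mean()
    s = (Xa.T @ Ja / (len(J) - 1)) / Xa.var(axis=0, ddof=1)
    return s if loc is None else loc * s

rng = np.random.default_rng(2)
n, m = 50, 20                        # 50 members, 20 state elements
X = rng.standard_normal((n, m))
J = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(n)   # J truly depends on x_0
loc = np.exp(-np.arange(m) / 3.0)    # crude distance-based localization
s_raw = ensemble_sensitivity(X, J)
s_loc = ensemble_sensitivity(X, J, loc)
```

    With only 50 members, the unlocalized regression produces spurious nonzero sensitivities at "distant" elements; the localization factor shrinks those while leaving the genuine sensitivity at element 0 intact.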

  1. Sensitivity analysis of a dry-processed Candu fuel pellet's design parameters

    SciTech Connect

    Choi, Hangbok; Ryu, Ho Jin

    2007-07-01

    Sensitivity analysis was carried out in order to investigate the effect of a fuel pellet's design parameters on the performance of a dry-processed Canada deuterium uranium (CANDU) fuel and to suggest the optimum design modifications. Under a normal operating condition, a dry-processed fuel has a higher internal pressure and plastic strain due to a higher fuel centerline temperature when compared with a standard natural uranium CANDU fuel. Under the condition that the fuel bundle dimensions do not change, sensitivity calculations were performed on the fuel's design parameters such as the axial gap, dish depth, gap clearance, and plenum volume. The results showed that the internal pressure and plastic strain of the cladding were most effectively reduced if a fuel element's plenum volume was increased. More specifically, the internal pressure and plastic strain of the dry-processed fuel satisfied the design limits of a standard CANDU fuel when the plenum volume was increased by one half of a pellet, 0.5 mm³/K. (authors)

  2. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly and accurately). The basic equations for second-order sensitivity derivatives are presented, and four methods are compared. Each scheme requires that large systems be solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two, for which additional systems are solved, the equations and solution procedures are analogous to those for the first-order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
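    What source-level automatic differentiation generates can be shown in miniature with forward-mode dual numbers: every arithmetic operation carries a derivative alongside its value, so the result is exact with no step-size choice. This tiny class is only an illustration of the principle, not the tooling used in the paper.

```python
class Dual:
    """Minimal forward-mode automatic differentiation: a (value,
    derivative) pair propagated through arithmetic by the chain rule."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.v + o.v, self.d + o.d)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)  # product rule
    __rmul__ = __mul__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.v - o.v, self.d - o.d)

def deriv(f, x):
    """d f / d x at x, exact to machine precision (no step size)."""
    return f(Dual(x, 1.0)).d

g = deriv(lambda x: 3 * x * x + 2 * x, 2.0)   # f'(x) = 6x + 2 at x = 2
```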

  3. Novel design and sensitivity analysis of displacement measurement system utilizing knife edge diffraction for nanopositioning stages.

    PubMed

    Lee, ChaBum; Lee, Sun-Kyu; Tarbutton, Joshua A

    2014-09-01

    This paper presents a novel design and sensitivity analysis of a knife edge-based optical displacement sensor that can be embedded in nanopositioning stages. The measurement system consists of a laser, two knife edge locations, two photodetectors, and auxiliary optical components in a simple configuration. The knife edge is installed on the stage parallel to its moving direction, and two separated laser beams are incident on the knife edges. While the stage is in motion, the directly transmitted and the diffracted light at each knife edge are superposed, producing interference at the detector. The interference is measured with two photodetectors in a differential amplification configuration. The performance of the proposed sensor was mathematically modeled, and the effect of the optical and mechanical parameters, wavelength, beam diameter, distances from laser to knife edge to photodetector, and knife edge topography, on sensor outputs was investigated to obtain a novel analytical method to predict linearity and sensitivity. From the model, all parameters except for the beam diameter have a significant influence on the measurement range and sensitivity of the proposed sensing system. To validate the model, two types of knife edges with different edge topography were used for the experiment. By utilizing a shorter wavelength, a smaller sensor distance, and higher edge quality, increased measurement sensitivity can be obtained. The model was experimentally validated and the results showed a good agreement with the theoretically estimated results. This sensor is expected to be easily implemented in nanopositioning stage applications at low cost, and the mathematical model introduced here can be used as a tool for design and performance estimation of the knife edge-based sensor.

  4. On equivalence of discrete-discrete and continuum-discrete design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Twu, Sung-Ling

    1989-01-01

    Developments in design sensitivity analysis (DSA) methods have been made using two fundamentally different approaches. In the first approach, a discretized structural finite element model is used to carry out DSA. There are three different methods in the discrete DSA approach: finite difference, semi-analytical, and analytical methods. The finite difference method is a popular one due to its simplicity, but a serious shortcoming of the method is the uncertainty in the choice of a perturbation step size for the design variables. In the semi-analytical method, the derivatives of the stiffness matrix are computed by finite differences, whereas in the analytical method, the derivatives are obtained analytically. For shape design variables, computation of the analytical derivatives of the stiffness matrix is quite costly. Because of this, the semi-analytical method is a popular choice in the discrete shape DSA approach. However, Barthelemy and Haftka recently showed that the semi-analytical method can have serious accuracy problems for shape design variables in structures modeled by beam, plate, truss, frame, and solid elements. They found that accuracy problems occur even for a simple cantilever beam. In the second approach, a continuum model of the structure is used to carry out DSA.
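    The step-size uncertainty mentioned above is easy to reproduce: a forward-difference derivative has truncation error that shrinks with h and round-off error that grows as h shrinks, so the error curve is U-shaped and the best h is not obvious in advance. A self-contained demonstration on d/dx eˣ at x = 1 (an arbitrary smooth test function, not from the paper):

```python
import math

def fd_error(h, x=1.0):
    """Absolute error of the forward-difference estimate of d/dx exp(x),
    illustrating the step-size dilemma of the finite difference method."""
    approx = (math.exp(x + h) - math.exp(x)) / h
    return abs(approx - math.exp(x))

# Too-large h: truncation error; too-small h: round-off dominates
errors = {h: fd_error(h) for h in (1e-1, 1e-4, 1e-8, 1e-12, 1e-14)}
```

    The same trade-off, applied to perturbed stiffness matrices instead of a scalar function, is what makes the accuracy of discrete finite-difference (and semi-analytical) DSA hard to guarantee.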

  5. Application of design sensitivity analysis for greater improvement on machine structural dynamics

    NASA Technical Reports Server (NTRS)

    Yoshimura, Masataka

    1987-01-01

    Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures of structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the effectiveness of the proposed methods.

  6. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.

  7. Reliability Sensitivity Analysis and Design Optimization of Composite Structures Based on Response Surface Methodology

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2003-01-01

    This report discusses the development and application of two alternative strategies in the form of global and sequential local response surface (RS) techniques for the solution of reliability-based optimization (RBO) problems. The problem of a thin-walled composite circular cylinder under axial buckling instability is used as a demonstrative example. In this case, the global technique uses a single second-order RS model to estimate the axial buckling load over the entire feasible design space (FDS) whereas the local technique uses multiple first-order RS models with each applied to a small subregion of FDS. Alternative methods for the calculation of unknown coefficients in each RS model are explored prior to the solution of the optimization problem. The example RBO problem is formulated as a function of 23 uncorrelated random variables that include material properties, thickness and orientation angle of each ply, cylinder diameter and length, as well as the applied load. The mean values of the 8 ply thicknesses are treated as independent design variables. While the coefficients of variation of all random variables are held fixed, the standard deviations of ply thicknesses can vary during the optimization process as a result of changes in the design variables. The structural reliability analysis is based on the first-order reliability method with reliability index treated as the design constraint. In addition to the probabilistic sensitivity analysis of reliability index, the results of the RBO problem are presented for different combinations of cylinder length and diameter and laminate ply patterns. The two strategies are found to produce similar results in terms of accuracy with the sequential local RS technique having a considerably better computational efficiency.
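    The global technique's single second-order RS model is an ordinary least-squares fit of a quadratic polynomial to sampled responses. The sketch below fits such a surface to an invented two-variable "buckling load" stand-in that happens to be exactly quadratic, so the surrogate should reproduce it almost exactly (the real 23-variable buckling problem is only approximated by such a model).

```python
import numpy as np

def quad_features(X):
    """Second-order response-surface basis: 1, x_i, and x_i * x_j (i <= j)."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

def fit_rs(X, y):
    """Least-squares coefficients of the quadratic response surface."""
    beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)
    return beta

def predict_rs(beta, X):
    return quad_features(X) @ beta

rng = np.random.default_rng(0)
X = rng.random((30, 2))                                  # sampled designs
y = 5.0 + 2.0 * X[:, 0] - X[:, 1] + 3.0 * X[:, 0] * X[:, 1]
beta = fit_rs(X, y)
```

    In an RBO loop, the reliability analysis then queries `predict_rs` instead of the expensive structural model; the sequential local variant repeats a first-order version of this fit on a shrinking subregion.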

  8. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    These symposium proceedings focus primarily on sensitivity analysis of structural response. However, the first session, entitled 'General and Multidisciplinary Sensitivity', covered areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  9. Improving engineering system design by formal decomposition, sensitivity analysis, and optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, J.; Barthelemy, J. F. M.

    1985-01-01

    A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.

  10. Design sensitivity analysis for an aeroelastic optimization of a helicopter blade

    NASA Technical Reports Server (NTRS)

    Lim, Joon; Chopra, Inderjit

    1987-01-01

    The sensitivity of vibratory hub loads of a four-bladed hingeless rotor with respect to blade design parameters is investigated using a finite element formulation in space and time. Design parameters include nonstructural mass distribution (spanwise and chordwise), chordwise offset of the center of gravity from the aerodynamic center, and blade bending stiffnesses (flap, lag, and torsion). The hub loads selected are the 4/rev vertical hub shear and the 3/rev hub moment in the rotating reference frame. The sensitivity derivatives of vertical hub loads with respect to blade design parameters are compared using two approaches: a finite-difference scheme and an analytical approach using chain-rule differentiation. The analytical derivative approach, developed as an integral part of the response solution (finite element in time), is a powerful method for aeroelastic optimization of a helicopter rotor.
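The two approaches compared in the abstract can be illustrated on a toy response (the functions below are invented stand-ins, not the rotor model): a forward finite difference of the composed response versus the analytic chain rule through the intermediate solution.

```python
# Hedged toy comparison: sensitivity of a response L(p) = f(u(p)) via
# (a) a forward finite difference and (b) the analytic chain rule.
import math

def u_of_p(p):        # stand-in for the blade response solution
    return p * p + math.sin(p)

def L_of_u(u):        # stand-in for a hub-load measure
    return u ** 3

p = 0.7
h = 1e-6

# (a) forward-difference sensitivity
fd = (L_of_u(u_of_p(p + h)) - L_of_u(u_of_p(p))) / h

# (b) analytic chain rule: dL/dp = (dL/du) * (du/dp)
u = u_of_p(p)
analytic = 3 * u**2 * (2 * p + math.cos(p))

print(abs(fd - analytic) < 1e-3 * abs(analytic))  # True: they agree
```

The finite difference needs one extra response solution per design parameter, while the analytic derivative reuses quantities already available from the response solution, which is why the paper favors it for optimization.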

  11. Global sensitivity analysis of bandpass and antireflection coating manufacturing by numerical space filling designs.

    PubMed

    Vasseur, Olivier; Cathelinaud, Michel; Claeys-Bruno, Magalie; Sergent, Michelle

    2011-03-20

    We demonstrate the effectiveness of global sensitivity analyses of optical coating manufacturing for assessing the robustness of filters by computer experiments. The most critical interactions of layers are determined for a bandpass filter with 29 quarter-wave layers and for an antireflection coating with eight non-quarter-wave layers. Two monitoring techniques, with the associated production performances, are considered, and their influence on the classification of interactions is discussed. Global sensitivity analyses by numerical space-filling designs give clues for improving filter manufacturing against error effects and for assessing the potential robustness of the coatings.
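One common space-filling construction for such computer experiments is a Latin hypercube sample, sketched below (a minimal illustration, not the authors' design; dimensions and counts are invented).

```python
# Hedged sketch of a space-filling design: a simple Latin hypercube
# sample over d parameters (e.g. layer thickness errors), with one
# point per stratum in every dimension.
import random

def latin_hypercube(n, d, rng):
    """n points in [0,1)^d, one point per stratum in each dimension."""
    sample = []
    for j in range(d):
        # one value drawn from each of n equal strata, then shuffled
        col = [(i + rng.random()) / n for i in range(n)]
        rng.shuffle(col)
        sample.append(col)
    return [[sample[j][i] for j in range(d)] for i in range(n)]

rng = random.Random(42)
pts = latin_hypercube(8, 3, rng)
# each dimension has exactly one point per stratum [i/8, (i+1)/8)
for j in range(3):
    print(sorted(int(p[j] * 8) for p in pts))  # [0, 1, ..., 7]
```

Stratifying every dimension this way covers the error space far more evenly than plain random sampling for the same number of coating simulations.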

  12. Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method

    SciTech Connect

    Dekeyser, W.; Reiter, D.; Baelmans, M.

    2014-12-01

    As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
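The key property claimed above, gradients for all design variables from one forward plus one adjoint solve, can be shown on a tiny discrete analogue (a hedged toy, not the edge-plasma code; the 2x2 system and objective are invented).

```python
# Hedged toy of the adjoint idea: for a discrete state equation
# A(q) u = b with objective J = c^T u, one forward solve (u) plus one
# adjoint solve (A^T lam = c) yields dJ/dq_i = -lam^T (dA/dq_i) u for
# *every* design variable q_i at no extra solves.

def solve2(A, b):
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def A_of(q):                       # A depends on 2 design variables
    return [[2.0 + q[0], 1.0],
            [1.0, 3.0 + q[1]]]

b, c = [1.0, 2.0], [1.0, 1.0]
q = [0.5, 0.2]

A = A_of(q)
u = solve2(A, b)                   # forward solve
At = [[A[j][i] for j in range(2)] for i in range(2)]
lam = solve2(At, c)                # single adjoint solve

# dA/dq0 = [[1,0],[0,0]] and dA/dq1 = [[0,0],[0,1]] here, so:
grad = [-lam[0] * u[0], -lam[1] * u[1]]

# cross-check one component by finite differences
h = 1e-6
uh = solve2(A_of([q[0] + h, q[1]]), b)
fd = ((uh[0] + uh[1]) - (u[0] + u[1])) / h
print(abs(grad[0] - fd) < 1e-5)    # True
```

With many design variables the finite-difference route needs one new forward solve per variable, while the adjoint route stays at two solves total, which is the economy the paper exploits.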

  13. Mass sensitivity analysis and designing of surface acoustic wave resonators for chemical sensors

    NASA Astrophysics Data System (ADS)

    Kshetrimayum, Roshan; Yadava, R. D. S.; Tandon, R. P.

    2009-05-01

    The sensitivity of surface acoustic wave (SAW) chemical sensors depends on several factors, such as the frequency and phase point of SAW device operation, the sensitivity of the SAW velocity to surface mass loading, the sensitivity of the SAW oscillator resonance to the loop phase shift, the film thickness, and the oscillator electronics. This paper analyzes the influence of the phase point of operation in SAW oscillator sensors based on two-port resonator devices. It is found that the mass sensitivity is enhanced if the SAW device has a nonlinear dependence of delay on frequency (delay ~ 1/frequency). This requires the device to generate and operate in a ωτg(ω) = const region of the device passband, where ω denotes the angular frequency of oscillation and τg(ω) denotes the phase slope of the SAW resonator device. A SAW coupled resonator filter (CRF), which takes advantage of mode coupling, is considered for realizing such a device, since mode coupling helps shape the phase transfer characteristics of a high-mass-sensitivity sensor. The device design and simulation results are presented within the coupling-of-modes formalism.

  14. Sensitivity Analysis for the Optimal Design and Control of Advanced Guidance Systems

    DTIC Science & Technology

    2007-06-01

    The available record text consists only of fragmentary citations; the recoverable references are: L. Davis and F. Pahlevani, "Sensitivity calculations for actuator location for a parabolic PDE," in preparation; F. Pahlevani, International Journal for Numerical Methods in Fluids, Vol. 52, February 2006, pp. 381-392; and F. Pahlevani, "Semi-Implicit Schemes for Transient Navier-Stokes Equations," submitted to SIAM Journal on Numerical Analysis, in revision.

  15. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  16. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  17. Theoretical sensitivity analysis of quadruple Vernier racetrack resonators designed for fabrication on the silicon-on-insulator platform

    NASA Astrophysics Data System (ADS)

    Boeck, Robert; Chrostowski, Lukas; Jaeger, Nicolas A. F.

    2014-09-01

    Vernier racetrack resonators offer advantages over single racetrack resonators, such as an extended free spectral range (FSR). Here, we present a theoretical sensitivity analysis of quadruple Vernier racetrack resonators based on varying, one at a time, various fabrication-dependent parameters, including the waveguide widths, heights, and propagation losses. We show that it should be possible to design a device that meets typical commercial specifications while being tolerant to changes in these parameters.

  18. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis.

    SciTech Connect

    Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.

    2011-12-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.

  19. Sensitivity analysis of air gap motion with respect to wind load and mooring system for semi-submersible platform design

    NASA Astrophysics Data System (ADS)

    Huo, Fa-li; Nie, Yan; Yang, De-qing; Dong, Gang; Cui, Jin

    2016-07-01

    The design of a semi-submersible platform is based mainly on analysis of the extreme responses induced by the loads its components experience over the platform's lifetime. External loads can induce extreme air-gap responses and potential deck impact on the platform. It is therefore important to predict the air-gap response accurately in order to check the strength of the local structures that must withstand wave slamming when the air gap becomes negative. Wind loads cannot easily be simulated in towing-tank model tests, whereas they can be simulated accurately in wind-tunnel tests. Furthermore, full-scale representation of the mooring system in a model test remains difficult, especially with respect to mooring stiffness. For these reasons, model-test results alone are not accurate enough for air-gap evaluation. The aim of this paper is to present a sensitivity analysis of air-gap motion with respect to the mooring system and wind load for semi-submersible platform design. Although the model-test results are not suitable for direct air-gap evaluation, they provide a good basis for tuning the radiation damping and viscous drag in numerical simulation. In the presented design example, a numerical model is tuned and validated in ANSYS AQWA against model-test results obtained with a simple four-line symmetric horizontal soft mooring system. Using the tuned numerical model, time-domain sensitivity studies of air-gap motion with respect to the mooring system and wind load are performed. Three mooring systems and five simulation cases for the platform are simulated based on the results of wind-tunnel and seakeeping tests. The sensitivity results are valuable for floating-platform design.

  20. Waveform design and Doppler sensitivity analysis for nonlinear FM chirp pulses

    NASA Astrophysics Data System (ADS)

    Johnston, J. A.; Fairhead, A. C.

    1986-04-01

    The use of pulse compression to obtain simultaneous long-range detection and good range resolution is described. The types of modulation that can be used to obtain pulse compression are outlined with particular emphasis on their performance under Doppler shift. It is shown that nonlinear frequency-modulated (FM) signals are capable of providing low range-sidelobes while being compressed using a matched filter. A design method for nonlinear FM signals based on window functions is outlined. Simulation results for pulse compression of nonlinear FM signals based on four different window functions with Doppler shift are presented. The results are used to define the effects of Doppler shift on the pulse compression. An analysis is presented, and interpreted pictorially, that explains the effects of Doppler shift on the pulse compression. The analysis is also extended to explain the better Doppler performance of hybrid FM pulse compression systems.
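The window-based design idea can be sketched in a stationary-phase style (a hedged illustration, not the paper's exact procedure; the bandwidth, pulse length, and choice of a Hamming window are invented): the group delay T(f) is made proportional to the cumulative integral of a window W(f), so the sweep dwells longer where the window is large, and inverting T(f) gives the nonlinear frequency law f(t).

```python
# Hedged sketch of window-based NLFM design: group delay proportional
# to the cumulative integral of a window, inverted to a frequency law.
import math

N = 512
B = 1.0e6          # sweep bandwidth, Hz (illustrative)
Tp = 1.0e-3        # pulse length, s (illustrative)

# Hamming window sampled across the band
W = [0.54 - 0.46 * math.cos(2 * math.pi * k / (N - 1)) for k in range(N)]

# cumulative integral -> normalized group delay T(f) in (0, Tp]
cum = [0.0]
for w in W:
    cum.append(cum[-1] + w)
T = [Tp * c / cum[-1] for c in cum[1:]]

def f_of_t(t):
    """Invert the monotone tabulated map T(f) by linear interpolation."""
    for k in range(1, N):
        if T[k] >= t:
            frac = (t - T[k - 1]) / (T[k] - T[k - 1])
            return -B / 2 + B * (k - 1 + frac) / (N - 1)
    return B / 2

# the frequency law sweeps -B/2 .. +B/2 but lingers near band center,
# where the window peaks; mid-pulse frequency sits near zero
mid = f_of_t(Tp / 2)
print(abs(mid) < 0.05 * B)   # True
```

Integrating f(t) over the pulse would then give the transmitted phase; the shape of the chosen window controls the range-sidelobe level after matched filtering.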

  1. Multi-parameter sensitivity analysis and application research in the robust optimization design for complex nonlinear system

    NASA Astrophysics Data System (ADS)

    Ma, Tao; Zhang, Weigang; Zhang, Yang; Tang, Ting

    2015-01-01

    Current research on robust optimization of complex nonlinear systems focuses mainly on characterizing the design parameters, through probability density functions, boundary conditions, and so on. After a parameter study, a high-dimensional response curve or robust control design is used to find an accurate robust solution. However, complex interactions may exist between the parameters and the practical engineering system, and as the number of parameters grows it becomes difficult to determine high-dimensional curves and robust control methods, and hence to obtain robust design solutions. In this paper, a method of global sensitivity analysis based on dividing the variables into groups is proposed. Related variables are placed in the same group, the groups are kept mutually independent, global sensitivity analysis is conducted on the grouped variables, and the importance of each parameter is evaluated by its contribution to the total variance of the system response. By ranking the importance of the input parameters, the relatively important ones are selected for robust design analysis of the system. Applying this method to the robust optimization design of a real, multi-parameter complex nonlinear system, a vehicle occupant restraint system, yields a good solution: the response variance of the objective function is reduced to 0.01, indicating that the robustness of the occupant restraint system is greatly improved and that the method is effective and valuable for the robust design of complex nonlinear systems.
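The variance-contribution ranking described above can be demonstrated on a toy model (a hedged illustration, not the paper's grouped algorithm; the model Y = 4*X1 + X2 and all sample counts are invented): estimate each input's first-order contribution as Var(E[Y | X_i]) / Var(Y) by brute-force Monte Carlo.

```python
# Hedged toy of variance-based importance ranking on Y = 4*X1 + X2
# with X1, X2 independent uniform on [0, 1].
import random

rng = random.Random(0)

def model(x1, x2):
    return 4.0 * x1 + x2

def first_order_index(which, outer=200, inner=200):
    means = []
    for _ in range(outer):
        xi = rng.random()                       # fix X_i
        acc = 0.0
        for _ in range(inner):
            xo = rng.random()                   # resample the rest
            acc += model(xi, xo) if which == 0 else model(xo, xi)
        means.append(acc / inner)               # E[Y | X_i = xi]
    m = sum(means) / outer
    var_cond = sum((v - m) ** 2 for v in means) / outer
    var_total = (16.0 + 1.0) / 12.0             # analytic for this toy
    return var_cond / var_total

s1 = first_order_index(0)
s2 = first_order_index(1)
print(s1 > s2)        # True: X1 dominates (analytic S1 = 16/17)
```

Ranking the inputs by these indices and keeping only the dominant ones is, in miniature, the screening step the paper performs before robust design.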

  2. Physicochemical design and analysis of self-propelled objects that are characteristically sensitive to environments.

    PubMed

    Nakata, Satoshi; Nagayama, Masaharu; Kitahata, Hiroyuki; Suematsu, Nobuhiko J; Hasegawa, Takeshi

    2015-04-28

    The development of self-propelled motors that mimic biological motors is an important challenge for the transport of either themselves or some material in a small space, since biological systems exhibit high autonomy and various types of responses, such as taxis and swarming. In this perspective, we review non-living systems that behave like living matter. We especially focus on nonlinearity as a means of enhancing the autonomy and responsiveness of the system, since characteristic nonlinear phenomena, such as oscillation, synchronization, pattern formation, bifurcation, and hysteresis, are coupled to self-motion whose driving force is the difference in interfacial tension. Mathematical modelling based on reaction-diffusion equations and equations of motion, as well as physicochemical analysis from the point of view of molecular structure, are also important for the design of non-living motors that mimic living motors.

  3. New sensitivity analysis attack

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Moulin, Pierre

    2005-03-01

    The sensitivity analysis attacks by Kalker et al. constitute a known family of watermark removal attacks exploiting a vulnerability in some watermarking protocols: the attacker's unlimited access to the watermark detector. In this paper, a new attack on spread spectrum schemes is designed. We first examine one of Kalker's algorithms and prove its convergence using the law of large numbers, which gives more insight into the problem. Next, a new algorithm is presented and compared to existing ones. Various detection algorithms are considered including correlation detectors and normalized correlation detectors, as well as other, more complicated algorithms. Our algorithm is noniterative and requires at most n+1 operations, where n is the dimension of the signal. Moreover, the new approach directly estimates the watermark by exploiting the simple geometry of the detection boundary and the information leaked by the detector.
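The n+1-query idea can be illustrated in its simplest setting (a hedged toy, not the paper's exact algorithm: here we assume the detector leaks its soft correlation statistic, the kind of information leakage the abstract mentions; dimensions and values are invented).

```python
# Hedged toy: if a correlation detector leaks its test statistic
# <y, w>, the secret watermark w can be read off with n + 1 probes:
# one reference query plus one finite-difference query per dimension.
import random

rng = random.Random(1)
n = 8
w = [rng.gauss(0.0, 1.0) for _ in range(n)]       # secret watermark

def detector_statistic(y):
    # leaked soft output; a hardened detector would return only a bit
    return sum(yi * wi for yi, wi in zip(y, w))

y0 = [0.0] * n
base = detector_statistic(y0)                      # 1 query
est = []
for i in range(n):                                 # n more queries
    y = list(y0)
    y[i] += 1.0
    est.append(detector_statistic(y) - base)

err = max(abs(a - b) for a, b in zip(est, w))
print(err < 1e-12)    # True: estimate matches the secret watermark
```

Against a binary-output detector the geometry of the decision boundary must be probed instead, which is where the paper's treatment of normalized-correlation and more complicated detectors comes in.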

  4. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
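A Dakota study is driven by a keyword input file of the kind the user's manual documents. The fragment below is a hedged sketch of a minimal sampling study; the driver name, variable bounds, and sample count are illustrative, and the manual should be consulted for the authoritative keyword grammar.

```text
# Hypothetical minimal Dakota input: Latin hypercube sampling of a
# two-variable black-box simulation ('my_driver' is a placeholder).
method
  sampling
    sample_type lhs
    samples 100

variables
  uniform_uncertain 2
    lower_bounds   0.0  0.0
    upper_bounds   1.0  1.0
    descriptors    'x1' 'x2'

interface
  fork
    analysis_drivers 'my_driver'

responses
  response_functions 1
  no_gradients
  no_hessians
```

Swapping the method block (e.g. to an optimization or parameter-study method) while keeping the same variables/interface/responses blocks is how the toolkit exposes its different iterative capabilities against one simulation code.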

  5. Achieving statistical power through research design sensitivity.

    PubMed

    Beck, C T

    1994-11-01

    The challenge for nurse researchers is to design their intervention studies with sufficient sensitivity to detect the treatment effects they are investigating. In order to meet this challenge, researchers must understand the factors that influence statistical power. Underpowered studies can result in a majority of null results in a research area when, in fact, the interventions are effective. The sensitivity of a research design is not a function of just one element of the design but of the entire research design: its plan, implementation and statistical analysis. When discussing factors that can increase a research design's statistical power, attention is most often focused on increasing sample size. This paper addresses a variety of factors and techniques, other than increasing sample size, that nurse researchers can use to enhance the sensitivity of a research design so that it can attain adequate power.
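The power relationships behind this argument are easy to make concrete (a hedged illustration, not from the article; the effect sizes and group sizes are invented): under the usual normal approximation for a two-sample comparison, power grows with sample size, but anything that shrinks outcome variance raises the standardized effect size and can buy more power than extra participants.

```python
# Hedged illustration: approximate power of a two-sided, two-sample
# test via the normal approximation.
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(d, n_per_group, z_alpha=1.959964):
    """d = standardized effect size (mean difference / SD)."""
    ncp = d * math.sqrt(n_per_group / 2.0)  # noncentrality
    return 1.0 - norm_cdf(z_alpha - ncp)

p_small = power_two_sample(0.5, 30)    # ~0.49: underpowered
p_large = power_two_sample(0.5, 64)    # ~0.81: conventional target
# halving the outcome SD doubles d and beats adding participants:
p_precise = power_two_sample(1.0, 30)
print(p_small < 0.6 < 0.75 < p_large and p_precise > p_large)
```

The third line is the article's broader point in numbers: tightening measurement and implementation (reducing error variance) is a route to sensitivity that does not require a larger sample.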

  6. Integrated Sensitivity Analysis Workflow

    SciTech Connect

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.; Clay, Robert L.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  7. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing, and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the net present value calculated over a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback given the fluctuating nature of malodorous emissions. The geographical analysis showed large NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When, in an economic evaluation, robustness is as relevant as overall cost (NPV20), the hybrid technology moves up alongside biotrickling filtration as one of the most preferred technologies.
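The NPV20 figure of merit used in the comparison is a standard 20-year discounted cash flow, sketched below (a hedged illustration with invented capital and operating costs, not the paper's data): a low-opex biological option can beat a cheaper-to-build but opex-dominated alternative over the lifespan.

```python
# Hedged sketch: 20-year net present value of costs (sign convention:
# larger = more expensive), with invented example numbers.

def npv20(capex, annual_opex, rate=0.05, years=20):
    """NPV of capital plus discounted annual operating costs."""
    return capex + sum(annual_opex / (1.0 + rate) ** t
                       for t in range(1, years + 1))

biofilter = npv20(capex=120_000, annual_opex=10_000)
scrubber = npv20(capex=80_000, annual_opex=45_000)   # opex-dominated
print(biofilter < scrubber)   # True: low-opex option wins over 20 yr
```

Because operating costs are discounted and summed over 20 years, a parameter that shifts annual opex (such as packing-material replacement or water price) moves NPV20 far more than a comparable one-off change in capital cost, which is why the study's sensitivity ranking centres on such parameters.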

  8. Use of Sensitivity and Uncertainty Analysis in the Design of Reactor Physics and Criticality Benchmark Experiments for Advanced Nuclear Fuel

    SciTech Connect

    Rearden, B.T.; Anderson, W.J.; Harms, G.A.

    2005-08-15

    Framatome ANP, Sandia National Laboratories (SNL), Oak Ridge National Laboratory (ORNL), and the University of Florida are cooperating on the U.S. Department of Energy Nuclear Energy Research Initiative (NERI) project 2001-0124 to design, assemble, execute, analyze, and document a series of critical experiments to validate reactor physics and criticality safety codes for the analysis of commercial power reactor fuels consisting of UO₂ with ²³⁵U enrichments ≥ 5 wt%. The experiments will be conducted at the SNL Pulsed Reactor Facility. Framatome ANP and SNL produced two series of conceptual experiment designs based on typical parameters, such as fuel-to-moderator ratios, that meet the programmatic requirements of this project within the given restraints on available materials and facilities. ORNL used the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) to assess, from a detailed physics-based perspective, the similarity of the experiment designs to the commercial systems they are intended to validate. Based on the results of the TSUNAMI analysis, one series of experiments was found to be preferable to the other and will provide significant new data for the validation of reactor physics and criticality safety codes.

  9. Fusion-neutron-yield, activation measurements at the Z accelerator: design, analysis, and sensitivity.

    PubMed

    Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  10. Dynamic and Design Sensitivity Analysis of Rigid and Elastic Mechanical Systems with Intermittent Motion

    DTIC Science & Technology

    1985-12-12

  11. Characterizing Wheel-Soil Interaction Loads Using Meshfree Finite Element Methods: A Sensitivity Analysis for Design Trade Studies

    NASA Technical Reports Server (NTRS)

    Contreras, Michael T.; Trease, Brian P.; Bojanowski, Cezary; Kulakx, Ronald F.

    2013-01-01

    A wheel experiencing sinkage and slippage events poses a high risk to planetary rover missions, as evidenced by the mobility challenges endured by the Mars Exploration Rover (MER) project. Current wheel design practice utilizes loads derived from a series of events in the life cycle of the rover which do not include (1) failure metrics related to wheel sinkage and slippage and (2) performance trade-offs based on grouser placement/orientation. Wheel designs are rigorously tested experimentally through a variety of drive scenarios and simulated soil environments; however, a robust simulation capability is still in development due to the myriad of complex interaction phenomena that contribute to wheel sinkage and slippage conditions, such as soil composition, large-deformation soil behavior, wheel geometry, nonlinear contact forces, and terrain irregularity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study implements the JPL wheel-soil benchmark problem in a commercial code environment utilizing the large-deformation modeling capability of Smoothed Particle Hydrodynamics (SPH) meshfree methods. The nominal benchmark wheel-soil interaction model that produces numerically stable and physically realistic results is presented, and simulations are shown for both wheel traverse and wheel sinkage cases. A sensitivity analysis developing the capability and framework for future flight applications is conducted to illustrate the importance of perturbations to critical material properties and parameters. Implementation of the proposed soil-wheel interaction simulation capability and associated sensitivity framework has the potential to reduce experimentation cost and improve the early-stage wheel design process.

  12. Design tradeoff studies and sensitivity analysis, appendices B1 - B4. [hybrid electric vehicles

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Documentation is presented for a program which separately computes fuel and energy consumption for the two modes of operation of a hybrid electric vehicle. The distribution of daily travel is specified as input data as well as the weights which the component driving cycles are given in each of the composite cycles. The possibility of weight reduction through the substitution of various materials is considered as well as the market potential for hybrid vehicles. Data relating to battery compartment weight distribution and vehicle handling analysis is tabulated.

  13. Optimizing the design and analysis of cryogenic semiconductor dark matter detectors for maximum sensitivity

    SciTech Connect

    Pyle, Matt Christopher

    2012-01-01

    In this thesis, we illustrate how the complex E-field geometry produced by interdigitated electrodes at alternating voltage biases naturally encodes 3D fiducial-volume information into the charge and phonon signals, and thus is a natural geometry for our next-generation dark matter detectors. Secondly, we study in depth the physics of import to our devices, including transition-edge-sensor dynamics, quasiparticle dynamics in our Al collection fins, and phonon physics in the crystal itself, so that we can both understand the performance of our previous CDMS II device and optimize the design of our future devices. Of interest to the broader physics community is the derivation of the ideal athermal phonon detector resolution and its Tc^3 scaling behavior, which suggests that the athermal phonon detector technology developed by CDMS could also be used to discover coherent neutrino scattering and to search for non-standard neutrino interactions and sterile neutrinos. These proposed resolution-optimized devices can also be used in searches for exotic MeV-GeV dark matter as well as in novel background-free searches for 8 GeV light WIMPs.

  14. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

    Weather index insurance promises financial resilience to farmers struck by harsh weather conditions, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing creates "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that even under similar cumulative rainfall and temperature conditions, yields can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship over the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from global sensitivity analysis.

  15. Naval Waste Package Design Sensitivity

    SciTech Connect

    T. Schmitt

    2006-12-13

    The purpose of this calculation is to determine the sensitivity of the structural response of the Naval waste packages to varying inner cavity dimensions when subjected to a corner drop and tip-over from an elevated surface. This calculation will also determine the sensitivity of the structural response of the Naval waste packages to the upper bound of the naval canister masses. The scope of this document is limited to reporting the calculation results in terms of through-wall stress intensities in the outer corrosion barrier. This calculation is intended for use in support of the preliminary design activities for the license application design of the Naval waste package. It examines the effects of small dimensional changes between the naval canister and the inner vessel; in these dimensions, the Naval Long and Naval Short waste packages are similar. Therefore, only the Naval Long waste package is used in this calculation, based on the proposed potential designs presented by the drawings and sketches in References 2.1.10 to 2.1.17 and 2.1.20. All conclusions are valid for both the Naval Long and Naval Short waste packages.

  16. A sensitivity analysis for the F100 turbofan engine using the multivariable Nyquist array. [feedback control design

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.; Borysiak, M. L.

    1978-01-01

    In the feedback control design of multivariable systems, closed-loop performance evaluations must include the dynamic behavior of variables unavailable to the feedback controller. For the multivariable Nyquist array method, a set of sensitivity functions is proposed to simplify the adjustment of compensator parameters when the dynamic response of the unmeasurable output variables is unacceptable. A sensitivity study to improve thrust and turbine temperature responses for the Pratt & Whitney F100 turbofan engine demonstrates the utility of the proposed method.

  17. Scaling in sensitivity analysis

    USGS Publications Warehouse

    Link, W.A.; Doherty, P.F.

    2002-01-01

    Population matrix models allow sets of demographic parameters to be summarized by a single value, λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
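    The λ-sensitivity calculations described here can be sketched numerically. The projection matrix below is a made-up two-stage example, not the killer-whale data from the paper; the code computes λ by power iteration and forms the standard sensitivity matrix s_ij = v_i w_j / ⟨v, w⟩ and elasticities e_ij = (a_ij / λ) s_ij from the left and right dominant eigenvectors:

```python
def dominant_eig(A, iters=1000):
    """Dominant eigenvalue and eigenvector of a nonnegative matrix by power iteration."""
    n = len(A)
    w = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w2 = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w2)     # growth factor per iteration -> lambda
        w = [x / lam for x in w2]
    return lam, w

def sensitivities(A):
    """Sensitivities s_ij = v_i*w_j/<v,w> and elasticities e_ij = a_ij*s_ij/lam."""
    n = len(A)
    lam, w = dominant_eig(A)
    At = [[A[j][i] for j in range(n)] for i in range(n)]
    _, v = dominant_eig(At)               # left eigenvector of A
    dot = sum(v[i] * w[i] for i in range(n))
    S = [[v[i] * w[j] / dot for j in range(n)] for i in range(n)]
    E = [[A[i][j] * S[i][j] / lam for j in range(n)] for i in range(n)]
    return lam, S, E

# Toy two-stage model: fertility in row 0, survival/transition below.
A = [[0.0, 2.0],
     [0.3, 0.8]]
lam, S, E = sensitivities(A)
```

    The elasticities sum to 1 regardless of the matrix, which is the scale-invariance property that elasticity analysis relies on, and which the paper argues can still mislead when demographic rates are compared.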

  18. Interference and Sensitivity Analysis

    PubMed Central

    VanderWeele, Tyler J.; Tchetgen Tchetgen, Eric J.; Halloran, M. Elizabeth

    2014-01-01

    Causal inference with interference is a rapidly growing area. The literature has begun to relax the “no-interference” assumption that the treatment received by one individual does not affect the outcomes of other individuals. In this paper we briefly review the literature on causal inference in the presence of interference when treatments have been randomized. We then consider settings in which causal effects in the presence of interference are not identified, either because randomization alone does not suffice for identification, or because treatment is not randomized and there may be unmeasured confounders of the treatment-outcome relationship. We develop sensitivity analysis techniques for these settings. We describe several sensitivity analysis techniques for the infectiousness effect which, in a vaccine trial, captures the effect of one person's vaccination in protecting a second person from infection even if the first is infected. We also develop two sensitivity analysis techniques for causal effects in the presence of unmeasured confounding which generalize analogous techniques when interference is absent. These two techniques for unmeasured confounding are compared and contrasted. PMID:25620841

  19. Sensitivity testing and analysis

    SciTech Connect

    Neyer, B.T.

    1991-01-01

    New methods of sensitivity testing and analysis are proposed. The new test method utilizes maximum likelihood estimates to pick the next test level in order to maximize knowledge of both the mean, μ, and the standard deviation, σ, of the population. Simulation results demonstrate that this new test provides better estimators (less bias and smaller variance) of both μ and σ than the other commonly used tests (Probit, Bruceton, Robbins-Monro, Langlie). A new method of analyzing sensitivity tests is also proposed. It uses the likelihood ratio test to compute regions of arbitrary confidence. It can calculate confidence regions for μ, σ, and arbitrary percentiles. Unlike presently used methods, such as the program ASENT, which is based on the Cramer-Rao theorem, it can analyze the results of all sensitivity tests, and it does not significantly underestimate the size of the confidence regions. The new test and analysis methods will be explained and compared to the presently used methods. 19 refs., 12 figs.
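    A minimal sketch of the maximum-likelihood machinery underlying such go/no-go sensitivity tests, assuming a normal threshold distribution so that P(response at level x) = Φ((x − μ)/σ). The data and the crude grid search below are illustrative placeholders, not Neyer's actual level-selection algorithm, which picks each new test level to maximize information about μ and σ:

```python
import math

def loglik(mu, sigma, trials):
    """Probit log-likelihood for go/no-go data: P(fire at x) = Phi((x - mu)/sigma)."""
    ll = 0.0
    for x, fired in trials:
        p = 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        ll += math.log(p) if fired else math.log(1.0 - p)
    return ll

def mle_grid(trials, mus, sigmas):
    """Crude grid-search MLE for (mu, sigma); real tests use proper optimizers."""
    return max(((loglik(m, s, trials), m, s) for m in mus for s in sigmas))[1:]

# Hypothetical drop-test data: (stimulus level, response), 1 = fired.
trials = [(4.0, 0), (4.5, 0), (4.7, 1), (4.9, 0), (5.0, 1),
          (5.1, 0), (5.3, 1), (5.5, 1), (6.0, 1)]
mu_grid = [i / 100.0 for i in range(300, 701)]
sigma_grid = [i / 100.0 for i in range(5, 101)]
mu_hat, sigma_hat = mle_grid(trials, mu_grid, sigma_grid)
```

    Note that the data must overlap (a "fire" below some "no-fire") for the MLE of σ to be finite; perfectly separated data drive σ toward zero, one of the pathologies the proposed likelihood-ratio confidence regions are designed to handle honestly.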

  20. Evaluation of transverse dispersion effects in tank experiments by numerical modeling: parameter estimation, sensitivity analysis and revision of experimental design.

    PubMed

    Ballarini, E; Bauer, S; Eberhardt, C; Beyer, C

    2012-06-01

    Transverse dispersion represents an important mixing process for the transport of contaminants in groundwater and constitutes an essential prerequisite for geochemical and biodegradation reactions. Within this context, this work describes the detailed numerical simulation of highly controlled laboratory experiments using uranine, bromide and oxygen-depleted water as conservative tracers for the quantification of transverse mixing in porous media. Synthetic numerical experiments reproducing an existing laboratory set-up of a quasi-two-dimensional flow-through tank were performed to assess the applicability of an analytical solution of the 2D advection-dispersion equation for estimating transverse dispersivity as a fitting parameter. The fitted dispersivities were compared to the "true" values introduced in the numerical simulations, and the associated error could be precisely estimated. A sensitivity analysis was performed on the experimental set-up in order to evaluate the sensitivity of the measurements taken in the tank experiment to the individual hydraulic and transport parameters. From the results, an improved experimental set-up as well as a numerical evaluation procedure could be developed, which allow for a precise and reliable determination of dispersivities. The improved tank set-up was used for new laboratory experiments, performed at advective velocities of 4.9 m d⁻¹ and 10.5 m d⁻¹. Numerical evaluation of these experiments yielded a unique and reliable parameter set, which closely fits the measured tracer concentration data. For the porous medium with a grain size of 0.25-0.30 mm, the fitted longitudinal and transverse dispersivities were 3.49×10⁻⁴ m and 1.48×10⁻⁵ m, respectively. The procedures developed in this paper for the synthetic and rigorous design and evaluation of the experiments can be generalized and transferred to comparable applications.
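    A sketch of the kind of dispersivity fitting described here, using the well-known steady-state solution for transverse mixing across a plume fringe, c(x, y) = (c0/2) erfc(y / (2√(αT x))). The geometry, the noise-free synthetic profile and the grid search are illustrative assumptions, not the paper's actual tank set-up or evaluation procedure:

```python
import math

def conc(y, x, alpha_t, c0=1.0):
    """Steady-state transverse mixing profile across a plume fringe:
    c(x, y) = c0/2 * erfc(y / (2*sqrt(alpha_t * x)))."""
    return 0.5 * c0 * math.erfc(y / (2.0 * math.sqrt(alpha_t * x)))

def fit_alpha_t(ys, cs, x, candidates):
    """Least-squares grid search for the transverse dispersivity."""
    def sse(a):
        return sum((conc(y, x, a) - c) ** 2 for y, c in zip(ys, cs))
    return min(candidates, key=sse)

# Synthetic concentration profile at x = 0.5 m, generated with alpha_t = 1.5e-5 m.
true_alpha_t = 1.5e-5
x = 0.5
ys = [i / 1000.0 for i in range(-10, 11)]      # -10 mm .. +10 mm across the fringe
cs = [conc(y, x, true_alpha_t) for y in ys]
candidates = [i * 1e-6 for i in range(1, 101)]  # 1e-6 .. 1e-4 m
alpha_hat = fit_alpha_t(ys, cs, x, candidates)
```

    With noise-free data the fit recovers the input value exactly, which is essentially the "synthetic experiment" check the authors use to quantify the error of the analytical-solution fitting before applying it to real tank data.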

  1. Sensitivity analysis of the electrostatic force distance curve using Sobol’s method and design of experiments

    NASA Astrophysics Data System (ADS)

    Alhossen, I.; Villeneuve-Faure, C.; Baudoin, F.; Bugarin, F.; Segonds, S.

    2017-01-01

    Previous studies have demonstrated that the electrostatic force distance curve (EFDC) is a relevant way of probing injected charge in 3D. However, the EFDC needs a thorough investigation to be accurately analyzed and to provide information about charge localization. Interpreting the EFDC in terms of charge distribution is not straightforward from an experimental point of view. In this paper, a sensitivity analysis of the EFDC is performed using buried electrodes as a first approximation. In particular, the influence of input factors such as the electrode width, depth and applied potential is investigated. To reach this goal, the EFDC is fitted to a four-parameter law, called the logistic law, and the influence of the electrode parameters on the law parameters is investigated. Then, two methods, Sobol's method and the factorial design of experiments, are applied to quantify the effect of each factor on each parameter of the logistic law. Complementary results are obtained from both methods, demonstrating that the EFDC is not a superposition of the contributions of the individual electrode parameters but exhibits a strong contribution from electrode parameter interactions. Furthermore, thanks to these results, a matrix model has been developed to predict EFDCs for any combination of electrode characteristics. A good correlation is observed with the experiments, which is promising for charge investigation using an EFDC.
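    For readers unfamiliar with Sobol's method, a minimal pick-freeze (Saltelli-style) estimator of first-order indices is sketched below on a toy additive model; the EFDC/logistic-law model from the paper is not reproduced here. With independent U(0,1) inputs, the indices of x1 + 2·x2 are analytically 0.2 and 0.8:

```python
import random

def sobol_first_order(f, dim, n=40000, seed=7):
    """Pick-freeze estimate of first-order Sobol indices for a model
    with independent U(0,1) inputs (Saltelli 2010 estimator)."""
    rnd = random.Random(seed)
    A = [[rnd.random() for _ in range(dim)] for _ in range(n)]
    B = [[rnd.random() for _ in range(dim)] for _ in range(n)]
    fA = [f(x) for x in A]
    fB = [f(x) for x in B]
    mean = sum(fA) / n
    var = sum((y - mean) ** 2 for y in fA) / n
    indices = []
    for i in range(dim):
        # A with its i-th column replaced by B's i-th column
        fABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(fb * (fab - fa) for fb, fab, fa in zip(fB, fABi, fA)) / (n * var)
        indices.append(s_i)
    return indices

# Additive toy model: analytic first-order indices are 0.2 and 0.8.
S = sobol_first_order(lambda x: x[0] + 2.0 * x[1], dim=2)
```

    For a purely additive model like this one the first-order indices sum to 1; the interaction effects the paper reports would show up as a shortfall of that sum, with the remainder captured by higher-order or total-effect indices.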

  2. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  3. Summary Findings from the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.

  4. WASTE PACKAGE DESIGN SENSITIVITY REPORT

    SciTech Connect

    P. Mecharet

    2001-03-09

    The purpose of this technical report is to present the current designs for waste packages, to determine which designs will be evaluated for the Site Recommendation (SR) or License Application (LA), and to demonstrate how the design will be shown to comply with the applicable design criteria. The evaluations to support SR or LA are based on system description document criteria. The objective is to determine those system description document criteria for which compliance is to be demonstrated for SR and, having identified the criteria, to refer to the documents that show compliance. In addition, those system description document criteria for which compliance will be addressed for LA are identified, with a distinction made between two steps of the LA process: the LA-Construction Authorization (LA-CA) phase on one hand, and the LA-Receive and Possess (LA-R&P) phase on the other. The scope of this work encompasses the Waste Package Project disciplines of criticality, shielding, structural, and thermal analysis.

  5. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 reference manual

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
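    As a flavor of the parameter study capability mentioned above, here is a hand-rolled centered parameter study on a toy quadratic response. DAKOTA automates this pattern (among many others) around external simulation codes; nothing below uses DAKOTA's actual input syntax, and the response function is a stand-in:

```python
def centered_parameter_study(f, x0, deltas):
    """One-at-a-time centered differences about a nominal point x0,
    in the spirit of a centered parameter study: df/dx_i ~ (f(x0+h) - f(x0-h)) / 2h."""
    grads = []
    for i, h in enumerate(deltas):
        up = list(x0); up[i] += h
        dn = list(x0); dn[i] -= h
        grads.append((f(up) - f(dn)) / (2.0 * h))
    return grads

# Toy response (hypothetical): a quadratic bowl with analytic gradient (2, 8) at (1, 2).
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2
g = centered_parameter_study(f, [1.0, 2.0], [1e-4, 1e-4])
```

    Each parameter costs two extra function evaluations, which is why frameworks like DAKOTA also offer sampling- and surrogate-based alternatives when the simulation is expensive.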

  6. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's reference manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.

  7. Structural sensitivity analysis: Methods, applications and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. The techniques include a finite difference step size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Some of the critical needs in the structural sensitivity area are indicated along with plans for dealing with some of those needs.

  8. Structural sensitivity analysis: Methods, applications, and needs

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.; Camarda, C. J.; Walsh, J. L.

    1984-01-01

    Some innovative techniques applicable to sensitivity analysis of discretized structural systems are reviewed. These techniques include a finite-difference step-size selection algorithm, a method for derivatives of iterative solutions, a Green's function technique for derivatives of transient response, a simultaneous calculation of temperatures and their derivatives, derivatives with respect to shape, and derivatives of optimum designs with respect to problem parameters. Computerized implementations of sensitivity analysis and applications of sensitivity derivatives are also discussed. Finally, some of the critical needs in the structural sensitivity area are indicated along with Langley plans for dealing with some of these needs.

  9. Accurate adjoint design sensitivities for nano metal optics.

    PubMed

    Hansen, Paul; Hesselink, Lambertus

    2015-09-07

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics.

  10. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    SciTech Connect

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane

    2014-05-01

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  11. DAKOTA, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 user's manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  12. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, user's manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  13. DAKOTA : a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis. Version 5.0, developers manual.

    SciTech Connect

    Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.

    2010-05-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  14. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis: version 4.0 developers manual.

    SciTech Connect

    Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.

    2006-10-01

    The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.

  15. Design and Sensitivity Analysis Simulation of a Novel 3D Force Sensor Based on a Parallel Mechanism

    PubMed Central

    Yang, Eileen Chih-Ying

    2016-01-01

    Automated force measurement is one of the most important technologies in realizing intelligent automation systems. However, while many methods are available for micro-force sensing, measuring large three-dimensional (3D) forces and loads remains a significant challenge. Accordingly, the present study proposes a novel 3D force sensor based on a parallel mechanism. The transformation function and sensitivity index of the proposed sensor are analytically derived. The simulation results show that the sensor has a larger effective measuring capability than traditional force sensors. Moreover, the sensor has a greater measurement sensitivity for horizontal forces than for vertical forces over most of the measurable force region. In other words, compared to traditional force sensors, the proposed sensor is more sensitive to shear forces than normal forces. PMID:27999246

  16. D2PC sensitivity analysis

    SciTech Connect

    Lombardi, D.P.

    1992-08-01

    The Chemical Hazard Prediction Model (D2PC) developed by the US Army will play a critical role in the Chemical Stockpile Emergency Preparedness Program by predicting chemical agent transport and dispersion through the atmosphere after an accidental release. To aid in the analysis of the output calculated by D2PC, this sensitivity analysis was conducted to provide information on model response to a variety of input parameters. The sensitivity analysis focused on six accidental release scenarios involving chemical agents VX, GB, and HD (sulfur mustard). Two categories, corresponding to conservative most-likely and worst-case meteorological conditions, provided the reference for standard input values. D2PC displayed a wide range of sensitivity to the various input parameters. The model displayed the greatest overall sensitivity to wind speed, mixing height, and breathing rate. For other input parameters, sensitivity was mixed but generally lower. Sensitivity varied not only with the parameter, but also over the range of values input for a single parameter. This information on model response can provide useful data for interpreting D2PC output.
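    The kind of normalized input-output sensitivity reported in such studies can be sketched as a one-at-a-time elasticity, d ln C / d ln p, evaluated by centered differences. The box-model surrogate below, with concentration proportional to Q / (u · h_mix), is a stand-in for illustration only, not D2PC; for this surrogate the elasticity with respect to wind speed is exactly -1:

```python
def elasticity(f, params, name, rel_step=1e-6):
    """Normalized one-at-a-time sensitivity d ln f / d ln p by centered
    differences, so parameters with different units can be compared directly."""
    p = dict(params)
    h = p[name] * rel_step
    p[name] += h
    up = f(**p)
    p[name] -= 2.0 * h
    dn = f(**p)
    return (up - dn) / (2.0 * h) * params[name] / f(**params)

# Hypothetical box-model surrogate (not D2PC): well-mixed layer of height h_mix.
def conc(Q, u, h_mix):
    return Q / (u * h_mix)

params = {"Q": 2.0, "u": 3.0, "h_mix": 500.0}
e_u = elasticity(conc, params, "u")   # halving wind speed doubles concentration
```

    For nonlinear models like D2PC such elasticities are not constant, which is exactly why the study sweeps each parameter over a range rather than reporting a single derivative at the nominal point.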

  17. A measurement system analysis with design of experiments: Investigation of the adhesion performance of a pressure sensitive adhesive with the probe tack test.

    PubMed

    Michaelis, Marc; Leopold, Claudia S

    2015-12-30

    The tack of a pressure sensitive adhesive (PSA) is not an inherent material property and strongly depends on the measurement conditions. Following the concept of a measurement system analysis (MSA), influencing factors of the probe tack test were investigated by a design of experiments (DoE) approach. A response surface design with 38 runs was built to evaluate the influence of detachment speed, dwell time, contact force, adhesive film thickness, and API content on tack, determined as the maximum of the stress-strain curve (σmax). It could be shown that all investigated factors have a significant effect on the response and that the DoE approach made it possible to detect two-factor interactions between dwell time, contact force, adhesive film thickness, and API content. Surprisingly, it was found that tack increases with decreasing, not increasing, adhesive film thickness.
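A response surface of this kind is typically fit by least squares on coded factors. The sketch below uses two hypothetical factors and synthetic data; the negative thickness coefficient is chosen to mirror the reported finding that tack rises as film thickness falls:

```python
import numpy as np

# Two-factor slice of a quadratic response surface with interaction:
# sigma_max ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
# (x1 = coded dwell time, x2 = coded film thickness; data are synthetic).
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 38)
x2 = rng.uniform(-1, 1, 38)
signal = 5.0 + 1.2 * x1 - 0.8 * x2 + 0.5 * x1 * x2
y = signal + rng.normal(0, 0.05, 38)          # noisy "tack" measurements

X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12, b11, b22 = beta              # fitted model coefficients
```

A significant b12 term is exactly the kind of two-factor interaction the DoE approach detects, and the sign of b2 encodes the direction of the thickness effect.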

  18. Adjoint sensitivity analysis of an ultrawideband antenna

    SciTech Connect

    Stephanson, M B; White, D A

    2011-07-28

    The frequency domain finite element method using H(curl)-conforming finite elements is a robust technique for full-wave analysis of antennas. As computers become more powerful, it is becoming feasible not only to predict antenna performance, but also to compute the sensitivity of antenna performance with respect to multiple parameters. This sensitivity information can then be used for optimization of the design or specification of manufacturing tolerances. In this paper we review the adjoint method for sensitivity calculation and apply it to the problem of optimizing an ultrawideband antenna.
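For a discretized linear model K(p)u = f with a scalar objective J = c·u, the adjoint method needs only one extra solve with the transpose of K to obtain dJ/dp for any number of parameters p. A minimal sketch with a hypothetical parameterization K(p) = K0 + p·K1:

```python
import numpy as np

# Adjoint sensitivity for K(p) u = f, objective J = c @ u.
K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])
K1 = np.array([[1.0, 0.0], [0.0, 2.0]])   # dK/dp for this parameterization
f = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])
p = 0.5

K = K0 + p * K1
u = np.linalg.solve(K, f)                 # one forward solve
lam = np.linalg.solve(K.T, c)             # one adjoint solve, reused for
dJ_dp = -lam @ (K1 @ u)                   # every parameter's dK/dp

# Verify against a central finite difference in p
h = 1e-6
Jp = c @ np.linalg.solve(K0 + (p + h) * K1, f)
Jm = c @ np.linalg.solve(K0 + (p - h) * K1, f)
fd = (Jp - Jm) / (2 * h)
```

The cost advantage is that `lam` does not depend on which parameter is varied, so many sensitivities come at the price of matrix-vector products rather than extra solves.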

  19. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism.

    PubMed

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-04-02

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor.

  20. Design and Vibration Sensitivity Analysis of a MEMS Tuning Fork Gyroscope with an Anchored Diamond Coupling Mechanism

    PubMed Central

    Guan, Yanwei; Gao, Shiqiao; Liu, Haipeng; Jin, Lei; Niu, Shaohua

    2016-01-01

    In this paper, a new micromachined tuning fork gyroscope (TFG) with an anchored diamond coupling mechanism is proposed while the mode ordering and the vibration sensitivity are also investigated. The sense-mode of the proposed TFG was optimized through use of an anchored diamond coupling spring, which enables the in-phase mode frequency to be 108.3% higher than the anti-phase one. The frequencies of the in- and anti-phase modes in the sense direction are 9799.6 Hz and 4705.3 Hz, respectively. The analytical solutions illustrate that the stiffness difference ratio of the in- and anti-phase modes is inversely proportional to the output induced by the vibration from the sense direction. Additionally, FEM simulations demonstrate that the stiffness difference ratio of the anchored diamond coupling TFG is 16.08 times larger than the direct coupling one while the vibration output is reduced by 94.1%. Consequently, the proposed new anchored diamond coupling TFG can structurally increase the stiffness difference ratio to improve the mode ordering and considerably reduce the vibration sensitivity without sacrificing the scale factor. PMID:27049385

  1. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity. The central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  2. Addressing the expected survival benefit for clinical trial design in metastatic castration-resistant prostate cancer: Sensitivity analysis of randomized trials.

    PubMed

    Massari, Francesco; Modena, Alessandra; Ciccarese, Chiara; Pilotto, Sara; Maines, Francesca; Bracarda, Sergio; Sperduti, Isabella; Giannarelli, Diana; Carlini, Paolo; Santini, Daniele; Tortora, Giampaolo; Porta, Camillo; Bria, Emilio

    2016-02-01

    We performed a sensitivity analysis, cumulating all randomized clinical trials (RCTs) in which patients with metastatic castration-resistant prostate cancer (mCRPC) received systemic therapy, to evaluate whether comparisons across RCTs may lead to biased survival estimates. A significant overall survival (OS) difference according to therapeutic strategy was more likely to be found in RCTs evaluating hormonal drugs than in studies testing immunotherapy, chemotherapy, or other strategies. With regard to the control arm, a significant OS effect was found for placebo-controlled trials versus studies comparing the experimental treatment with active therapies. Finally, with regard to docetaxel (DOC) timing, the OS benefit was more likely to be demonstrated in the post-DOC setting than in the DOC and pre-DOC settings. These data suggest that clinical trial design should take into account new benchmarks such as the type of treatment strategy, the choice of the comparator, and the phase of the disease in relation to the administration of standard chemotherapy.

  3. Stellarator Coil Design and Plasma Sensitivity

    SciTech Connect

    Long-Poe Ku and Allen H. Boozer

    2010-11-03

    The rich information contained in the plasma response to external magnetic perturbations can be used to help design stellarator coils more effectively. We demonstrate the feasibility by first developing a simple, direct method to study perturbations in stellarators that do not break stellarator symmetry and periodicity. The method applies a small perturbation to the plasma boundary and evaluates the resulting perturbed free-boundary equilibrium to build up a sensitivity matrix for the important physics attributes of the underlying configuration. Using this sensitivity information, design methods for better stellarator coils are then developed. The procedure and a proof-of-principle application are given that (1) determine the spatial distributions of external normal magnetic field at the location of the unperturbed plasma boundary to which the plasma properties are most sensitive, (2) determine the distributions of external normal magnetic field that can be produced most efficiently by distant coils, and (3) choose the ratios of the magnitudes of the efficiently produced magnetic distributions so that the sensitive plasma properties can be controlled. Using these methods, sets of modular coils are found for the National Compact Stellarator Experiment (NCSX) that are either smoother or can be located much farther from the plasma boundary than those of the present design.
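Steps (1) and (2) amount to a singular value decomposition of the sensitivity matrix: the leading right singular vectors are the boundary-field distributions the plasma responds to most strongly. A sketch with a random stand-in matrix (the real matrix would come from perturbed free-boundary equilibria):

```python
import numpy as np

# Hypothetical sensitivity matrix S: rows are physics attributes, columns
# are normal-field perturbation modes on the plasma boundary, with entries
# S[i, j] = d(attribute_i)/d(mode_j).  A random matrix stands in for the
# equilibrium computation.
rng = np.random.default_rng(1)
S = rng.normal(size=(6, 12))

U, sing, Vt = np.linalg.svd(S, full_matrices=False)
most_sensitive_mode = Vt[0]      # field distribution the plasma feels most
response_gain = sing[0]          # attribute response per unit of that mode
# Trailing rows of Vt span distributions the plasma barely responds to;
# distant coils are free to produce those without degrading the configuration.
```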

  4. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  5. Sensitivity analysis of uncertainty in model prediction.

    PubMed

    Russi, Trent; Packard, Andrew; Feeley, Ryan; Frenklach, Michael

    2008-03-27

    Data Collaboration is a framework designed to make inferences from experimental observations in the context of an underlying model. In the prior studies, the methodology was applied to prediction on chemical kinetics models, consistency of a reaction system, and discrimination among competing reaction models. The present work advances Data Collaboration by developing sensitivity analysis of uncertainty in model prediction with respect to uncertainty in experimental observations and model parameters. Evaluation of sensitivity coefficients is performed alongside the solution of the general optimization ansatz of Data Collaboration. The obtained sensitivity coefficients allow one to determine which experiment/parameter uncertainty contributes the most to the uncertainty in model prediction, rank such effects, consider new or even hypothetical experiments to perform, and combine the uncertainty analysis with the cost of uncertainty reduction, thereby providing guidance in selecting an experimental/theoretical strategy for community action.

  6. [Sensitivity analysis in health investment projects].

    PubMed

    Arroyave-Loaiza, G; Isaza-Nieto, P; Jarillo-Soto, E C

    1994-01-01

    This paper discusses some of the concepts and methodologies frequently used in sensitivity analyses in the evaluation of investment programs. In addition, a concrete example is presented: a hospital investment in which four indicators were used to design different scenarios and their impact on investment costs. This paper emphasizes the importance of this type of analysis in the field of management of health services, and more specifically in the formulation of investment programs.

  7. Crashworthiness Design Parameter Sensitivity Analysis.

    DTIC Science & Technology

    1981-02-01

    [Abstract not available; the indexed text contains only report fragments: Table 22, "Percentage of Crew & Troops Killed and Injured," and a summary of U.S. Army crashworthy fuel systems accident experience from April 1970 (Department of Defense, Washington, D.C., 18 October 1979).]

  8. Designing robots for care: care centered value-sensitive design.

    PubMed

    van Wynsberghe, Aimee

    2013-06-01

    The prospective robots in healthcare intended to be included within the conclave of the nurse-patient relationship--what I refer to as care robots--require rigorous ethical reflection to ensure their design and introduction do not impede the promotion of values and the dignity of patients at such a vulnerable and sensitive time in their lives. The ethical evaluation of care robots requires insight into the values at stake in the healthcare tradition. What's more, given the stage of their development and lack of standards provided by the International Organization for Standardization to guide their development, ethics ought to be included into the design process of such robots. The manner in which this may be accomplished, as presented here, uses the blueprint of the Value-sensitive design approach as a means for creating a framework tailored to care contexts. Using care values as the foundational values to be integrated into a technology and using the elements in care, from the care ethics perspective, as the normative criteria, the resulting approach may be referred to as care centered value-sensitive design. The framework proposed here allows for the ethical evaluation of care robots both retrospectively and prospectively. By evaluating care robots in this way, we may ultimately ask what kind of care we, as a society, want to provide in the future.

  9. Visualization of the Invisible, Explanation of the Unknown, Ruggedization of the Unstable: Sensitivity Analysis, Virtual Tryout and Robust Design through Systematic Stochastic Simulation

    SciTech Connect

    Zwickl, Titus; Carleer, Bart; Kubli, Waldemar

    2005-08-05

    In the past decade, sheet metal forming simulation became a well established tool to predict the formability of parts. In the automotive industry, this has enabled significant reduction in the cost and time for vehicle design and development, and has helped to improve the quality and performance of vehicle parts. However, production stoppages for troubleshooting and unplanned die maintenance, as well as production quality fluctuations, continue to plague manufacturing cost and time. The focus therefore has shifted in recent times beyond mere feasibility to robustness of the product and process being engineered. Ensuring robustness is the next big challenge for the virtual tryout/simulation technology. We introduce new methods, based on systematic stochastic simulations, to visualize the behavior of the part during the whole forming process -- in simulation as well as in production. Sensitivity analysis explains the response of the part to changes in influencing parameters. Virtual tryout allows quick exploration of changed designs and conditions. Robust design and manufacturing guarantees quality and process capability for the production process. While conventional simulations helped to reduce development time and cost by ensuring feasible processes, robustness engineering tools have the potential for far greater cost and time savings. Through examples we illustrate how expected and unexpected behavior of deep drawing parts may be tracked down, identified, and assigned to the influential parameters. With this knowledge, defects can be eliminated or springback compensated, for example; the response of the part to uncontrollable noise can be predicted and minimized. The newly introduced methods enable more reliable and predictable stamping processes in general.

  10. Design rules to enhance HUMS sensitivity to spur gear faults

    NASA Astrophysics Data System (ADS)

    Liu, Lin

    This dissertation describes the investigation of spur gear design rules that may enhance the sensitivity of conventional fault metrics to typical gear tooth damage. Spur gears represent a fundamental gear geometry that is commonly used in complex transmission systems. This thesis explores the influence of simple spur gear design parameters such as diametral pitch, tooth number, and pressure angle on gear tooth fault sensitivity. Using static and dynamic analysis, coupled with vibration-based fault metrics (FM0, FM4, DI), this work attempts to determine the influence of RPM level and gear geometry on vibration signatures. Specifically, three types of gear damage are modeled: pitting, crack, and wear damage. Using Cornell's gear tooth deflection method, a quasi-static nonlinear gear mesh model is developed that includes effects associated with gear tooth damage. This quasi-static model is integrated into a rigid-body model of spur gear mesh dynamics under an applied load. This model is used to simulate the gear mesh dynamics under various RPM levels and applied torque loads. Additionally, the filtering effect of the bearing is investigated using a bearing model. Simulated vibration signatures of the bearing case with and without gear tooth damage are used as input to conventional vibration-based fault metrics. Assuming that for each spur gear design the gear tooth critical tensile or contact stress is held constant, this dynamic model enables one to vary gear design parameters to evaluate their effect on the vibration signature with and without damage. In this dissertation, the sensitivity of gear design parameters to gear tooth damage is investigated under quasi-static and dynamic loading conditions. Experimental validation is carried out for spur gear designs with various diametral pitch values and for spur gears with increasing crack damage. In the static analysis, enhanced sensitivity is measured with respect to the transmission error.
In the dynamic

  11. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
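The idea can be illustrated by mutation over random inputs: inject a fault at a location and measure how often the output changes. The two mutants below are contrived so that one fault always propagates and one never does:

```python
import random

def original(x):
    return x * x + 1

def mutant_visible(x):       # fault ('+ 1' became '- 1') always reaches output
    return x * x - 1

def mutant_hidden(x):        # fault ('x*x' became 'abs(x)**2') never changes
    return abs(x) ** 2 + 1   # the output for integer inputs

def sensitivity(orig, mut, trials=2000, seed=0):
    """Estimated probability that the injected fault propagates to the
    output under random black box testing."""
    rng = random.Random(seed)
    inputs = [rng.randint(-100, 100) for _ in range(trials)]
    return sum(orig(v) != mut(v) for v in inputs) / trials
```

A location whose mutants score near zero is precisely one where random black box testing is unlikely to expose real faults, which is the testability insight the paper draws.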

  12. Design and performance of a combined secondary ion mass spectrometry-scanning probe microscopy instrument for high sensitivity and high-resolution elemental three-dimensional analysis

    SciTech Connect

    Wirtz, Tom; Fleming, Yves; Gerard, Mathieu; Gysin, Urs; Glatzel, Thilo; Meyer, Ernst; Wegmann, Urs; Maier, Urs; Odriozola, Aitziber Herrero; Uehli, Daniel

    2012-06-15

    State-of-the-art secondary ion mass spectrometry (SIMS) instruments allow producing 3D chemical mappings with excellent sensitivity and spatial resolution. Several important artifacts however arise from the fact that SIMS 3D mapping does not take into account the surface topography of the sample. In order to correct these artifacts, we have integrated a specially developed scanning probe microscopy (SPM) system into a commercial Cameca NanoSIMS 50 instrument. This new SPM module, which was designed as a DN200CF flange-mounted bolt-on accessory, includes a new high-precision sample stage, a scanner with a range of 100 μm in x and y direction, and a dedicated SPM head which can be operated in the atomic force microscopy (AFM) and Kelvin probe force microscopy modes. Topographical information gained from AFM measurements taken before, during, and after SIMS analysis as well as the SIMS data are automatically compiled into an accurate 3D reconstruction using the software program "SARINA," which was developed for this first combined SIMS-SPM instrument. The achievable lateral resolutions are 6 nm in the SPM mode and 45 nm in the SIMS mode. Elemental 3D images obtained with our integrated SIMS-SPM instrument on Al/Cu and polystyrene/poly(methyl methacrylate) samples demonstrate the advantages of the combined SIMS-SPM approach.

  13. Design and performance of a combined secondary ion mass spectrometry-scanning probe microscopy instrument for high sensitivity and high-resolution elemental three-dimensional analysis.

    PubMed

    Wirtz, Tom; Fleming, Yves; Gerard, Mathieu; Gysin, Urs; Glatzel, Thilo; Meyer, Ernst; Wegmann, Urs; Maier, Urs; Odriozola, Aitziber Herrero; Uehli, Daniel

    2012-06-01

    State-of-the-art secondary ion mass spectrometry (SIMS) instruments allow producing 3D chemical mappings with excellent sensitivity and spatial resolution. Several important artifacts however arise from the fact that SIMS 3D mapping does not take into account the surface topography of the sample. In order to correct these artifacts, we have integrated a specially developed scanning probe microscopy (SPM) system into a commercial Cameca NanoSIMS 50 instrument. This new SPM module, which was designed as a DN200CF flange-mounted bolt-on accessory, includes a new high-precision sample stage, a scanner with a range of 100 μm in x and y direction, and a dedicated SPM head which can be operated in the atomic force microscopy (AFM) and Kelvin probe force microscopy modes. Topographical information gained from AFM measurements taken before, during, and after SIMS analysis as well as the SIMS data are automatically compiled into an accurate 3D reconstruction using the software program "SARINA," which was developed for this first combined SIMS-SPM instrument. The achievable lateral resolutions are 6 nm in the SPM mode and 45 nm in the SIMS mode. Elemental 3D images obtained with our integrated SIMS-SPM instrument on Al/Cu and polystyrene/poly(methyl methacrylate) samples demonstrate the advantages of the combined SIMS-SPM approach.

  14. Design and performance of a combined secondary ion mass spectrometry-scanning probe microscopy instrument for high sensitivity and high-resolution elemental three-dimensional analysis

    NASA Astrophysics Data System (ADS)

    Wirtz, Tom; Fleming, Yves; Gerard, Mathieu; Gysin, Urs; Glatzel, Thilo; Meyer, Ernst; Wegmann, Urs; Maier, Urs; Odriozola, Aitziber Herrero; Uehli, Daniel

    2012-06-01

    State-of-the-art secondary ion mass spectrometry (SIMS) instruments allow producing 3D chemical mappings with excellent sensitivity and spatial resolution. Several important artifacts however arise from the fact that SIMS 3D mapping does not take into account the surface topography of the sample. In order to correct these artifacts, we have integrated a specially developed scanning probe microscopy (SPM) system into a commercial Cameca NanoSIMS 50 instrument. This new SPM module, which was designed as a DN200CF flange-mounted bolt-on accessory, includes a new high-precision sample stage, a scanner with a range of 100 μm in x and y direction, and a dedicated SPM head which can be operated in the atomic force microscopy (AFM) and Kelvin probe force microscopy modes. Topographical information gained from AFM measurements taken before, during, and after SIMS analysis as well as the SIMS data are automatically compiled into an accurate 3D reconstruction using the software program "SARINA," which was developed for this first combined SIMS-SPM instrument. The achievable lateral resolutions are 6 nm in the SPM mode and 45 nm in the SIMS mode. Elemental 3D images obtained with our integrated SIMS-SPM instrument on Al/Cu and polystyrene/poly(methyl methacrylate) samples demonstrate the advantages of the combined SIMS-SPM approach.

  15. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    A review of the relevant literature makes it apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors into the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  16. Stiff DAE integrator with sensitivity analysis capabilities

    SciTech Connect

    Serban, R.

    2007-11-26

    IDAS is a general-purpose (serial and parallel) solver for differential-algebraic equation (DAE) systems with sensitivity analysis capabilities. It provides both forward and adjoint sensitivity analysis options.
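Forward sensitivity analysis augments the system with s = ∂y/∂p, which obeys its own linear ODE driven by the state. A minimal sketch for y' = -p·y using a fixed-step RK4 integrator (a simple stand-in for a variable-order BDF integrator such as IDAS; this is not the IDAS API):

```python
import math

# Forward sensitivity for y' = -p*y, y(0) = 1.  The sensitivity s = dy/dp
# obeys the companion ODE  s' = (df/dy)*s + df/dp = -p*s - y,  s(0) = 0,
# integrated alongside y.
def rhs(y, s, p):
    return -p * y, -p * s - y

def integrate(p, T=2.0, n=2000):
    y, s, h = 1.0, 0.0, T / n
    for _ in range(n):                      # classical RK4 on the pair (y, s)
        k1y, k1s = rhs(y, s, p)
        k2y, k2s = rhs(y + h/2*k1y, s + h/2*k1s, p)
        k3y, k3s = rhs(y + h/2*k2y, s + h/2*k2s, p)
        k4y, k4s = rhs(y + h*k3y, s + h*k3s, p)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        s += h/6 * (k1s + 2*k2s + 2*k3s + k4s)
    return y, s

y_T, s_T = integrate(p=0.7)
# Exact solution: y(T) = exp(-p*T), so s(T) = dy/dp = -T*exp(-p*T).
```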

  17. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  18. Longitudinal Genetic Analysis of Anxiety Sensitivity

    ERIC Educational Resources Information Center

    Zavos, Helena M. S.; Gregory, Alice M.; Eley, Thalia C.

    2012-01-01

    Anxiety sensitivity is associated with both anxiety and depression and has been shown to be heritable. Little, however, is known about the role of genetic influence on continuity and change of symptoms over time. The authors' aim was to examine the stability of anxiety sensitivity during adolescence. By using a genetically sensitive design, the…

  19. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
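The complex-variable (complex-step) approach referred to above evaluates the response at x + ih and reads the derivative off the imaginary part, avoiding the subtractive cancellation of finite differences. A sketch with a hypothetical scalar response function:

```python
import cmath

def response(x):
    # Hypothetical analytic response, standing in for e.g. a lift or drag
    # functional; the real solver's response would go here.
    return cmath.exp(x) * cmath.sin(x) / (1 + x * x)

def complex_step(f, x, h=1e-30):
    """df/dx via Im f(x + ih)/h: no subtraction of nearly equal values, so
    h can be taken extremely small without loss of accuracy."""
    return f(x + 1j * h).imag / h

def central_diff(f, x, h=1e-6):
    return (f(x + h) - f(x - h)).real / (2 * h)

x0 = 0.8
d_cs = complex_step(response, x0)
d_fd = central_diff(response, x0)
```

Agreement between the two estimates is the same kind of cross-check the paper uses to verify its complex-variable and adjoint-based sensitivities against each other.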

  20. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation, whereas the simplest approach requires varying parameter values one at a time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
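The correlation-based measure mentioned above can be sketched in a few lines: sample the inputs, run the model, and rank inputs by their correlation with the output. The linear model below is hypothetical, chosen so the expected correlations are known in advance:

```python
import random
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two samples."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
n = 2000
a = [rng.gauss(0, 1) for _ in range(n)]      # input A: small influence
b = [rng.gauss(0, 1) for _ in range(n)]      # input B: large influence
y = [ai + 3 * bi for ai, bi in zip(a, b)]    # hypothetical model y = A + 3B

r_a, r_b = pearson(a, y), pearson(b, y)
# For independent unit-variance inputs the expected values are
# r_a = 1/sqrt(10) ~ 0.316 and r_b = 3/sqrt(10) ~ 0.949.
```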

  1. Geothermal power, policy, and design: Using levelized cost of energy and sensitivity analysis to target improved policy incentives for the U.S. geothermal market

    NASA Astrophysics Data System (ADS)

    Richard, Christopher L.

    At the core of the geothermal industry is a need to identify how policy incentives can be better applied for optimal return. Literature from Bloomquist (1999), Doris et al. (2009), and McIlveen (2011) suggests that a more tailored approach to crafting geothermal policy is warranted. In this research, the guiding theory is based on those suggestions and is structured as a policy analysis using analytical methods. The methods used focus on qualitative and quantitative results. To address the qualitative sections of this research, an extensive review of contemporary literature is used to identify the frequency of use of specific barriers, followed by an industry survey to determine existing gaps. As a result, there is support for certain barriers and justification for expanding those barriers found within the literature. This method of inquiry is an initial point for structuring modeling tools to further quantify the research results as part of the theoretical framework. Analytical modeling utilizes the levelized cost of energy as a foundation for comparative assessment of policy incentives. Model parameters use assumptions drawn from literature and survey results to reflect unique attributes held by geothermal power technologies. Further testing by policy option provides an opportunity to assess the sensitivity of each variable with respect to applied policy. Master limited partnerships, feed-in tariffs, RD&D, and categorical exclusions all emerge as viable options for mitigating specific barriers associated with developing geothermal power. The results show reductions in levelized cost based upon the model's exclusive parameters. These results are also compared to contemporary policy options, highlighting the need for tailored policy, as discussed by Bloomquist (1999), Doris et al. (2009), and McIlveen (2011). It is the intent of this research to provide the reader with a descriptive understanding of the role of
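The levelized cost of energy is the discounted lifetime cost divided by the discounted lifetime generation, and a policy incentive can be swept through it to gauge sensitivity. All figures below are hypothetical, not the dissertation's inputs:

```python
def lcoe(capex, opex_per_yr, mwh_per_yr, rate, years, capex_subsidy=0.0):
    """Levelized cost of energy in $/MWh: discounted lifetime costs over
    discounted lifetime generation.  capex_subsidy models an up-front
    incentive as a fraction of capital cost covered."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex * (1 - capex_subsidy) + opex_per_yr * sum(disc)
    energy = mwh_per_yr * sum(disc)
    return costs / energy

# Hypothetical geothermal plant: $120M capex, $3M/yr O&M, 320 GWh/yr,
# 8% discount rate, 30-year life; compare no incentive vs. a 30% capital
# incentive (an illustrative investment-tax-credit-style policy).
base = lcoe(120e6, 3e6, 3.2e5, 0.08, 30)
with_incentive = lcoe(120e6, 3e6, 3.2e5, 0.08, 30, capex_subsidy=0.30)
```

Because geothermal is capital-intensive, capex-side incentives move the LCOE far more than an equivalent operating-cost change, which is the kind of barrier-specific result the modeling targets.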

  2. Sensitivity Analysis of Wing Aeroelastic Responses

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian

    1995-01-01

    Design for prevention of aeroelastic instability (that is, ensuring that the critical speeds leading to aeroelastic instability lie outside the operating range) is an integral part of the wing design process. Availability of the sensitivity derivatives of the various critical speeds with respect to shape parameters of the wing could be very useful to a designer in the initial design phase, when several design changes are made and the shape of the final configuration is not yet frozen. These derivatives are also indispensable for gradient-based optimization with aeroelastic constraints. In this study, the flutter characteristics of a typical section in subsonic compressible flow are examined using a state-space unsteady aerodynamic representation. The sensitivity of the flutter speed of the typical section with respect to its mass and stiffness parameters, namely, mass ratio, static unbalance, radius of gyration, bending frequency, and torsional frequency, is calculated analytically. A strip-theory formulation is newly developed to represent the unsteady aerodynamic forces on a wing. This is coupled with an equivalent plate structural model and solved as an eigenvalue problem to determine the critical speed of the wing. Flutter analysis of the wing is also carried out using a lifting-surface subsonic kernel function aerodynamic theory (FAST) and an equivalent plate structural model. Finite element modeling of the wing is done using NASTRAN so that wing structures made of spars, ribs, and top and bottom wing skins can be analyzed. The free vibration modes of the wing obtained from NASTRAN are input into FAST to compute the flutter speed. An equivalent plate model incorporating first-order shear deformation theory is then examined so that it can be used to model thick wings, where shear deformations are important. The sensitivity of natural frequencies to changes in shape parameters is obtained using ADIFOR. A simple optimization effort is made towards obtaining a minimum weight

  3. Recent developments in structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Adelman, Howard M.

    1988-01-01

    Recent developments are reviewed in two major areas of structural sensitivity analysis: sensitivity of static and transient response; and sensitivity of vibration and buckling eigenproblems. Recent developments from the standpoint of computational cost, accuracy, and ease of implementation are presented. In the area of static response, current interest is focused on sensitivity to shape variation and sensitivity of nonlinear response. Two general approaches are used for computing sensitivities: differentiation of the continuum equations followed by discretization, and the reverse approach of discretization followed by differentiation. It is shown that the choice of methods has important accuracy and implementation implications. In the area of eigenproblem sensitivity, there is a great deal of interest and significant progress in sensitivity of problems with repeated eigenvalues. In addition to reviewing recent contributions in this area, the paper raises the issue of differentiability and continuity associated with the occurrence of repeated eigenvalues.

  4. Sensitivity Analysis of Differential-Algebraic Equations and Partial Differential Equations

    SciTech Connect

    Petzold, L; Cao, Y; Li, S; Serban, R

    2005-08-09

    Sensitivity analysis generates essential information for model development, design optimization, parameter estimation, optimal control, model reduction and experimental design. In this paper we describe the forward and adjoint methods for sensitivity analysis, and outline some of our recent work on theory, algorithms and software for sensitivity analysis of differential-algebraic equation (DAE) and time-dependent partial differential equation (PDE) systems.
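The forward method described above can be illustrated on a scalar ODE: differentiating dy/dt = -p*y with respect to p yields a companion sensitivity equation that is integrated alongside the state. A minimal explicit-Euler sketch; the software the paper describes targets full DAE and PDE systems with far more capable integrators.

```python
# Forward sensitivity for dy/dt = -p*y: augment the state with s = dy/dp,
# which satisfies ds/dt = -y - p*s (differentiate the ODE w.r.t. p).
# Minimal explicit-Euler sketch, not a production DAE solver.

def integrate(p, t_end=1.0, n=10000):
    dt = t_end / n
    y, s = 1.0, 0.0          # y(0) = 1, s(0) = dy/dp = 0
    for _ in range(n):
        # simultaneous update so the old y feeds the sensitivity equation
        y, s = y + dt * (-p * y), s + dt * (-y - p * s)
    return y, s

y, s = integrate(p=2.0)
# Exact solution: y = exp(-p t), dy/dp = -t exp(-p t) = -exp(-2) at t = 1
print(y, s)
```

The adjoint method reverses this propagation and is preferred when there are many parameters but few output functionals.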

  5. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  6. A Sensitivity-Based Design Environment

    DTIC Science & Technology

    2007-11-02

    Sensitivity to Airfoil Rotation: Comparing Two Formulations. A Fundamental Study of Emulsification, Jie Li and Yuriko Y. Renardy: Emulsions arise in a wide range of industrial

  7. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (R_f) for each random variable based on the combined effects of the failure probability (P_f) of each mode of failure of a gravity retaining wall and the sensitivity of each of the random variables for these failure modes. P_f is calculated by Monte Carlo simulation, and the sensitivity analysis of each random variable is carried out by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the variations of the random variables. It is observed that R_f for the friction angle of the backfill soil (φ_1) increases and that for the cohesion of the foundation soil (c_2) decreases with an increase in the variation of φ_1, while R_f for the unit weights (γ_1 and γ_2) of both soils and the friction angle of the foundation soil (φ_2) remains almost constant under variation of the soil properties. The results compared well with some of the existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ_1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation is more than 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
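The Monte Carlo estimate of P_f used above can be sketched for a toy limit state g = resistance - load. The distributions and the linear resistance model here are hypothetical stand-ins, not the paper's wall geometry or failure modes.

```python
# Monte Carlo estimate of failure probability for a toy sliding limit state
# g = resistance - load; all parameters are hypothetical, not the paper's wall.
import random

random.seed(0)
N = 100_000
failures = 0
for _ in range(N):
    phi = random.gauss(32.0, 2.0)      # backfill friction angle (deg), hypothetical
    load = random.gauss(100.0, 15.0)   # driving force (kN/m), hypothetical
    resistance = 4.0 * phi             # crude linear resistance model
    if resistance - load < 0:          # limit state violated -> failure
        failures += 1
pf = failures / N
print(pf)
```

Repeating the estimate with one input's variance perturbed, while the others are held fixed, gives the kind of per-variable sensitivity that the risk factor R_f combines with P_f.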

  8. Sensitivity analysis, optimization, and global critical points

    SciTech Connect

    Cacuci, D.G.

    1989-11-01

    The title of this paper suggests that sensitivity analysis, optimization, and the search for critical points in phase-space are somehow related; the existence of such a kinship has been undoubtedly felt by many of the nuclear engineering practitioners of optimization and/or sensitivity analysis. However, a unified framework for displaying this relationship has so far been lacking, especially in a global setting. The objective of this paper is to present such a global and unified framework and to suggest, within this framework, a new direction for future developments for both sensitivity analysis and optimization of the large nonlinear systems encountered in practical problems.

  9. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.

  10. Extended Forward Sensitivity Analysis for Uncertainty Quantification

    SciTech Connect

    Haihua Zhao; Vincent A. Mousseau

    2011-09-01

    Verification and validation (V&V) are playing increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients occurring in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification from validation. The traditional approach to uncertainty quantification is based on a 'black box' approach: the simulation tool is treated as an unknown signal generator, a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also inefficient for sensitivity analysis. In contrast to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these approaches, equations for the propagation of uncertainty are constructed and the sensitivities are solved for directly as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time-step and spatial-step sensitivity information reflects global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool to help uncertainty quantification. By knowing the relative sensitivity of time and space steps with respect to other physical parameters of interest, the simulation is allowed

  11. Aero-Structural Interaction, Analysis, and Shape Sensitivity

    NASA Technical Reports Server (NTRS)

    Newman, James C., III

    1999-01-01

    A multidisciplinary sensitivity analysis technique that has been shown to be independent of step-size selection is examined further. The accuracy of this step-size independent technique, which uses complex variables for determining sensitivity derivatives, has been previously established. The primary focus of this work is to validate the aero-structural analysis procedure currently being used. This validation consists of comparing computed and experimental data obtained for an Aeroelastic Research Wing (ARW-2). Since the aero-structural analysis procedure has the complex-variable modifications already included in the software, sensitivity derivatives can be computed automatically. Beyond design purposes, sensitivity derivatives can also be used to predict the solution at nearby conditions. The use of sensitivity derivatives for predicting the aero-structural characteristics of this configuration is demonstrated.
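The complex-variable technique referred to above rests on the identity f'(x) ≈ Im f(x + ih)/h. Because no subtraction of nearly equal quantities occurs, h can be made extremely small, which is why the derivative is effectively step-size independent. A minimal sketch on an analytic test function (not the paper's aero-structural code):

```python
# Complex-step derivative: f'(x) ≈ Im(f(x + ih)) / h. Unlike finite
# differences there is no subtractive cancellation, so h can be tiny
# and the result is effectively step-size independent.
import cmath

def f(x):
    return cmath.exp(x) * cmath.sin(x)   # any analytic function works

def complex_step(f, x, h=1e-30):
    return f(x + 1j * h).imag / h

d = complex_step(f, 1.0)
# Exact derivative: d/dx [e^x sin x] = e^x (sin x + cos x), evaluated at x = 1
print(d)
```

In practice the method requires only that the analysis code be recompiled with complex arithmetic, which is the modification the abstract mentions.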

  12. Evolution of Geometric Sensitivity Derivatives from Computer Aided Design Models

    NASA Technical Reports Server (NTRS)

    Jones, William T.; Lazzara, David; Haimes, Robert

    2010-01-01

    The generation of design parameter sensitivity derivatives is required for gradient-based optimization. Such sensitivity derivatives are elusive at best when working with geometry defined within the solid modeling context of Computer-Aided Design (CAD) systems. Solid modeling CAD systems are often proprietary and always complex, thereby necessitating ad hoc procedures to infer parameter sensitivity. A new perspective is presented that makes direct use of the hierarchical associativity of CAD features to trace their evolution and thereby track design parameter sensitivity. In contrast to ad hoc methods, this method provides a more concise procedure following the model design intent and determining the sensitivity of CAD geometry directly to its respective defining parameters.

  13. Coal Transportation Rate Sensitivity Analysis

    EIA Publications

    2005-01-01

    On December 21, 2004, the Surface Transportation Board (STB) requested that the Energy Information Administration (EIA) analyze the impact of changes in coal transportation rates on projected levels of electric power sector energy use and emissions. Specifically, the STB requested an analysis of changes in national and regional coal consumption and emissions resulting from adjustments in railroad transportation rates for Wyoming's Powder River Basin (PRB) coal using the National Energy Modeling System (NEMS). However, because NEMS operates at a relatively aggregate regional level and does not represent the costs of transporting coal over specific rail lines, this analysis reports on the impacts of interregional changes in transportation rates from those used in the Annual Energy Outlook 2005 (AEO2005) reference case.

  14. Sensitivity analysis of distributed volcanic source inversion

    NASA Astrophysics Data System (ADS)

    Cannavo', Flavio; Camacho, Antonio G.; González, Pablo J.; Puglisi, Giuseppe; Fernández, José

    2016-04-01

    A recently proposed algorithm (Camacho et al., 2011) claims to rapidly estimate magmatic sources from surface geodetic data without any a priori assumption about source geometry. The algorithm takes advantage of the fast calculation afforded by analytical models and adds the capability to model free-shape distributed sources. Assuming homogeneous elastic conditions, the approach can determine general geometrical configurations of pressure and/or density sources and/or sliding structures corresponding to prescribed values of anomalous density, pressure, and slip. These source bodies are described as aggregations of elemental point sources for pressure, density, and slip, and they fit the whole dataset (subject to some 3D regularity conditions). Although some examples and applications have already been presented to demonstrate the ability of the algorithm to reconstruct a magma pressure source (e.g., Camacho et al., 2011; Cannavò et al., 2015), a systematic analysis of the sensitivity and reliability of the algorithm is still lacking. In this explorative work we present results from a large statistical test designed to evaluate the advantages and limitations of the methodology by assessing its sensitivity to the free and constrained parameters involved in inversions. In particular, besides the source parameters, we focused on the ground deformation network topology and noise in measurements. The proposed analysis can be used for a better interpretation of the algorithm results in real-case applications. Camacho, A. G., González, P. J., Fernández, J. & Berrino, G. (2011) Simultaneous inversion of surface deformation and gravity changes by means of extended bodies with a free geometry: Application to deforming calderas. J. Geophys. Res. 116. Cannavò, F., Camacho, A. G., González, P. J., Mattia, M., Puglisi, G., Fernández, J. (2015) Real Time Tracking of Magmatic Intrusions by means of Ground Deformation Modeling during Volcanic Crises, Scientific Reports, 5 (10970) doi:10.1038/srep

  15. An analytical sensitivity method for use in integrated aeroservoelastic aircraft design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method that has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchical design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response to various parameters, such as wing bending natural frequency, is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exactly calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
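Using sensitivity derivatives to estimate response changes, as in the ±15 percent comparisons above, is a first-order Taylor prediction: r(p + dp) ≈ r(p) + (dr/dp) dp. A toy illustration with a made-up scalar response function standing in for the controlled-aircraft response (not the paper's model):

```python
# First-order use of a sensitivity derivative: predict the response at a
# nearby parameter value via r(p + dp) ≈ r(p) + (dr/dp) dp.
# The response model below is hypothetical, chosen only for illustration.
def response(p):
    return 1.0 / (1.0 + p * p)     # stand-in for a closed-loop response metric

p0 = 2.0
drdp = -2.0 * p0 / (1.0 + p0 * p0) ** 2      # analytic sensitivity at p0
for frac in (-0.15, 0.15):                   # +/- 15% parameter changes
    dp = frac * p0
    estimate = response(p0) + drdp * dp      # linear prediction
    exact = response(p0 + dp)                # recomputed "exact" response
    print(frac, round(estimate, 4), round(exact, 4))
```

The gap between the linear estimate and the recomputed response grows with the size of the parameter change, which is why the abstract reports good accuracy only within a moderate range.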

  17. Multiple predictor smoothing methods for sensitivity analysis.

    SciTech Connect

    Helton, Jon Craig; Storlie, Curtis B.

    2006-08-01

    The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
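The advantage claimed above for nonparametric smoothing can be seen on a toy problem: when an output depends on an input quadratically, linear regression reports almost no sensitivity, while even a crude local-mean smoother recovers the relationship. This is only a sketch; the paper's procedures (LOESS, additive models, projection pursuit, recursive partitioning) are considerably richer.

```python
# Why nonparametric smoothing can outrank linear regression for sensitivity:
# y = x^2 + noise, so the linear fit explains almost nothing while a crude
# local-mean (binned) smoother captures most of the variance.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 500)
y = x ** 2 + rng.normal(0, 0.05, 500)

# Linear-regression R^2
slope, intercept = np.polyfit(x, y, 1)
r2_lin = 1 - np.var(y - (slope * x + intercept)) / np.var(y)

# Local-mean smoother R^2: predict each y from the mean of its x-bin
bins = np.digitize(x, np.linspace(-1, 1, 21))
smooth = np.array([y[bins == b].mean() for b in bins])
r2_smooth = 1 - np.var(y - smooth) / np.var(y)
print(round(r2_lin, 3), round(r2_smooth, 3))
```

The same contrast drives the paper's conclusion: variance explained by a smoother, rather than by a fitted line, is the more informative sensitivity measure when input-output relationships are nonlinear.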

  18. Sensitivity analysis for solar plates

    NASA Technical Reports Server (NTRS)

    Aster, R. W.

    1986-01-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  19. Sensitivity analysis for solar plates

    NASA Astrophysics Data System (ADS)

    Aster, R. W.

    1986-02-01

    Economic evaluation methods and analyses of emerging photovoltaic (PV) technology since 1976 were prepared. This type of analysis was applied to the silicon research portion of the PV Program in order to determine the importance of this research effort in relation to the successful development of commercial PV systems. All four generic types of PV that use silicon were addressed: crystal ingots grown either by the Czochralski method or an ingot casting method; ribbons pulled directly from molten silicon; amorphous silicon thin films; and high-concentration lenses. Three technologies were analyzed: the Union Carbide fluidized bed reactor process, the Hemlock process, and the Union Carbide Komatsu process. The major components of each process were assessed in terms of the costs of capital equipment, labor, materials, and utilities. These assessments were encoded as the probabilities assigned by experts for achieving various cost values or production rates.

  20. Liquid Acquisition Device Design Sensitivity Study

    NASA Technical Reports Server (NTRS)

    VanDyke, M. K.; Hastings, L. J.

    2012-01-01

    In-space propulsion often necessitates the use of a capillary liquid acquisition device (LAD) to assure that gas-free liquid propellant is available to support engine restarts in microgravity. If a capillary screen-channel device is chosen, then the designer must determine the appropriate combination of screen mesh and channel geometry. A screen mesh selection that results in the smallest LAD width compared to any other screen candidate (for a constant length) is desirable; however, no single best screen exists for all LAD design requirements. Flow rate, percent fill, and acceleration are the most influential drivers for determining screen widths. Increased flow rates and reduced percent fills increase the through-the-screen flow pressure losses, which drive the LAD to increased widths regardless of screen choice. Similarly, increased acceleration levels and corresponding liquid head pressures drive the screen mesh selection toward a higher bubble point (liquid retention capability). After ruling out some screens on the basis of acceleration requirements alone, candidates can be identified by examining screens with small flow-loss-to-bubble-point ratios for a given condition (i.e., comparing screens at certain flow rates and fill levels). Within the same flow rate and fill level, the screen constants (inertia resistance coefficient, void fraction, screen pore or opening diameter, and bubble point) can become the driving forces in identifying the smaller flow-loss-to-bubble-point ratios.

  1. A PDE Sensitivity Equation Method for Optimal Aerodynamic Design

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1996-01-01

    The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.

  2. Ceramic tubesheet design analysis

    SciTech Connect

    Mallett, R.H.; Swindeman, R.W.

    1996-06-01

    A transport combustor is being commissioned at the Southern Services facility in Wilsonville, Alabama, to provide a gaseous product for the assessment of hot-gas filtering systems. One of the barrier filters incorporates a ceramic tubesheet to support candle filters. The ceramic tubesheet, designed and manufactured by Industrial Filter and Pump Manufacturing Company (IF&PM), is unique and offers distinct advantages over metallic systems in terms of density, resistance to corrosion, and resistance to creep at operating temperatures above 815°C (1500°F). Nevertheless, the operational requirements of the ceramic tubesheet are severe. The tubesheet is almost 1.5 m (55 in.) in diameter, has many penetrations, and must support the weight of the ceramic filters, coal ash accumulation, and a pressure drop of one atmosphere. Further, thermal stresses related to steady-state and transient conditions will occur. To gain a better understanding of the structural performance limitations, a contract was placed with Mallett Technology, Inc. to perform a thermal and structural analysis of the tubesheet design. The design analysis specification and a preliminary design analysis were completed in the early part of 1995. The analyses indicated that modifications to the design were necessary to reduce thermal stress, and it was necessary to complete the redesign before the final thermal/mechanical analysis could be undertaken. The preliminary analysis identified the need to confirm that the physical and mechanical properties data used in the design were representative of the material in the tubesheet. Subsequently, a few exploratory tests were performed at ORNL to evaluate the ceramic structural material.

  3. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  4. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational method (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational method uses the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with that of the state (Euler) equations. The stability analysis of the costate equations suggests that a converged and stable solution of the costate equations is possible only if their computational domain is transformed to take into account the reverse-flow nature of the costate equations. The application of the variational method to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational method shows a substantial gain in computational efficiency, i.e., computer time and memory, when compared with the finite
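The costate relationship underlying the derivation above can be summarized in generic discrete notation (the symbols here are illustrative, not the paper's): for a functional J(u, β) constrained by the state residual R(u, β) = 0,

```latex
% Generic adjoint (costate) sensitivity relation; notation illustrative.
\frac{dJ}{d\beta} = \frac{\partial J}{\partial \beta}
  - \lambda^{T}\,\frac{\partial R}{\partial \beta},
\qquad
\left(\frac{\partial R}{\partial u}\right)^{T}\lambda
  = \left(\frac{\partial J}{\partial u}\right)^{T}
```

One costate solve per functional yields sensitivities with respect to every design parameter β, which is the source of the computational efficiency the study reports.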

  5. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Rockenfeller, Robert; Günther, Michael; Schmitt, Syn; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treat initial conditions as parameters and to calculate second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method to identify particularly low sensitivities and detect superfluous parameters. An experimenter could use it to identify particularly high sensitivities and improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379
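    The forward (variational) sensitivity approach the abstract applies can be sketched on a Zajac-style linear activation ODE: differentiating the state equation with respect to a parameter yields a second ODE for the sensitivity, integrated alongside the state. The time constant and input below are illustrative values, not the paper's.

```python
import math

# Forward sensitivity of a Zajac-style linear activation ODE (illustrative).
#   da/dt = (u - a) / tau,  a(0) = 0, constant neural input u.
# Differentiating w.r.t. tau gives the sensitivity ODE for s = da/dtau:
#   ds/dt = -s/tau - (u - a)/tau**2,  s(0) = 0.

def integrate(tau, u=1.0, t_end=0.1, n=10000):
    dt = t_end / n
    a, s = 0.0, 0.0
    for _ in range(n):                  # explicit Euler with small steps
        da = (u - a) / tau
        ds = -s / tau - (u - a) / tau**2
        a += dt * da
        s += dt * ds
    return a, s

tau, u, t_end = 0.04, 1.0, 0.1          # tau = 40 ms, a plausible order of magnitude
a, s = integrate(tau)

# Analytic solution for constant input: a(t) = u*(1 - exp(-t/tau)),
# so da/dtau = -u * t / tau**2 * exp(-t/tau).
s_exact = -u * t_end / tau**2 * math.exp(-t_end / tau)
print(s, s_exact)   # numerical and analytic sensitivities agree closely
```

    A large |s| at a time of interest flags a parameter that must be estimated carefully; a persistently small one flags a candidate for model reduction, as the abstract notes.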

  6. Simple and Sensitive UPLC-MS/MS Method for High-Throughput Analysis of Ibrutinib in Rat Plasma: Optimization by Box-Behnken Experimental Design.

    PubMed

    2016-04-07

    Ibrutinib was the first Bruton's tyrosine kinase inhibitor approved by the U.S. Food and Drug Administration (FDA) for the treatment of mantle cell lymphoma, chronic lymphocytic leukemia, and Waldenström macroglobulinemia. The aim of this study was to develop a UPLC-tandem MS method for the high-throughput analysis of ibrutinib in rat plasma samples. The chromatographic conditions were optimized by the implementation of a Box-Behnken experimental design. Both ibrutinib and the internal standard (vilazodone; IS) were separated within 2 min using a mobile phase of 0.1% formic acid in acetonitrile and 0.1% formic acid in 10 mM ammonium acetate in a ratio of 80:20, eluted at a flow rate of 0.250 mL/min. A simple protein precipitation method was used for the sample cleanup procedure. Detection was performed in electrospray ionization (ESI) positive mode using multiple reaction monitoring with ion transitions of m/z 441.16 → 84.02 for ibrutinib and m/z 442.17 → 155.02 for the IS. All calibration curves were linear in the concentration range of 0.35 to 400 ng/mL (r² ≥ 0.997), with a lower limit of quantification of only 0.35 ng/mL. All validation parameter results were within the acceptance criteria of international regulatory guidelines. The developed assay was successfully applied to a pharmacokinetic study of a novel ibrutinib self-nanoemulsifying drug-delivery system formulation.
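    The linearity claim above (r² ≥ 0.997 over 0.35-400 ng/mL) comes from an ordinary least-squares calibration fit. A minimal sketch of that computation follows; the detector responses are synthetic stand-ins, not the paper's data.

```python
# Least-squares calibration line and coefficient of determination r**2,
# as used to assess linearity of an LC-MS/MS calibration curve.
# Concentrations span the reported range; responses are made up for illustration.

conc = [0.35, 1, 5, 20, 80, 200, 400]               # ng/mL
resp = [0.012, 0.033, 0.16, 0.63, 2.5, 6.3, 12.6]   # detector response (synthetic)

n = len(conc)
mx = sum(conc) / n
my = sum(resp) / n
sxx = sum((x - mx)**2 for x in conc)
sxy = sum((x - mx)*(y - my) for x, y in zip(conc, resp))
slope = sxy / sxx
intercept = my - slope * mx

ss_res = sum((y - (slope*x + intercept))**2 for x, y in zip(conc, resp))
ss_tot = sum((y - my)**2 for y in resp)
r2 = 1.0 - ss_res / ss_tot
print(f"slope={slope:.5f}, intercept={intercept:.4f}, r2={r2:.4f}")
```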

  7. NIR sensitivity analysis with the VANE

    NASA Astrophysics Data System (ADS)

    Carrillo, Justin T.; Goodin, Christopher T.; Baylot, Alex E.

    2016-05-01

    Near infrared (NIR) cameras, with peak sensitivity around 905-nm wavelengths, are increasingly used in object detection applications such as pedestrian detection, occupant detection in vehicles, and vehicle detection. In this work, we present the results of a simulated sensitivity analysis for object detection with NIR cameras. The analysis was conducted using high performance computing (HPC) to determine the environmental effects on object detection in different terrains and environmental conditions. The Virtual Autonomous Navigation Environment (VANE) was used to simulate high-resolution models for environment, terrain, vehicles, and sensors. In the experiment, an active fiducial marker was attached to the rear bumper of a vehicle. The camera was mounted on a following vehicle that trailed at varying standoff distances. Three different terrain conditions (rural, urban, and forest), two environmental conditions (clear and hazy), three different times of day (morning, noon, and evening), and six different standoff distances were used to perform the sensor sensitivity analysis. The NIR camera used for the simulation is the DMK firewire monochrome on a pan-tilt motor. Standoff distance was varied along with terrain and environmental conditions to determine the critical failure points for the sensor. Feature matching was used to detect the markers in each frame of the simulation, and the percentage of frames in which one of the markers was detected was recorded. Standoff distance had the biggest impact on the performance of the camera system, while the camera system was not sensitive to environmental conditions.

  8. Sensitive chiral analysis by CE: an update.

    PubMed

    Sánchez-Hernández, Laura; Crego, Antonio Luis; Marina, María Luisa; García-Ruiz, Carmen

    2008-01-01

    A general view of the different strategies used in recent years to enhance detection sensitivity in chiral analysis by CE is provided in this article. With this purpose, and in order to update the previous review by García-Ruiz et al., the articles published on this subject from January 2005 to March 2007 are considered. Three main strategies were employed to increase detection sensitivity in chiral analysis by CE: (i) the use of off-line sample treatment techniques, (ii) the employment of in-capillary preconcentration techniques based on electrophoretic principles, and (iii) the use of detection systems alternative to the widely employed on-column UV-Vis absorption detection. Combinations of two or three of the above-mentioned strategies gave rise to concentration detection limits as low as 10⁻¹⁰ M, enabling enantiomer analysis in a variety of real samples including complex biological matrices.

  9. A strategy to design highly efficient porphyrin sensitizers for dye-sensitized solar cells.

    PubMed

    Chang, Yu-Cheng; Wang, Chin-Li; Pan, Tsung-Yu; Hong, Shang-Hao; Lan, Chi-Ming; Kuo, Hshin-Hui; Lo, Chen-Fu; Hsu, Hung-Yu; Lin, Ching-Yao; Diau, Eric Wei-Guang

    2011-08-21

    We designed highly efficient porphyrin sensitizers with two phenyl groups at meso-positions of the macrocycle bearing two ortho-substituted long alkoxyl chains for dye-sensitized solar cells; the ortho-substituted devices exhibit significantly enhanced photovoltaic performances, with the best porphyrin, LD14, showing J_SC = 19.167 mA cm⁻², V_OC = 0.736 V, FF = 0.711, and overall power conversion efficiency η = 10.17%.

  10. Microelectromechanical Resonant Accelerometer Designed with a High Sensitivity

    PubMed Central

    Zhang, Jing; Su, Yan; Shi, Qin; Qiu, An-Ping

    2015-01-01

    This paper describes the design and experimental evaluation of a silicon micro-machined resonant accelerometer (SMRA). This type of accelerometer works on the principle that a proof mass under acceleration applies force to two double-ended tuning fork (DETF) resonators, so that the frequency outputs of the two DETFs exhibit a differential shift. The dies of an SMRA are fabricated using silicon-on-insulator (SOI) processing and wafer-level vacuum packaging. This research aims to design a high-sensitivity SMRA, because high sensitivity allows the acceleration signal to be easily demodulated by frequency-counting techniques and lowers the noise level. This study applies the energy-consumed concept and the Nelder-Mead algorithm to the SMRA to address the design issues and further increase its sensitivity. Using this novel method, the sensitivity of the SMRA has been increased by 66.1%, which is attributable to both the re-designed DETF and the reduced energy loss on the micro-lever. The results of both the closed-form and finite-element analyses are described and are in agreement with one another. A resonant frequency of approximately 22 kHz, a frequency sensitivity of over 250 Hz per g, a one-hour bias stability of 55 μg, a bias repeatability (1σ) of 48 μg and a bias instability of 4.8 μg have been achieved. PMID:26633425
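    The differential DETF readout principle above can be sketched with a toy frequency-shift model: one fork is tensioned and the other compressed by the proof-mass load, and the difference rejects common-mode drift. The square-root load law and the numbers below are illustrative choices sized to the reported ~22 kHz resonance and ~250 Hz/g sensitivity, not the authors' design equations.

```python
import math

# Differential readout of a resonant accelerometer (illustrative model).
# Axial load from the proof mass shifts each DETF frequency roughly as
#   f(a) = f0 * sqrt(1 +/- k*a),
# and the differential output f1 - f2 rejects common-mode drift.

f0 = 22000.0      # unloaded resonance, Hz (order of the reported design)
k = 0.01136       # per-g load factor, chosen so the scale factor is ~250 Hz/g

def diff_freq(a_g):
    f1 = f0 * math.sqrt(1 + k * a_g)    # tensioned DETF
    f2 = f0 * math.sqrt(1 - k * a_g)    # compressed DETF
    return f1 - f2

# Small-signal scale factor near zero g, in Hz per g (central difference):
h = 1e-6
scale = (diff_freq(h) - diff_freq(-h)) / (2*h)
print(round(scale, 1))   # ~ f0*k, i.e. about 250 Hz/g for these values
```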

  11. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  12. SENSITIVITY ANALYSIS FOR OSCILLATING DYNAMICAL SYSTEMS

    PubMed Central

    WILKINS, A. KATHARINA; TIDOR, BRUCE; WHITE, JACOB; BARTON, PAUL I.

    2012-01-01

    Boundary value formulations are presented for exact and efficient sensitivity analysis, with respect to model parameters and initial conditions, of different classes of oscillating systems. Methods for the computation of sensitivities of derived quantities of oscillations such as period, amplitude and different types of phases are first developed for limit-cycle oscillators. In particular, a novel decomposition of the state sensitivities into three parts is proposed to provide an intuitive classification of the influence of parameter changes on period, amplitude and relative phase. The importance of the choice of time reference, i.e., the phase locking condition, is demonstrated and discussed, and its influence on the sensitivity solution is quantified. The methods are then extended to other classes of oscillatory systems in a general formulation. Numerical techniques are presented to facilitate the solution of the boundary value problem, and the computation of different types of sensitivities. Numerical results are verified by demonstrating consistency with finite difference approximations and are superior both in computational efficiency and in numerical precision to existing partial methods. PMID:23296349
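    The abstract verifies its boundary-value sensitivities against finite-difference approximations; that cheaper (if less precise) check is easy to sketch on a stand-in oscillator whose period is known in closed form. The system below is a simple harmonic oscillator, not one of the paper's limit-cycle examples.

```python
import math

# Finite-difference estimate of a period sensitivity, the kind of derived
# quantity the boundary-value formulation computes exactly.
# Stand-in system: x'' = -w**2 * x, with known period T(w) = 2*pi/w,
# so dT/dw = -2*pi/w**2 validates the numerical estimate.

def period(w, dt=1e-4):
    """Measure the period from successive upward zero crossings (RK4)."""
    x, v, t = 1.0, 0.0, 0.0
    crossings = []
    def f(x, v):
        return v, -w*w*x
    while len(crossings) < 2 and t < 100.0:
        # classical RK4 step for (x, v)
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
        k3x, k3v = f(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
        k4x, k4v = f(x + dt*k3x, v + dt*k3v)
        xn = x + dt*(k1x + 2*k2x + 2*k3x + k4x)/6
        vn = v + dt*(k1v + 2*k2v + 2*k3v + k4v)/6
        if x < 0.0 <= xn:                        # upward zero crossing
            crossings.append(t + dt * (-x) / (xn - x))
        x, v, t = xn, vn, t + dt
    return crossings[1] - crossings[0]

w, h = 2.0, 1e-3
dT_fd = (period(w + h) - period(w - h)) / (2*h)
dT_exact = -2*math.pi / w**2
print(dT_fd, dT_exact)
```

    For genuinely chaotic or stiff oscillators this finite-difference route degrades quickly, which is what motivates the exact boundary-value formulation in the paper.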

  13. Demonstration sensitivity analysis for RADTRAN III

    SciTech Connect

    Neuhauser, K S; Reardon, P C

    1986-10-01

    A demonstration sensitivity analysis was performed to: quantify the relative importance of 37 variables to the total incident-free dose; assess the elasticity of seven dose subgroups to those same variables; develop density distributions of accident dose for combinations of accident data under wide-ranging variations; show the relationship between accident consequences and probabilities of occurrence; and develop limits for the variability of probability-consequence curves.

  14. Spacecraft design sensitivity for a disaster warning satellite system

    NASA Technical Reports Server (NTRS)

    Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.

    1977-01-01

    A disaster warning satellite system (DWSS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establish the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.

  15. New Methods for Sensitivity Analysis in Chaotic, Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi

    2012-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid mechanics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flowfields, such as those obtained using high-fidelity turbulence simulations. Also, a number of dynamical properties of chaotic fluid flows, most notably the ``Butterfly Effect,'' make the formulation of new sensitivity analysis methods difficult. This talk will outline two chaotic sensitivity analysis methods. The first method, the Fokker-Planck adjoint method, forms a probability density function on the strange attractor associated with the system and uses its adjoint to find gradients. The second method, the Least Squares Sensitivity method, finds some ``shadow trajectory'' in phase space for which perturbations do not grow exponentially. This method is formulated as a quadratic programming problem with linear constraints. The talk concludes with demonstrations of these new methods on example problems, including the Lorenz attractor and flow around an airfoil at a high angle of attack.

  16. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the results of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.

  17. Comparison Sensitivity Design of Output Feedback Systems Using State Observers.

    DTIC Science & Technology

    1978-01-01

    The problem of sensitivity reduction in feedback systems which use state observers for dynamic compensation is considered, leading to a design procedure which...results developed using state observers in the compensator dynamics. All systems discussed are assumed to be linear time-invariant (LTI) systems which are state controllable and state observable. (Author)

  18. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
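    The importance-sampling idea above can be sketched in a deliberately simplified, non-adaptive form: sample from a density centered on the most probable failure point and weight each failure by the likelihood ratio. The one-dimensional limit state and reliability index below are illustrative, not from the paper, and the paper's AIS scheme additionally adapts the sampling domain.

```python
import math, random

# Importance-sampling estimate of a failure probability (simplified,
# non-adaptive sketch of the idea behind AIS).
# Limit state: failure when x > beta with x ~ N(0, 1), so the exact answer
# is P_f = Phi(-beta).  Sampling density: N(beta, 1), centered on the most
# probable failure point.

random.seed(0)
beta = 3.0
N = 20000

def phi(x):                                # standard normal pdf
    return math.exp(-0.5*x*x) / math.sqrt(2*math.pi)

est = 0.0
for _ in range(N):
    x = random.gauss(beta, 1.0)            # draw from the shifted density
    if x > beta:                           # failure indicator
        est += phi(x) / phi(x - beta)      # likelihood-ratio weight
est /= N

exact = 0.5 * math.erfc(beta / math.sqrt(2))   # Phi(-beta) ~ 1.35e-3
print(est, exact)
```

    Crude Monte Carlo would need on the order of 1/P_f samples just to see a single failure here, which is why the sampling density is shifted toward the failure domain.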

  19. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  20. Estimating the upper limit of gas production from Class 2 hydrate accumulations in the permafrost: 2. Alternative well designs and sensitivity analysis

    SciTech Connect

    Moridis, G.; Reagan, M.T.

    2011-01-15

    In the second paper of this series, we evaluate two additional well designs for production from permafrost-associated (PA) hydrate deposits. Both designs are within the capabilities of conventional technology. We determine that large volumes of gas can be produced at high rates (several MMSCFD) for long times using either well design. The production approach involves initial fluid withdrawal from the water zone underneath the hydrate-bearing layer (HBL). The production process follows a cyclical pattern, with each cycle composed of two stages: a long stage (months to years) of increasing gas production and decreasing water production, and a short stage (days to weeks) that involves destruction of the secondary hydrate (mainly through warm water injection) that evolves during the first stage, and is followed by a reduction in the fluid withdrawal rate. A well configuration with completion throughout the HBL leads to high production rates, but also the creation of a secondary hydrate barrier around the well that needs to be destroyed regularly by water injection. However, a configuration that initially involves heating of the outer surface of the wellbore and later continuous injection of warm water at low rates (Case C) appears to deliver optimum performance over the period it takes for the exhaustion of the hydrate deposit. Using Case C as the standard, we determine that gas production from PA hydrate deposits increases with the fluid withdrawal rate, the initial hydrate saturation and temperature, and with the formation permeability.

  1. Software Performs Complex Design Analysis

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Designers use computational fluid dynamics (CFD) to gain greater understanding of the fluid flow phenomena involved in components being designed. They also use finite element analysis (FEA) as a tool to help gain greater understanding of the structural response of components to loads, stresses and strains, and the prediction of failure modes. Automated CFD and FEA engineering design has centered on shape optimization, which has been hindered by two major problems: 1) inadequate shape parameterization algorithms, and 2) inadequate algorithms for CFD and FEA grid modification. Working with software engineers at Stennis Space Center, a NASA commercial partner, Optimal Solutions Software LLC, was able to utilize its revolutionary, one-of-a-kind arbitrary shape deformation (ASD) capability, a major advancement in solving these two aforementioned problems, to optimize the shapes of complex pipe components that transport highly sensitive fluids. The ASD technology solves the problem of inadequate shape parameterization algorithms by allowing the CFD designers to freely create their own shape parameters, therefore eliminating the restriction of only being able to use the computer-aided design (CAD) parameters. The problem of inadequate algorithms for CFD grid modification is solved by the fact that the new software performs a smooth volumetric deformation. This eliminates the extremely costly process of having to remesh the grid for every shape change desired. The program can perform a design change in a markedly reduced amount of time, a process that would traditionally involve the designer returning to the CAD model to reshape and then remesh the shapes, something that has been known to take hours, days, even weeks or months, depending upon the size of the model.

  2. Passive solar design handbook. Volume 3: Passive solar design analysis

    NASA Astrophysics Data System (ADS)

    Jones, R. W.; Bascomb, J. D.; Kosiewicz, C. E.; Lazarus, G. S.; McFarland, R. D.; Wray, W. O.

    1982-07-01

    Simple analytical methods for the design of passive solar heating systems are presented, with an emphasis on average annual heating energy consumption. Key terminology and methods are reviewed. The solar load ratio (SLR) is defined, and its relationship to analysis methods is reviewed. The annual calculation, or Load Collector Ratio (LCR) method, is outlined. Sensitivity data are discussed. Information is presented on balancing conservation and passive solar strategies in building design. Detailed analysis data are presented for direct gain and sunspace systems, and details of the systems are described. Key design parameters are discussed in terms of their impact on the annual heating performance of the building; these are the sensitivity data. The SLR correlations for the respective system types are described. The monthly calculation, or SLR method, based on the SLR correlations, is reviewed. Performance data are given for 9 direct gain systems, 15 water wall systems, and 42 Trombe wall systems.

  3. Rethinking Sensitivity Analysis of Nuclear Simulations with Topology

    SciTech Connect

    Dan Maljovec; Bei Wang; Paul Rosen; Andrea Alfonsi; Giovanni Pastore; Cristian Rabiti; Valerio Pascucci

    2016-01-01

    In nuclear engineering, understanding the safety margins of the nuclear reactor via simulations is arguably of paramount importance in predicting and preventing nuclear accidents. It is therefore crucial to perform sensitivity analysis to understand how changes in the model inputs affect the outputs. Modern nuclear simulation tools rely on numerical representations of the sensitivity information -- inherently lacking in visual encodings -- offering limited effectiveness in communicating and exploring the generated data. In this paper, we design a framework for sensitivity analysis and visualization of multidimensional nuclear simulation data using partition-based, topology-inspired regression models and report on its efficacy. We rely on the established Morse-Smale regression technique, which allows us to partition the domain into monotonic regions where easily interpretable linear models can be used to assess the influence of inputs on the output variability. The underlying computation is augmented with an intuitive and interactive visual design to effectively communicate sensitivity information to the nuclear scientists. Our framework is being deployed into the multi-purpose probabilistic risk assessment and uncertainty quantification framework RAVEN (Reactor Analysis and Virtual Control Environment). We evaluate our framework using a simulation dataset studying nuclear fuel performance.

  4. Sensitivity of optimum solutions to problem parameters. [in aircraft design

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Riley, K. M.; Barthelemy, J.-F.

    1981-01-01

    In an aircraft configuration optimization, the information of interest is the sensitivity of optimal block fuel consumption, wing aspect ratio, and wing area to variations of required range and payload. The objectives of this study are: (1) to show how the equations capable of yielding the sensitivity derivatives (the sensitivity equations) can be obtained for a constrained optimum regardless of the type of optimization algorithm that was used to arrive at the optimum point, (2) to review the solvability of the sensitivity equations, and (3) to report on applications in structural optimization. Numerical examples, which demonstrate the sensitivity analysis, include a tubular column and a three-bar truss for which closed-form solutions are obtained, a ten-bar truss that requires the use of a finite element analysis, and a thin-walled beam characterized by strongly nonlinear constraints for local buckling. It is concluded that practically significant extrapolation accuracy may be obtained for a reasonably broad range of parameter changes, and that accuracy does not depend strongly on the degree of convergence of the optimum solution from which the sensitivity derivatives are obtained.
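    The extrapolation idea above, predicting how a constrained optimum moves without re-optimizing, can be shown on a toy problem where everything is known in closed form. The problem below is invented for illustration and is not the paper's aircraft model.

```python
# Sensitivity of a constrained optimum (illustrative toy problem):
#   minimize f(x) = x**2  subject to  x >= p.
# For p > 0 the constraint is active, so x* = p, f* = p**2, and the
# optimum sensitivity df*/dp equals the Lagrange multiplier lam = 2p;
# no re-optimization is needed once lam is known.

def f_star(p):
    x_star = max(p, 0.0)             # active constraint for p > 0
    return x_star**2

p = 0.8
lam = 2*p                            # multiplier of the active constraint

# Extrapolation with the sensitivity derivative vs. actually re-optimizing:
dp = 0.1
extrapolated = f_star(p) + lam*dp    # first-order prediction
reoptimized = f_star(p + dp)         # exact new optimum
print(extrapolated, reoptimized)     # 0.80 vs 0.81: small second-order gap
```

    The gap between the two values is second order in dp, which mirrors the abstract's finding that extrapolation stays accurate over a reasonably broad range of parameter changes.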

  5. Sensitivity method for integrated structure/active control law design

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1987-01-01

    The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given, along with the need for integrated structures/controls design algorithms. Three alternative approaches to the development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of the optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear-quadratic-Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state-space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing-bending natural frequency.

  6. Sensitivity Analysis of Automated Ice Edge Detection

    NASA Astrophysics Data System (ADS)

    Moen, Mari-Ann N.; Isaksem, Hugo; Debien, Annekatrien

    2016-08-01

    The importance of highly detailed and time-sensitive ice charts has increased with the increasing interest in the Arctic for oil and gas, tourism, and shipping. Manual ice charts are prepared by the national ice services of several Arctic countries. Methods are also being developed to automate this task. Kongsberg Satellite Services uses a method that detects ice edges within 15 minutes after image acquisition. This paper describes a sensitivity analysis of the ice edge, assessing which ice concentration class from the manual ice charts it can be compared to. The ice edge is derived using the Ice Tracking from SAR Images (ITSARI) algorithm. RADARSAT-2 images of February 2011 are used, both for the manual ice charts and the automatic ice edges. The results show that the KSAT ice edge lies within ice concentration classes with very low ice concentration or open water.

  7. Three-dimensional aerodynamic shape optimization using discrete sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Burgreen, Gregory W.

    1995-01-01

    An aerodynamic shape optimization procedure based on discrete sensitivity analysis is extended to treat three-dimensional geometries. The function of sensitivity analysis is to directly couple computational fluid dynamics (CFD) with numerical optimization techniques, which facilitates the construction of efficient direct-design methods. The development of a practical three-dimensional design procedure entails many challenges, such as: (1) the demand for significant efficiency improvements over current design methods; (2) a general and flexible three-dimensional surface representation; and (3) the efficient solution of very large systems of linear algebraic equations. It is demonstrated that each of these challenges is overcome by: (1) employing fully implicit (Newton) methods for the CFD analyses; (2) adopting a Bezier-Bernstein polynomial parameterization of two- and three-dimensional surfaces; and (3) using preconditioned conjugate gradient-like linear system solvers. Whereas each of these extensions independently yields an improvement in computational efficiency, the combined effect of implementing all the extensions simultaneously results in a factor-of-50 decrease in computational time and a factor-of-eight reduction in memory over the most efficient design strategies in current use. The new aerodynamic shape optimization procedure is demonstrated in the design of both two- and three-dimensional inviscid aerodynamic problems, including a two-dimensional supersonic internal/external nozzle, two-dimensional transonic airfoils (resulting in supercritical shapes), three-dimensional transport wings, and three-dimensional supersonic delta wings. Each design application results in realistic and useful optimized shapes.
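    The Bezier-Bernstein surface parameterization named above can be sketched with the standard de Casteljau evaluation scheme; the control points below are an illustrative airfoil-like curve, not geometry from the paper.

```python
# Bezier-Bernstein curve parameterization of the kind adopted for the
# design surfaces (de Casteljau evaluation; control points illustrative).

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve of arbitrary degree at parameter t in [0, 1]."""
    pts = list(ctrl)
    while len(pts) > 1:               # repeated linear interpolation
        pts = [((1-t)*x0 + t*x1, (1-t)*y0 + t*y1)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
    return pts[0]

# A four-point (cubic) curve sketching an airfoil-like upper surface:
ctrl = [(0.0, 0.0), (0.1, 0.08), (0.6, 0.1), (1.0, 0.0)]
mid = de_casteljau(ctrl, 0.5)
print(mid)
```

    Design variables are then the control-point coordinates: moving one point deforms the whole curve smoothly, which is what makes this parameterization attractive for gradient-based shape optimization.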

  8. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared, and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, the number of design variables, the number of eigenvalues of interest, and the number of design points at which approximation is sought.
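    The classical eigenvalue-derivative formula underlying such sensitivity methods can be shown on a small symmetric example (the symmetric special case of the general left/right-eigenvector formula). The 2x2 matrix below is invented for illustration.

```python
import math

# Eigenvalue sensitivity for a symmetric matrix (illustrative 2x2 example):
# for a normalized eigenvector u of A(p),  d(lambda)/dp = u^T (dA/dp) u,
# so no reanalysis is needed once the eigenpair is known.

def eig_max_2x2(a, b, d):
    """Largest eigenvalue and normalized eigenvector of [[a, b], [b, d]]."""
    lam = 0.5*(a + d) + 0.5*math.sqrt((a - d)**2 + 4*b*b)
    u = (b, lam - a)                   # unnormalized eigenvector
    n = math.hypot(*u)
    return lam, (u[0]/n, u[1]/n)

def A(p):
    return (2.0 + p, 1.0, 3.0)         # entries (a, b, d); dA/dp = [[1,0],[0,0]]

p = 0.0
lam, u = eig_max_2x2(*A(p))
dlam_dp = u[0]*u[0]                    # u^T (dA/dp) u with dA/dp = e1 e1^T

h = 1e-6                               # forward-difference check (reanalysis)
lam_h, _ = eig_max_2x2(*A(p + h))
fd = (lam_h - lam) / h
print(dlam_dp, fd)                     # both ~0.27639
```

    For non-hermitian matrices the formula generalizes to v^T (dA/dp) u / (v^T u) with left and right eigenvectors, which is where the normalization issues discussed in the abstract arise.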

  9. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs, and the road network as a whole. A sensitivity analysis method is then used to calculate the change in the traffic utility index due to capacity degradation. Compared to traditional traffic assignment, this method improves calculation efficiency and makes the application of vulnerability analysis to large, real road networks possible. Finally, all the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-support tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  10. High Sensitivity MEMS Strain Sensor: Design and Simulation

    PubMed Central

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2008-01-01

    In this article, we report on the design of a new miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single-crystal silicon. By employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs several levels of signal amplification: geometric, material, and electronic. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to a resistance change, which can be transformed into a bridge imbalance voltage. An analog output demonstrating high sensitivity (0.03 mV/με), high absolute resolution (1 με), and low power consumption (100 μA) over a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50°C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing under harsh environmental conditions. Moreover, the sensor has been designed and verified, and can easily be modified to measure other quantities such as force and torque. In this work, the sensor design is carried out using the Finite Element Method (FEM) together with piezoresistivity theory. The design process and the microfabrication process flow to prototype the design are presented. PMID:27879841

  11. UV beam shaper alignment sensitivity: grayscale versus binary designs

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.

    2008-08-01

    What defines a good flat top beam shaper? Which is more important: an ideal flat top profile, or ease of alignment and stability? These are questions designers and fabricators cannot easily answer, since the answers are a function of experience. Anyone can generate a theoretical beam shaper design and model it until, on paper, the design looks good and meets the general needs of the end customer. However, the method of fabrication can add a twist that is not fully understood by either party until the beam shaper is actually tested in a system for the first time and produced in high volume. This paper provides insight into how grayscale and binary fabrication methods can produce the same style of beam shaper with similar beam shaping performance, yet yield designs with distinctly different sensitivities to alignment and stability. The paper explains the design and fabrication approach for the two units and presents alignment and testing data for a contrast comparison. Further data show that, over twenty sets of each fabricated design, the sensitivity difference is consistent. An understanding of this phenomenon is essential when considering the use of beam shapers on production equipment dedicated to producing micron-precision features in high-value microelectronic and consumer products. We present our findings and explore potential explanations and solutions.

  12. Chemistry in Protoplanetary Disks: A Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Vasyunin, A. I.; Semenov, D.; Henning, Th.; Wakelam, V.; Herbst, Eric; Sobolev, A. M.

    2008-01-01

    We study how uncertainties in the rate coefficients of chemical reactions in the RATE 06 database affect abundances and column densities of key molecules in protoplanetary disks. We randomly varied the gas-phase reaction rates within their uncertainty limits and calculated the time-dependent abundances and column densities using a gas-grain chemical model and a flaring steady state disk model. We find that key species can be separated into two distinct groups according to the sensitivity of their column densities to the rate uncertainties. The first group includes CO, C+, H3+, H2O, NH3, N2H+, and HCNH+. For these species the column densities are not very sensitive to the rate uncertainties, but the abundances in specific regions are. The second group includes CS, CO2, HCO+, H2CO, C2H, CN, HCN, HNC, and other, more complex species, for which high abundances and abundance uncertainties coexist in the same disk region, leading to larger scatters in column densities. However, even for complex and heavy molecules, the dispersion in their column densities is not more than a factor of ~4. We perform a sensitivity analysis of the computed abundances to rate uncertainties and identify those reactions with the most problematic rate coefficients. We conclude that the rate coefficients of about a hundred chemical reactions need to be determined more accurately in order to greatly improve the reliability of modern astrochemical models. This improvement should be an ultimate goal of future laboratory studies and theoretical investigations.

  13. LCA data quality: sensitivity and uncertainty analysis.

    PubMed

    Guo, M; Murphy, R J

    2012-10-01

    Life cycle assessment (LCA) data quality issues were investigated using case studies on products from starch-polyvinyl alcohol based biopolymers and petrochemical alternatives. The time horizon chosen for the characterization models was shown to be an important sensitive parameter for the environmental profiles of all the polymers. In the global warming potential and the toxicity potential categories, the comparison between biopolymers and petrochemical counterparts altered as the time horizon extended from 20 years to infinite time. These case studies demonstrated that the use of a single time horizon provides only one perspective on the LCA outcomes and can introduce an inadvertent bias, especially in toxicity impact categories; dynamic LCA characterization models with varying time horizons are therefore recommended as a measure of robustness for LCAs, especially comparative assessments. This study also presents an approach to integrating statistical methods into LCA models for analyzing uncertainty in industrial and computer-simulated datasets. We calibrated probabilities for the LCA outcomes for biopolymer products arising from uncertainty in the inventory and from data variation characteristics; this enabled assigning confidence to the LCIA outcomes in specific impact categories for the biopolymer vs. petrochemical polymer comparisons undertaken. The uncertainty analysis, combined with the sensitivity analysis carried out in this study, has led to a transparent increase in confidence in the LCA findings. We conclude that LCAs lacking explicit interpretation of the degree of uncertainty and sensitivity are of limited value as robust evidence for decision making or comparative assertions.
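
    The probabilistic comparison the abstract describes can be sketched with a simple Monte Carlo propagation. The lognormal parameters below are invented for illustration only; they are not the paper's calibrated datasets:

```python
import numpy as np

# Illustrative Monte Carlo propagation of inventory uncertainty into a GWP
# comparison. All numbers here are assumed, not taken from the study.
rng = np.random.default_rng(0)
n = 100_000
gwp_bio   = rng.lognormal(mean=np.log(2.0), sigma=0.25, size=n)  # kg CO2-eq/kg
gwp_petro = rng.lognormal(mean=np.log(2.5), sigma=0.15, size=n)  # kg CO2-eq/kg

# "Calibrated probability" that the biopolymer outperforms the petrochemical
# polymer in this impact category, given the assumed uncertainty.
p = np.mean(gwp_bio < gwp_petro)
print(f"P(GWP_bio < GWP_petro) = {p:.2f}")
```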

  14. Sensitivity Analysis of Situational Awareness Measures

    NASA Technical Reports Server (NTRS)

    Shively, R. J.; Davison, H. J.; Burdick, M. D.; Rutkowski, Michael (Technical Monitor)

    2000-01-01

    A great deal of effort has been invested in attempts to define situational awareness (SA), and subsequently to measure this construct. However, relatively less work has focused on the sensitivity of these measures to manipulations that affect the SA of the pilot. This investigation was designed to manipulate SA and examine the sensitivity of commonly used measures of SA. In this experiment, we tested the most commonly accepted measures of SA: SAGAT, objective performance measures, and SART, against different levels of SA manipulation to determine the sensitivity of such measures in the rotorcraft flight environment. SAGAT is a measure in which the simulation is frozen and blanked in the middle of a trial and the pilot is asked specific, situation-relevant questions about the state of the aircraft or the objective of a particular maneuver. In this experiment, after the pilot responded verbally to several questions, the trial continued from the frozen point. SART is a post-trial questionnaire that asked for subjective SA ratings from the pilot at certain points in the previous flight. The objective performance measures included: contacts with hazards (power lines and towers) that impeded the flight path, lateral and vertical anticipation of these hazards, response time to detection of other air traffic, and response time until an aberrant fuel gauge was detected. An SA manipulation of the flight environment was chosen that undisputedly affects a pilot's SA: visibility. Four variations of weather conditions (clear, light rain, haze, and fog) resulted in a different level of visibility for each trial. Pilot SA was measured by either SAGAT or the objective performance measures within each level of visibility. This enabled us to determine sensitivity not only within a measure, but also between the measures. The SART questionnaire and the NASA-TLX, a measure of workload, were distributed after every trial. Using the newly developed rotorcraft part-task laboratory (RPTL) at NASA Ames

  15. Design of highly sensitive multichannel bimetallic photonic crystal fiber biosensor

    NASA Astrophysics Data System (ADS)

    Hameed, Mohamed Farhat O.; Alrayk, Yassmin K. A.; Shaalan, Abdelhamid A.; El Deeb, Walid S.; Obayya, Salah S. A.

    2016-10-01

    A design of a highly sensitive multichannel biosensor based on photonic crystal fiber is proposed and analyzed. The suggested design has a silver layer as the plasmonic material, coated by a gold layer to protect the silver from oxidation. The reported sensor is based on detection using the quasi transverse electric (TE) and quasi transverse magnetic (TM) modes, which offers the possibility of multichannel/multianalyte sensing. The numerical results are obtained using a finite element method with perfectly matched layer boundary conditions. The sensor's geometrical parameters are optimized to achieve high sensitivity for the two polarized modes. High refractive-index sensitivities of about 4750 nm/RIU (refractive index unit) and 4300 nm/RIU, with corresponding resolutions of 2.1×10⁻⁵ RIU and 2.33×10⁻⁵ RIU, are obtained for the quasi TM and quasi TE modes of the proposed sensor, respectively. Further, the reported design can be used as a self-calibration biosensor for an unknown analyte refractive index ranging from 1.33 to 1.35 with high linearity and high accuracy. Moreover, the suggested biosensor has advantages in terms of compactness and better integration of the microfluidics setup, waveguide, and metallic layers into a single structure.
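
    The quoted resolutions follow directly from the spectral sensitivities via R = Δλ_min / S, assuming a spectrometer wavelength resolution of 0.1 nm (a common figure in plasmonic-sensor papers; the value is not stated in the abstract itself):

```python
# Refractive-index resolution from spectral sensitivity: R = dlambda_min / S.
# dlambda_min = 0.1 nm is an assumed spectrometer wavelength resolution.
dlambda_min = 0.1                                # nm (assumed)
for mode, S in [("quasi-TM", 4750.0), ("quasi-TE", 4300.0)]:
    R = dlambda_min / S                          # resolution in RIU
    print(f"{mode}: S = {S:.0f} nm/RIU -> R = {R:.2e} RIU")
```

Running the arithmetic reproduces the abstract's 2.1×10⁻⁵ RIU and 2.33×10⁻⁵ RIU figures.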

  16. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool, or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
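
    The core idea of screening dispersed Monte Carlo inputs for influence on requirement success can be sketched simply. The model and variable names below are invented stand-ins, not the actual Orion dispersions or the Critical Factors Tool's estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
names = ["mass", "thrust", "wind"]
X = rng.standard_normal((n, 3))    # dispersed (standardized) input variables

# Hypothetical requirement: a touchdown miss distance, driven mostly by the
# "thrust" dispersion, must stay below a limit.
miss = 2.0 * X[:, 1] + 0.3 * X[:, 2] + 0.2 * rng.standard_normal(n)
passed = miss < 2.0

# Screen factors by the shift in mean input value between passing and failing
# runs; large shifts flag variables that drive requirement success.
shift = {name: abs(X[passed, j].mean() - X[~passed, j].mean())
         for j, name in enumerate(names)}
for name in sorted(shift, key=shift.get, reverse=True):
    print(f"{name:7s} {shift[name]:.2f}")
```

The screening correctly ranks the thrust dispersion first and the inert "mass" input last.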

  17. Multiplexed analysis of chromosome conformation at vastly improved sensitivity

    PubMed Central

    Davies, James O.J.; Telenius, Jelena M.; McGowan, Simon; Roberts, Nigel A.; Taylor, Stephen; Higgs, Douglas R.; Hughes, Jim R.

    2015-01-01

    Since methods for analysing chromosome conformation in mammalian cells are either low resolution or low throughput, and are technically challenging, they are not widely used outside of specialised laboratories. We have re-designed the Capture-C method to produce a new approach, called next-generation (NG) Capture-C. This produces unprecedented levels of sensitivity and reproducibility and can be used to analyse many genetic loci and samples simultaneously. Importantly, high-resolution data can be produced on as few as 100,000 cells, and SNPs can be used to generate allele-specific tracks. The method is straightforward to perform and should therefore greatly facilitate the task of linking SNPs identified by genome-wide association studies with the genes they influence. The complete and detailed protocol presented here, with new publicly available tools for library design and data analysis, will allow most laboratories to analyse chromatin conformation at levels of sensitivity and throughput that were previously impossible. PMID:26595209

  18. Sensitivity Analysis of Chaotic Flow around Two-Dimensional Airfoil

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Nielsen, Eric; Diskin, Boris

    2015-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods, including the adjoint method, break down when applied to long-time-averaged quantities in chaotic fluid flow fields, such as high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to initial conditions. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic dynamical systems. LSS computes gradients using the ``shadow trajectory'', a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. To efficiently compute many gradients for one objective function, we use an adjoint version of LSS. This talk will briefly outline Least Squares Shadowing and demonstrate it on chaotic flow around a two-dimensional airfoil.
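
    The ``Butterfly Effect'' that defeats conventional adjoints is easy to demonstrate on the Lorenz system, a standard chaotic toy problem (not the airfoil flow of the talk), here integrated with a simple explicit Euler scheme:

```python
import numpy as np

def lorenz_step(u, dt=0.002, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    """One explicit Euler step of the Lorenz equations."""
    x, y, z = u
    return u + dt * np.array([sigma * (y - x),
                              x * (rho - z) - y,
                              x * y - beta * z])

u = np.array([1.0, 1.0, 1.0])
v = u + np.array([1e-8, 0.0, 0.0])    # tiny initial perturbation
for _ in range(10_000):               # integrate ~20 time units
    u, v = lorenz_step(u), lorenz_step(v)
sep = np.linalg.norm(u - v)
print(sep)   # many orders of magnitude larger than the initial 1e-8
```

Because perturbations grow exponentially like this, a finite-difference or adjoint gradient of a long-time average is dominated by noise, which is exactly the problem LSS's shadow trajectory avoids.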

  19. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN)

    DTIC Science & Technology

    2015-04-01

    ARL-TR-7250 ● APR 2015 ● US Army Research Laboratory. Synthesis, Characterization, and Sensitivity Analysis of Urea Nitrate (UN), by William M Sherrill, Weapons and Materials Research Directorate.

  20. A Sensitivity Analysis of SOLPS Plasma Detachment

    NASA Astrophysics Data System (ADS)

    Green, D. L.; Canik, J. M.; Eldon, D.; Meneghini, O.; AToM SciDAC Collaboration

    2016-10-01

    Predicting the scrape off layer plasma conditions required for the ITER plasma to achieve detachment is an important issue when considering divertor heat load management options that are compatible with desired core plasma operational scenarios. Given the complexity of the scrape off layer, such predictions often rely on an integrated model of plasma transport with many free parameters. However, the sensitivity of any given prediction to the choices made by the modeler is often overlooked due to the logistical difficulties in completing such a study. Here we utilize an OMFIT workflow to enable a sensitivity analysis of the midplane density at which detachment occurs within the SOLPS model. The workflow leverages the TaskFarmer technology developed at NERSC to launch many instances of the SOLPS integrated model in parallel to probe the high dimensional parameter space of SOLPS inputs. We examine both predictive and interpretive models where the plasma diffusion coefficients are chosen to match an empirical scaling for divertor heat flux width or experimental profiles respectively. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility, and is supported under Contracts DE-AC02-05CH11231, DE-AC05-00OR22725 and DE-SC0012656.

  1. Stormwater quality models: performance and sensitivity analysis.

    PubMed

    Dotto, C B S; Kleidorfer, M; Deletic, A; Fletcher, T D; McCarthy, D T; Rauch, W

    2010-01-01

    The complex nature of pollutant accumulation and washoff, along with high temporal and spatial variations, poses challenges for the development and establishment of accurate and reliable models of the pollution generation process in urban environments. Therefore, the search for reliable stormwater quality models remains an important area of research. Model calibration and sensitivity analysis of such models are essential in order to evaluate model performance; it is very unlikely that non-calibrated models will lead to reasonable results. This paper reports on the testing of three models which aim to represent pollutant generation from urban catchments. Assessment of the models was undertaken using a simplified Markov chain Monte Carlo (MCMC) method. Results are presented in terms of performance, sensitivity to the parameters, and correlation between these parameters. In general, the tested models were found to represent reality poorly and to result in a high level of uncertainty. The conclusions provide useful information for the improvement of existing models and insights for the development of new model formulations.

  2. Scalable analysis tools for sensitivity analysis and UQ (3160) results.

    SciTech Connect

    Karelitz, David B.; Ice, Lisa G.; Thompson, David C.; Bennett, Janine C.; Fabian, Nathan; Scott, W. Alan; Moreland, Kenneth D.

    2009-09-01

    The 9/30/2009 ASC Level 2 Scalable Analysis Tools for Sensitivity Analysis and UQ (Milestone 3160) contains feature recognition capability required by the user community for certain verification and validation tasks focused around sensitivity analysis and uncertainty quantification (UQ). These feature recognition capabilities include crater detection, characterization, and analysis from CTH simulation data; the ability to call fragment and crater identification code from within a CTH simulation; and the ability to output fragments in a geometric format that includes data values over the fragments. The feature recognition capabilities were tested extensively on sample and actual simulations. In addition, a number of stretch criteria were met including the ability to visualize CTH tracer particles and the ability to visualize output from within an S3D simulation.

  3. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology.
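
    The randomized-experiment baseline in the abstract's example (250 pair differences, Normal with expectation 1/2 and variance 1) is easy to reproduce by simulation; the Γ-sensitivity-analysis power calculation itself is more involved and is not attempted here. A normal-approximation signed-rank test suffices for the sketch:

```python
import numpy as np

def signed_rank_z(d):
    """Wilcoxon signed-rank statistic under the normal approximation
    (continuous data assumed, so ties have probability zero)."""
    n = len(d)
    ranks = np.argsort(np.argsort(np.abs(d))) + 1   # ranks of |d|, 1..n
    T = ranks[d > 0].sum()                          # sum of positive ranks
    mean = n * (n + 1) / 4
    var = n * (n + 1) * (2 * n + 1) / 24
    return (T - mean) / np.sqrt(var)

rng = np.random.default_rng(0)
n_pairs, reps = 250, 200
power = np.mean([signed_rank_z(rng.normal(0.5, 1.0, n_pairs)) > 1.645
                 for _ in range(reps)])
print(power)   # essentially 1.0: this effect is easy to detect when randomized
```

The contrast with the 0.08 power quoted in the abstract shows how much a sensitivity analysis allowing for unmeasured bias raises the bar over the randomization test simulated here.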

  4. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.

  5. Design and performance of a positron-sensitive surgical probe

    NASA Astrophysics Data System (ADS)

    Liu, Fang

    We report the design and performance of a portable positron-sensitive surgical imaging probe. The probe is designed to be sensitive to positrons and capable of rejecting background gammas, including 511 keV. The probe consists of a multi-anode PMT and an 8 x 8 array of thin 2 mm x 2 mm plastic scintillators coupled 1:1 to GSO crystals. The probe uses three selection criteria to identify positrons. An energy threshold on the plastic signals reduces false positron signals in the plastic due to background gammas; a second energy threshold on the PMT sum signal greatly reduces background gammas in the GSO. Finally, a timing window accepts only 511 keV gammas from the GSO that arrive within 15 ns of the plastic signals, reducing accidental coincidences to a negligible level. The first application being investigated is sentinel lymph node (SLN) surgery, to identify in real time the location of SLNs in the axilla with high 18F-FDG uptake, which may indicate metastasis. Our simulations and measurements show that the probe's pixel separation ability, in terms of peak-to-valley ratio, is ~3.5. The performance measurements also show that the 64-pixel probe has a sensitivity of 4.7 kcps/μCi using optimal signal selection criteria. For example, it is able to detect in 10 seconds a ~4 mm lesion with a true-to-background ratio of ~3 at a tumor uptake ratio of ~8:1. The signal selection criteria can be fine-tuned either for higher sensitivity or for higher image contrast.

  6. Sensitivity analysis of a pharmaceutical tablet production process from the control engineering perspective.

    PubMed

    Rehrl, Jakob; Gruber, Arlin; Khinast, Johannes G; Horn, Martin

    2017-01-30

    This paper presents a sensitivity analysis of a pharmaceutical direct compaction process. Sensitivity analysis is an important tool for gaining valuable process insights and designing a process control concept. Examining its results in a systematic manner makes it possible to assign actuating signals to controlled variables. This paper presents mathematical models for individual unit operations, on which the sensitivity analysis is based. Two sensitivity analysis methods are outlined: (i) based on the so-called Sobol indices and (ii) based on the steady-state gains and the frequency response of the proposed plant model.
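
    The first of the two approaches, Sobol indices, can be sketched with a plain Monte Carlo pick-and-freeze estimator on a toy model with a known answer. This is a generic Saltelli-style estimator, not the paper's tablet-process model:

```python
import numpy as np

def sobol_first_order(f, d, n=100_000, seed=0):
    """Monte Carlo pick-and-freeze estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))       # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                    # freeze input i at B's values
        S[i] = np.mean(fB * (f(ABi) - fA)) / V
    return S

# Toy model with a known answer: Y = X1 + 2*X2 with Xi ~ U(0,1) independent,
# so Var(Y) = 1/12 + 4/12 and the exact indices are S1 = 0.2, S2 = 0.8.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
S = sobol_first_order(f, d=2)
print(S)   # close to [0.2, 0.8]
```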

  7. Parametric sensitivity analysis for temperature control in outdoor photobioreactors.

    PubMed

    Pereira, Darlan A; Rodrigues, Vinicius O; Gómez, Sonia V; Sales, Emerson A; Jorquera, Orlando

    2013-09-01

    In this study, a critical analysis of the input parameters of a model describing the broth temperature in flat plate photobioreactors throughout the day is carried out in order to assess the effect of these parameters on the model. Using the design-of-experiments approach, variation of selected parameters was introduced and the influence of each parameter on the broth temperature was evaluated by a parametric sensitivity analysis. The results show that the major influences on the broth temperature are those of the reactor wall and the shading factor, both related to direct and reflected solar irradiation. Another parameter that plays an important role in the temperature is the distance between the plates. This study provides information to improve the design and establish the most appropriate operating conditions for the cultivation of microalgae in outdoor systems.

  8. Context sensitivity and ambiguity in component-based systems design

    SciTech Connect

    Bespalko, S.J.; Sindt, A.

    1997-10-01

    Designers of component-based, real-time systems need to guarantee the correctness of software and its output. The complexity of a system, and thus its propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states that allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or inherently ambiguous.

  9. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at the planning stage is proposed. Three sampling strategies, which are part of the SA procedure, were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS), and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified by ranking the total-effect sensitivity indices. The results of the present
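
    The benefit of Latin hypercube sampling over pseudo-random sampling can be sketched on a one-dimensional wind-power proxy. The Weibull parameters below are assumed for illustration, not taken from the Masdar City case study:

```python
import numpy as np

rng = np.random.default_rng(0)
shape, scale = 2.0, 8.0   # assumed Weibull wind-speed parameters (illustrative)

def mean_cubed_speed(u):
    """Estimate E[v^3] (proportional to available wind power) from uniform
    samples u via the Weibull inverse CDF."""
    v = scale * (-np.log1p(-u)) ** (1.0 / shape)
    return np.mean(v ** 3)

n, reps = 100, 200
# Pseudo-random sampling (PRS): plain uniforms.
prs_est = [mean_cubed_speed(rng.random(n)) for _ in range(reps)]
# Latin hypercube sampling (LHS): one point per stratum of [0, 1].
lhs_est = [mean_cubed_speed((rng.permutation(n) + rng.random(n)) / n)
           for _ in range(reps)]
print(np.std(prs_est), np.std(lhs_est))   # LHS spread is far smaller
```

Stratifying the uniforms removes most of the estimator variance for this smooth, monotone integrand, which is why sampling strategy matters in the SA procedure.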

  10. Sensitivity analysis for electromagnetic topology optimization problems

    NASA Astrophysics Data System (ADS)

    Zhou, Shiwei; Li, Wei; Li, Qing

    2010-06-01

    This paper presents a level-set-based method to design the metal shape in an electromagnetic field such that the induced current flow on the metal surface is minimized or maximized. We represent the interface between free space and the conducting material (solid phase) by the zero-level contour of a higher-dimensional level set function. Only the electrical component of the incident wave is considered in the current study, and the distribution of the induced current flow on the metallic surface is governed by the electric field integral equation (EFIE). By minimizing or maximizing a cost function of the current flow, its distribution can be controlled to some extent. This method paves a new avenue to many electromagnetic applications, such as antennas and metamaterials, whose performance or properties are dominated by their surface current flow. The sensitivity of the objective function to the shape change, an integral formulation including the solutions to both the electric field integral equation and its adjoint equation, is obtained using a variational method and the shape derivative. The advantages of the level set model lie in its flexibility in handling complex topological changes and in facilitating the mathematical expression of the electromagnetic configuration. Moreover, the level set model makes the optimization an elegant evolution process during which the volume of the metallic component remains constant while the free space/metal interface gradually approaches its optimal position. The effectiveness of this method is demonstrated through a self-adjoint 2D topology optimization example.

  11. Rheological Models of Blood: Sensitivity Analysis and Benchmark Simulations

    NASA Astrophysics Data System (ADS)

    Szeliga, Danuta; Macioł, Piotr; Banas, Krzysztof; Kopernik, Magdalena; Pietrzyk, Maciej

    2010-06-01

    Modeling of blood flow with respect to the rheological parameters of the blood is the objective of this paper. A Casson-type equation was selected as the blood model, and the blood flow was analyzed based on the backward-facing step benchmark. The simulations were performed using the ADINA-CFD finite element code. Three output parameters were selected, which characterize the accuracy of the flow simulation. Sensitivity analysis of the results with the Morris design method was performed to identify the rheological parameters and the model outputs that control the blood flow to a significant extent. This paper is part of work on the identification of parameters controlling the clotting process.
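    The Morris design (elementary effects) screening used above can be sketched as follows; the toy model and sampling parameters are illustrative assumptions, not the ADINA-CFD blood-flow model:

```python
import numpy as np

def morris_elementary_effects(f, d, r=50, delta=0.25, rng=None):
    """Morris screening: for r random base points, perturb one input at
    a time by delta and record the elementary effect of each input.
    Returns mu* (mean |EE|, overall influence) and sigma (std of EE,
    nonlinearity/interaction) per input."""
    rng = np.random.default_rng(rng)
    ee = np.empty((r, d))
    for k in range(r):
        x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(d):
            xp = x.copy()
            xp[i] += delta
            ee[k, i] = (f(xp) - fx) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

# Toy model: strong linear x0, weaker nonlinear x1, inert x2.
mu_star, sigma = morris_elementary_effects(
    lambda x: 2.0 * x[0] + x[1] ** 2, d=3, rng=1)
# mu_star ranks influence; a large sigma flags nonlinear inputs
```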

  12. An analytical approach to grid sensitivity analysis. [of NACA wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite-difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
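    The comparison of analytical and finite-difference grid sensitivity can be illustrated on a simple algebraic stretching law; the tanh clustering function below is an illustrative stand-in, not the paper's C-type grid generator:

```python
import numpy as np

# 1-D stretched point distribution x_j(beta) with stretching
# coefficient beta, a common algebraic grid-clustering form.
def grid(beta, n=11):
    eta = np.linspace(0.0, 1.0, n)
    return np.tanh(beta * eta) / np.tanh(beta)

# Analytic grid sensitivity dx_j/dbeta, by differentiating the formula.
def dgrid_dbeta(beta, n=11):
    eta = np.linspace(0.0, 1.0, n)
    t = np.tanh(beta)
    return (eta / np.cosh(beta * eta) ** 2) / t \
        - np.tanh(beta * eta) / (t * t * np.cosh(beta) ** 2)

beta, h = 2.0, 1e-6
fd = (grid(beta + h) - grid(beta - h)) / (2 * h)   # central difference
an = dgrid_dbeta(beta)
# the two sensitivities agree to roughly O(h^2) truncation error
```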

  13. An analytical approach to grid sensitivity analysis for NACA four-digit wing sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1992-01-01

    Sensitivity analysis in computational fluid dynamics with emphasis on grids and surface parameterization is described. An interactive algebraic grid-generation technique is employed to generate C-type grids around NACA four-digit wing sections. An analytical procedure is developed for calculating grid sensitivity with respect to design parameters of a wing section. A comparison of the sensitivity with that obtained using a finite difference approach is made. Grid sensitivity with respect to grid parameters, such as grid-stretching coefficients, is also investigated. Using the resultant grid sensitivity, aerodynamic sensitivity is obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.

  14. Designing and Building to ``Impossible'' Tolerances for Vibration Sensitive Equipment

    NASA Astrophysics Data System (ADS)

    Hertlein, Bernard H.

    2003-03-01

    As the precision and production capabilities of modern machines and factories increase, our expectations of them rise commensurately. Facility designers and engineers find themselves increasingly involved with measurement needs and design tolerances that were almost unthinkable a few years ago. An area of expertise that demonstrates this very clearly is the field of vibration measurement and control. Magnetic resonance imaging, semiconductor manufacturing, micro-machining, surgical microscopes: these are just a few examples of equipment or techniques that need an extremely stable vibration environment. The challenge to architects, engineers and contractors is to provide that level of stability without undue cost or sacrifice of the aesthetics and practicality of a structure. In addition, many facilities have run out of expansion room, so the design is often hampered by the need to reuse all or part of an existing structure, or to site vibration-sensitive equipment close to an existing vibration source. High-resolution measurements and nondestructive testing techniques have proven to be invaluable additions to the engineer's toolbox in meeting these challenges. The author summarizes developments in this field over the last fifteen years or so, and lists some common errors of design and construction that can cost a great deal of money to retrofit if missed, but can easily be avoided with a little foresight, an appropriate testing program and a carefully thought-out checklist.

  15. Numerical Sensitivity Analysis of a Composite Impact Absorber

    NASA Astrophysics Data System (ADS)

    Caputo, F.; Lamanna, G.; Scarano, D.; Soprano, A.

    2008-08-01

    This work deals with a numerical investigation of the energy absorbing capability of structural composite components. There are several difficulties associated with the numerical simulation of a composite impact absorber, such as high geometrical non-linearities, boundary contact conditions, failure criteria and material behaviour; all these aspects make the calibration of numerical models, and the evaluation of their sensitivity to the governing geometrical, physical and numerical parameters, one of the main objectives of any numerical investigation. The latter aspect is particularly important to designers, who need the application of the model to real cases to be robust from both a physical and a numerical point of view. First, on the basis of experimental data from the literature, a preliminary calibration of the numerical model of a composite impact absorber was carried out; a sensitivity analysis with respect to variations of the main geometrical and material parameters was then performed, using explicit finite element algorithms implemented in the LS-DYNA code.

  16. What Makes a Good Home-Based Nocturnal Seizure Detector? A Value Sensitive Design

    PubMed Central

    van Andel, Judith; Leijten, Frans; van Delden, Hans; van Thiel, Ghislaine

    2015-01-01

    A device for the in-home detection of nocturnal seizures is currently being developed in the Netherlands, to improve care for patients with severe epilepsy. It is recognized that the design of medical technology is not value neutral: perspectives of users and developers are influential in design, and design choices influence these perspectives. However, during development processes, these influences are generally ignored and value-related choices remain implicit and poorly argued for. In the development process of the seizure detector we aimed to take the values of all stakeholders into consideration. Therefore, we performed a parallel ethics study, using “value sensitive design.” Analysis of stakeholder communication (in meetings and e-mail messages) identified five important values, namely, health, trust, autonomy, accessibility, and reliability. Stakeholders were then asked to give feedback on the choice of these values and how they should be interpreted. In the next step, the values were related to design choices relevant for the device, and then the consequences (risks and benefits) of these choices were investigated. Currently the process of design and testing of the device is still ongoing. The device will be validated in a trial in which the identified consequences of design choices are measured as secondary endpoints. Value sensitive design methodology is feasible for the development of new medical technology and can help designers substantiate the choices in their design. PMID:25875320

  17. Design analysis of the astrometrical telescope facility

    NASA Technical Reports Server (NTRS)

    Huang, Chunsheng; Lawrence, George; Levy, Eugene; Mcmillan, Robert

    1989-01-01

    This paper presents a detailed analysis of a space-based telescope requiring an accuracy of 50 picoradians. A relationship between the geometric centroid of a diffraction image and wave aberrations is derived by a combined diffraction-optics and geometric-optics approach. Based on the sensitivity of the centroid, one-mirror and two-mirror aplanatic telescopes are investigated. A comparison among three telescope designs (parabola, Schwarzschild and Ritchey-Chretien) is carried out quantitatively in terms of their sensitivities to systematic and random errors. The study shows that the Ritchey-Chretien design is the most favorable.

  18. Wear-Out Sensitivity Analysis Project Abstract

    NASA Technical Reports Server (NTRS)

    Harris, Adam

    2015-01-01

    During the course of the Summer 2015 internship session, I worked in the Reliability and Maintainability group of the ISS Safety and Mission Assurance department. My project was a statistical analysis of how sensitive ORUs (Orbital Replacement Units) are to a reliability parameter called the wear-out characteristic. The goal was to determine a worst-case scenario for how many spares would be needed if multiple systems started exhibiting wear-out characteristics simultaneously, and to determine which parts would be most likely to do so. My duties were to take historical data on the operational times and failure times of these ORUs and use them to build predictive models of failure using probability distribution functions, mainly the Weibull distribution. Then, I ran Monte Carlo simulations to see how an entire population of these components would perform. My final duty was to vary the wear-out characteristic from its intrinsic value up to extremely high wear-out values and determine how much the probability of sufficiency of the population would shift. This was done for around 30 different ORU populations on board the ISS.
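    The described Weibull/Monte Carlo workflow can be sketched as follows; all parameters below are illustrative assumptions, not ISS ORU data:

```python
import numpy as np

# Sample Weibull failure times for a population of installed units,
# then estimate the probability that a given number of spares suffices
# over a mission horizon. A Weibull shape > 1 models wear-out.
def prob_of_sufficiency(shape, scale_hours, n_units, n_spares,
                        horizon_hours, n_trials=20000, seed=0):
    rng = np.random.default_rng(seed)
    # failure times for every installed unit, per Monte Carlo trial
    t = scale_hours * rng.weibull(shape, size=(n_trials, n_units))
    failures = (t < horizon_hours).sum(axis=1)   # failures in horizon
    return np.mean(failures <= n_spares)         # spares were enough

nominal = prob_of_sufficiency(shape=1.0, scale_hours=5e4,
                              n_units=10, n_spares=3,
                              horizon_hours=1e4)
wearout = prob_of_sufficiency(shape=3.0, scale_hours=5e4,
                              n_units=10, n_spares=3,
                              horizon_hours=1e4)
# sweeping the shape parameter shows how sufficiency probability shifts
```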

  19. Sensitivity analysis of hydrodynamic stability operators

    NASA Technical Reports Server (NTRS)

    Schmid, Peter J.; Henningson, Dan S.; Khorrami, Mehdi R.; Malik, Mujeeb R.

    1992-01-01

    The eigenvalue sensitivity of hydrodynamic stability operators is investigated. Classical matrix perturbation techniques as well as the concept of epsilon-pseudoeigenvalues are applied to show that parts of the spectrum are highly sensitive to small perturbations. Applications are drawn from incompressible plane Couette flow, trailing-line vortex flow and compressible Blasius boundary-layer flow. Parametric studies indicate a monotonically increasing effect of the Reynolds number on the sensitivity. The phenomenon of eigenvalue sensitivity is due to the non-normality of the operators and their discrete matrix analogs, and may be associated with large transient growth of the corresponding initial value problem.
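    The core phenomenon, eigenvalue sensitivity amplified by non-normality, can be shown on a toy 2x2 matrix (an illustrative stand-in, not a discretized stability operator):

```python
import numpy as np

# The same small perturbation E moves the eigenvalues of a highly
# non-normal matrix by many orders of magnitude more than those of a
# normal matrix of comparable spectrum.
def eig_shift(A, E):
    return np.max(np.abs(np.sort(np.linalg.eigvals(A + E))
                         - np.sort(np.linalg.eigvals(A))))

rng = np.random.default_rng(0)
E = 1e-8 * rng.standard_normal((2, 2))       # small perturbation

normal = np.diag([1.0, 2.0])                 # normal: A A^T = A^T A
nonnormal = np.array([[1.0, 1e6],
                      [0.0, 2.0]])           # strong off-diagonal coupling

shift_normal = eig_shift(normal, E)
shift_nonnormal = eig_shift(nonnormal, E)
# shift_nonnormal exceeds shift_normal by several orders of magnitude
```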

  20. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, James C., III; Taylor, Arthur C., III; Barnwell, Richard W.; Newman, Perry A.; Hou, Gene J.-W.

    1999-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics (CFD). The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid CFD algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced unstructured-grid CFD in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
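    The automatic-differentiation idea referred to above can be sketched with forward-mode dual numbers; this is a minimal illustration of the principle, not the AD tooling actually applied to CFD codes:

```python
import math

# A dual number carries a value and a derivative; arithmetic propagates
# exact derivatives alongside values (forward-mode AD).
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)  # product rule
    __rmul__ = __mul__

def sin(d):
    return Dual(math.sin(d.val), math.cos(d.val) * d.dot)  # chain rule

# d/dx [x*sin(x) + 3x] at x = 2, by seeding dot = 1 on the input
x = Dual(2.0, 1.0)
y = x * sin(x) + 3 * x
# y.dot equals the exact derivative sin(2) + 2*cos(2) + 3
```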

  1. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  2. Interactive Image Analysis System Design,

    DTIC Science & Technology

    1982-12-01

    This report describes a design for an interactive image analysis system (IIAS), which implements terrain data extraction techniques. The design employs commercially available, state of the art minicomputers and image display devices with proven software to achieve a cost effective, reliable image analysis system. Additionally, the system is fully capable of supporting many generic types of image analysis and data processing, and is modularly...

  3. Optimizing human activity patterns using global sensitivity analysis

    SciTech Connect

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; Del Valle, Sara Y.; Hyman, James M.

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.
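    The sample entropy (SampEn) statistic central to the tuning procedure can be computed directly; the series below are illustrative, not DASim schedules:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m
    templates within tolerance r*std (Chebyshev distance) and A counts
    the same for length m+1. Lower SampEn means a more regular series."""
    x = np.asarray(x, dtype=float)
    tol = r * np.std(x)

    def count(mm):
        # use N - m templates so A and B cover the same template count
        templ = np.lib.stride_tricks.sliding_window_view(x, mm)[: len(x) - m]
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= tol))
        return c

    return -np.log(count(m + 1) / count(m))

t = np.arange(400)
periodic = np.sin(2 * np.pi * t / 25)            # regular: low SampEn
noise = np.random.default_rng(0).random(400)     # irregular: high SampEn
se_periodic, se_noise = sample_entropy(periodic), sample_entropy(noise)
```

    Tuning an activity's regularity, as described above, amounts to adjusting schedule parameters until SampEn reaches a target value.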

  4. Optimizing human activity patterns using global sensitivity analysis

    DOE PAGES

    Fairchild, Geoffrey; Hickmann, Kyle S.; Mniszewski, Susan M.; ...

    2013-12-10

    Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. Here we use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Finally, though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step in the direction of automated tuning and optimization of high-dimensional computer simulations.

  5. SSTO vs TSTO design considerations—an assessment of the overall performance, design considerations, technologies, costs, and sensitivities of SSTO and TSTO designs using modern technologies

    NASA Astrophysics Data System (ADS)

    Penn, Jay P.

    1996-03-01

    It is generally believed by those skilled in launch system design that Single-Stage-To-Orbit (SSTO) designs are more technically challenging, more performance sensitive, and yield larger lift-off weights than do Two-Stage-To-Orbit (TSTO) designs offering similar payload delivery capability. Without additional insight into the other considerations that drive the development, recurring costs, operability, and reliability of a launch fleet, an analyst may easily conclude that the higher-performing, less sensitive TSTO designs yield a better solution for achieving low-cost payload delivery. This limited insight could justify an argument to eliminate the X-33 SSTO technology/demonstration development effort and proceed directly to less risky TSTO designs. Insight into real-world design considerations of launch vehicles makes the choice of SSTO vs TSTO much less clear. The presentation addresses a more comprehensive evaluation of the general class of SSTO and TSTO concepts, including pure SSTOs, augmented SSTOs, Siamese Twin, and pure TSTO designs. The assessment considers vehicle performance and scaling relationships that characterize real vehicle designs, and also addresses technology requirements, operations and supportability, cost implications, and sensitivities. Results of the assessment indicate that the trade space between various SSTO and TSTO design approaches is complex and not yet fully understood. The results of the X-33 technology demonstrators, as well as additional parametric analysis, are required to better define the relative performance and costs of the various design approaches. The results also indicate that, with modern technologies and today's better understanding of vehicle design considerations, the perception that SSTOs are dramatically heavier and more sensitive than TSTO designs is more myth than reality.

  6. Aircraft Design Analysis

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The helicopter pictured is the twin-turbine S-76, produced by the Sikorsky Aircraft division of United Technologies, Stratford, Connecticut. It is the first transport helicopter ever designed purely as a commercial vehicle rather than an adaptation of a military design. Being built in large numbers for customers in 16 countries, the S-76 is intended for offshore oil rig support, executive transportation and general utility service. The craft carries 12 passengers plus a crew of two and has a range of more than 450 miles, yet it weighs less than 10,000 pounds. Significant weight reduction was achieved by use of composite materials, which are generally lighter but stronger than conventional aircraft materials. NASA composite technology played a part in development of the S-76. Under contract with NASA's Langley Research Center, Sikorsky Aircraft designed and flight-tested a helicopter airframe of advanced composite materials.

  7. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... part, the head of each agency shall designate, or cause to be designated, any position within the... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  8. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 5 Administrative Personnel 2 2014-01-01 2014-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  9. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 5 Administrative Personnel 2 2012-01-01 2012-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  10. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 5 Administrative Personnel 2 2013-01-01 2013-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  11. 5 CFR 732.201 - Sensitivity level designations and investigative requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 5 Administrative Personnel 2 2011-01-01 2011-01-01 false Sensitivity level designations and... Requirements § 732.201 Sensitivity level designations and investigative requirements. (a) For purposes of this... material adverse effect on the national security as a sensitive position at one of three sensitivity...

  12. Design of a pulse oximeter for price sensitive emerging markets.

    PubMed

    Jones, Z; Woods, E; Nielson, D; Mahadevan, S V

    2010-01-01

    While the global market for medical devices is located primarily in developed countries, price sensitive emerging markets comprise an attractive, underserved segment in which products need a unique set of value propositions to be competitive. A pulse oximeter was designed expressly for emerging markets, and a novel feature set was implemented to reduce the cost of ownership and improve the usability of the device. Innovations included the ability of the device to generate its own electricity, a built in sensor which cuts down on operating costs, and a graphical, symbolic user interface. These features yield an average reduction of over 75% in the device cost of ownership versus comparable pulse oximeters already on the market.

  13. Design of a charge sensitive preamplifier on high resistivity silicon

    SciTech Connect

    Radeka, V.; Rehak, P.; Rescia, S.; Gatti, E.; Longoni, A.; Sampietro, M.; Holl, P.; Strueder, L.; Kemmer, J.

    1987-01-01

    A low noise, fast charge sensitive preamplifier was designed on high resistivity, detector grade silicon. It is built at the surface of a fully depleted region of n-type silicon. This allows the preamplifier to be placed very close to a detector anode. The preamplifier uses the classical input cascode configuration with a capacitor and a high value resistor in the feedback loop. The output stage of the preamplifier can drive a load up to 20 pF. The power dissipation of the preamplifier is 13 mW. The amplifying elements are "Single Sided Gate JFETs" developed especially for this application. Preamplifiers connected to a low capacitance anode of a drift type detector should achieve a rise time of 20 ns and have an equivalent noise charge (ENC), after a suitable shaping, of less than 50 electrons. This performance translates to a position resolution better than 3 μm for silicon drift detectors. 6 refs., 9 figs.

  14. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and suggested areas for collaboration. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities covers basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  15. Sensitivity analysis of textural parameters for vertebroplasty

    NASA Astrophysics Data System (ADS)

    Tack, Gye Rae; Lee, Seung Y.; Shin, Kyu-Chul; Lee, Sung J.

    2002-05-01

    Vertebroplasty is one of the newest surgical approaches for the treatment of the osteoporotic spine. Recent studies have shown that it is a minimally invasive, safe, promising procedure for patients with osteoporotic fractures, providing structural reinforcement of the osteoporotic vertebrae as well as immediate pain relief. However, treatment failures due to excessive bone cement injection have been reported as one of its complications. Control of the bone cement volume is believed to be one of the most critical factors in preventing complications. We believed that an optimal bone cement volume could be assessed based on CT data of a patient. Gray-level run-length analysis was used to extract textural information of the trabecular bone. At the initial stage of the project, four indices were used to represent the textural information: mean width of the intertrabecular space, mean width of the trabeculae, area of the intertrabecular space, and area of the trabeculae. Finally, the area of the intertrabecular space was selected as the parameter to estimate an optimal bone cement volume, and it was found that there was a strong linear relationship between these two variables (correlation coefficient = 0.9433, standard deviation = 0.0246). In this study, we examined several factors affecting the overall procedure. The threshold level, the radius of the rolling ball and the size of the region of interest were selected for the sensitivity analysis. As the threshold level varied over 9, 10, and 11, the correlation coefficient varied from 0.9123 to 0.9534. As the radius of the rolling ball varied over 45, 50, and 55, the correlation coefficient varied from 0.9265 to 0.9730. As the size of the region of interest varied over 58 x 58, 64 x 64, and 70 x 70, the correlation coefficient varied from 0.9685 to 0.9468. Finally, we found a strong correlation between the actual bone cement volume (Y) and the area (X) of the intertrabecular space calculated from the binary image, with the linear equation Y = 0.001722 X - 2

  16. Derivative based sensitivity analysis of gamma index

    PubMed Central

    Sarkar, Biplab; Pradhan, Anirudh; Ganesh, T.

    2015-01-01

    Originally developed as a tool for patient-specific quality assurance in advanced treatment delivery methods, to compare measured and calculated dose distributions, the gamma index (γ) concept was later extended to compare any two dose distributions. It takes into account both the dose difference (DD) and distance-to-agreement (DTA) measurements in the comparison. Its strength lies in its capability to give a quantitative value for the analysis, unlike other methods. For every point on the reference curve, if there is at least one point in the evaluated curve that satisfies the pass criteria (e.g., δDD = 1%, δDTA = 1 mm), the point is included in the quantitative score as a “pass.” Gamma analysis does not account for the gradient of the evaluated curve: it looks only at the minimum gamma value, and if it is <1, the point passes, no matter what the gradient of the evaluated curve is. In this work, an attempt has been made to present a derivative-based method for the identification of dose gradients. A mathematically derived reference profile (RP) representing the penumbral region of a 6 MV 10 cm × 10 cm field was generated from an error function. A general test profile (GTP) was created from this RP by introducing a 1 mm distance error and a 1% dose error at each point. This was considered the first of the two evaluated curves. By its nature, this curve is smooth and satisfies the pass criteria at all points. The second evaluated profile was generated as a sawtooth test profile (STTP), which again satisfies the pass criteria at every point on the RP. However, being a sawtooth curve, it is not smooth and is obviously poorer when compared with the smooth profile. Considering the smooth GTP as an acceptable profile once it passed the gamma pass criteria (1% DD and 1 mm DTA) against the RP, the first and second order derivatives of the DDs (δD’, δD”) between these two curves were derived and used as the boundary
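    A minimal 1-D gamma computation consistent with the description above; the profile and shift values are illustrative, and the paper's error-function penumbra is approximated here by a tanh sigmoid:

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dd=0.01, dta=1.0):
    """1-D gamma index: for each reference point, the minimum over all
    evaluated points of sqrt((distance/dta)^2 + (dose diff/(dd*Dmax))^2).
    A reference point passes when gamma <= 1 (e.g. 1% / 1 mm criteria)."""
    d_max = np.max(d_ref)
    g = np.empty(len(x_ref))
    for i in range(len(x_ref)):
        dist = (x_eval - x_ref[i]) / dta
        diff = (d_eval - d_ref[i]) / (dd * d_max)
        g[i] = np.sqrt(dist ** 2 + diff ** 2).min()
    return g

# Sigmoidal stand-in for a penumbral reference profile (RP); the
# evaluated profile is the RP shifted by 0.5 mm, inside the 1 mm DTA.
x = np.linspace(-5.0, 5.0, 501)               # position in mm
ref = 0.5 * (1.0 - np.tanh(x / 2.0))          # falling penumbra
ev = 0.5 * (1.0 - np.tanh((x - 0.5) / 2.0))   # 0.5 mm spatial shift
g = gamma_1d(x, ref, x, ev)
# every reference point passes: max(g) is about 0.5, well under 1
```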

  17. Topographic Avalanche Risk: DEM Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Nazarkulova, Ainura; Strobl, Josef

    2015-04-01

    GIS-based models are frequently used to assess the risk and trigger probabilities of (snow) avalanche releases, based on parameters and geomorphometric derivatives like elevation, exposure, slope, proximity to ridges and local relief energy. Numerous models, and model-based specific applications and project results, have been published based on a variety of approaches and parametrizations as well as calibrations. Digital Elevation Models (DEMs) come with many different resolution (scale) and quality (accuracy) properties, some of these resulting from sensor characteristics and DEM generation algorithms, others from different DEM processing workflows and analysis strategies. This paper explores the impact of using different types and characteristics of DEMs for avalanche risk modeling approaches, and aims at establishing a framework for assessing the uncertainty of results. The research question starts from simply demonstrating the differences in release risk areas and intensities obtained by applying identical models to DEMs with different properties, and then extends this into a broader sensitivity analysis. For the quantification and calibration of uncertainty parameters, different metrics are established, based on simple value ranges and probabilities as well as fuzzy expressions and fractal metrics. As a specific approach, the work on DEM resolution-dependent 'slope spectra' is considered and linked with the specific application of geomorphometry-based risk assessment. For the purposes of this study, focusing on DEM characteristics, factors like land cover, meteorological recordings and snowpack structure and transformation are kept constant, i.e. not considered explicitly. Key aims of the research presented here are the development of a multi-resolution and multi-scale framework supporting the consistent combination of large-area basic risk assessment with local mitigation-oriented studies, and the transferability of the latter into areas without availability of

  18. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo Simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to quantitatively evaluate the relationship between entropy and reliability, and a comparative analysis between simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
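    The Shannon-type entropy underlying such uniformity measures can be illustrated with a minimal sketch; the flow values below are hypothetical, and the diameter weighting of the paper's extended definition is omitted.

```python
import math

def flow_entropy(flows):
    """Shannon-type entropy of pipe flows meeting at a node.

    Higher entropy means a more uniform flow distribution, which the
    abstract links to more reliable network layouts.
    """
    total = sum(flows)
    fractions = [q / total for q in flows if q > 0]
    return -sum(p * math.log(p) for p in fractions)

# Uniform flows maximize entropy; one dominant pipe reduces it.
uniform = flow_entropy([10.0, 10.0, 10.0])  # equals ln(3)
skewed = flow_entropy([28.0, 1.0, 1.0])
assert uniform > skewed
```

The diameter-sensitive variant proposed in the paper would additionally weight each pipe's contribution by a diameter-dependent term before forming the fractions.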

  19. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally, but not exclusively, aimed at finite-element-modeled structures. Topics include: selection of finite-difference step sizes; special considerations for finite-difference sensitivity of iteratively solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
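    The step-size selection problem mentioned first in that list can be sketched directly: a forward-difference derivative trades truncation error (large steps) against floating-point round-off (tiny steps), so an intermediate step minimizes total error. The function and step sizes here are illustrative only.

```python
import math

def forward_diff(f, x, h):
    """Forward-difference approximation of df/dx with step size h."""
    return (f(x + h) - f(x)) / h

exact = math.cos(1.0)  # exact derivative of sin at x = 1.0

# Truncation error dominates for large h, round-off error for tiny h;
# the intermediate step size gives the smallest total error.
errors = {h: abs(forward_diff(math.sin, 1.0, h) - exact)
          for h in (1e-1, 1e-5, 1e-13)}
assert errors[1e-5] < errors[1e-1]
assert errors[1e-5] < errors[1e-13]
```

This is why finite-difference sensitivity codes either sweep the step size or fall back on semi-analytical derivatives when the response is expensive or noisy.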

  20. GPT-Free Sensitivity Analysis for Reactor Depletion and Analysis

    NASA Astrophysics Data System (ADS)

    Kennedy, Christopher Brandon

    model (ROM) error. When building a subspace using the GPT-Free approach, the reduction error can be selected based on an error tolerance for generic flux response-integrals. The GPT-Free approach then solves the fundamental adjoint equation with randomly generated sets of input parameters. Using properties from linear algebra, the fundamental k-eigenvalue sensitivities, spanned by the various randomly generated models, can be related to response sensitivity profiles by a change of basis. These sensitivity profiles are the first-order derivatives of responses with respect to input parameters. The quality of the basis is evaluated using the kappa-metric, developed from Wilks' order statistics, on user-defined response functionals that involve the flux state-space. Because the kappa-metric is formed from Wilks' order statistics, a probability-confidence interval can be established around the reduction error based on user-defined responses such as fuel-flux, max-flux error, or other generic inner products requiring the flux. In general, the GPT-Free approach will produce a ROM with a quantifiable, user-specified reduction error. This dissertation demonstrates the GPT-Free approach for steady-state and depletion reactor calculations modeled by SCALE6, an analysis tool developed by Oak Ridge National Laboratory. Future work includes the development of GPT-Free for new Monte Carlo methods where the fundamental adjoint is available. Additionally, the approach in this dissertation examines only the first derivatives of responses, the response sensitivity profile; extension and/or generalization of the GPT-Free approach to higher-order response sensitivity profiles is a natural area for future research.

  1. Launch vehicle systems design analysis

    NASA Technical Reports Server (NTRS)

    Ryan, Robert; Verderaime, V.

    1993-01-01

    Current launch vehicle design emphasis is on low life-cycle cost. This paper applies total quality management (TQM) principles to a conventional systems design analysis process to provide low-cost, high-reliability designs. Suggested TQM techniques include Steward's systems information flow matrix method, the quality leverage principle, quality through robustness and function deployment, Pareto's principle, Pugh's selection and enhancement criteria, and other design process procedures. TQM quality performance at least cost can be realized through competent concurrent engineering teams and the brilliance of their technical leadership.

  2. Control sensitivity indices for stability analysis of HVdc systems

    SciTech Connect

    Nayak, O.B.; Gole, A.M.; Chapman, D.G.; Davies, J.B.

    1995-10-01

    This paper presents a new concept called the "Control Sensitivity Index," or CSI, for the stability analysis of HVdc converters connected to weak ac systems. The CSI for a particular control mode can be defined as the ratio of incremental changes in the two system variables that are most relevant to that control mode. The index provides valuable information on the stability of the system and, unlike other approaches, aids in the design of the controller. It also plays an important role in defining nonlinear gains for the controller. This paper offers a generalized formulation of the CSI and demonstrates its application through an analysis of the CSI for three modes of HVdc control. The conclusions drawn from the analysis are confirmed by a detailed electromagnetic transients simulation of the ac/dc system. The paper concludes that the CSI can be used to improve the controller design and that, for an inverter in a weak ac system, the conventional voltage control mode is more stable than the conventional γ control mode.

  3. Design of a High Sensitivity GNSS receiver for Lunar missions

    NASA Astrophysics Data System (ADS)

    Musumeci, Luciano; Dovis, Fabio; Silva, João S.; da Silva, Pedro F.; Lopes, Hugo D.

    2016-06-01

    This paper presents the design of a satellite navigation receiver architecture tailored for future Lunar exploration missions, demonstrating the feasibility of using Global Navigation Satellite System (GNSS) signals integrated with an orbital filter for this purpose. It analyzes the performance of a navigation solution based on pseudorange and pseudorange rate measurements, generated through the processing of very weak signals in the Global Positioning System (GPS) L1/L5 and Galileo E1/E5 frequency bands. In critical scenarios (e.g. during manoeuvres), acceleration and attitude measurements from additional sensors are integrated with the GNSS measurements to meet the positioning requirement. A review of environment characteristics (dynamics, geometry, and signal power) for the different phases of a reference Lunar mission is provided, focusing on the stringent requirements of the Descent, Approach, and Hazard Detection and Avoidance phase. The design of High Sensitivity acquisition and tracking schemes is supported by an extensive simulation test campaign using a software receiver implementation, and navigation results are validated by means of an end-to-end software simulator. Acquisition and tracking of GPS and Galileo signals in the L1/E1 and L5/E5a bands was successfully demonstrated for carrier-to-noise density ratios as low as 5-8 dB-Hz. The proposed navigation architecture provides acceptable performance during the considered critical phases, keeping position and velocity errors below 61.4 m and 3.2 m/s, respectively, for 99.7% of the mission time.

  4. A highly sensitive and multiplexed method for focused transcript analysis.

    PubMed

    Kataja, Kari; Satokari, Reetta M; Arvas, Mikko; Takkinen, Kristiina; Söderlund, Hans

    2006-10-01

    We describe a novel, multiplexed method for focused transcript analysis of tens to hundreds of genes. In this method, called TRAC (transcript analysis with the aid of affinity capture), mRNA targets, a set of amplifiable detection probes of distinct sizes, and a biotinylated oligo(dT) capture probe are hybridized in solution. The formed sandwich hybrids are collected on magnetic streptavidin-coated microparticles and washed. The hybridized probes are eluted, optionally amplified by PCR using a universal primer pair, and detected with laser-induced fluorescence and capillary electrophoresis. The probes were designed using a computer program developed for the purpose. The TRAC method was adapted to 96-well format by utilizing an automated magnetic particle processor. Here we demonstrate a simultaneous analysis of 18 Saccharomyces cerevisiae transcripts from two experimental conditions and show a comparison with a qPCR system. The sensitivity of the method is significantly increased by PCR amplification of the hybridized and eluted probes. Our data demonstrate bias-free use of at least 16 cycles of PCR amplification to increase probe signal, allowing transcript analysis from 2.5 ng of total mRNA sample. The method is fast and simple and avoids cDNA conversion. These qualities make it a potential new means for routine analysis and a complementary method to microarrays and high-density chips.

  5. Buckling Design and Imperfection Sensitivity of Sandwich Composite Launch-Vehicle Shell Structures

    NASA Technical Reports Server (NTRS)

    Schultz, Marc R.; Sleight, David W.; Myers, David E.; Waters, W. Allen, Jr.; Chunchu, Prasad B.; Lovejoy, Andrew W.; Hilburger, Mark W.

    2016-01-01

    Composite materials are increasingly being considered and used for launch-vehicle structures. For shell structures, such as interstages, skirts, and shrouds, honeycomb-core sandwich composites are often selected for their structural efficiency. Therefore, it is becoming increasingly important to understand the structural response, including buckling, of sandwich composite shell structures. Additionally, small geometric imperfections can significantly influence the buckling response of shell structures, including considerably reducing the buckling load. Thus, both the response of the theoretically perfect structure and the buckling imperfection sensitivity must be considered during the design of such structures. To address the latter, empirically derived design factors, called buckling knockdown factors (KDFs), were developed by NASA in the 1960s to account for this buckling imperfection sensitivity during design. However, most of the test-article designs used in the development of these recommendations are not relevant to modern launch-vehicle constructions and material systems, and in particular, no composite test articles were considered. Herein, a two-part study on composite sandwich shells is presented to (1) examine the relationship between the buckling knockdown factor and the areal mass of optimized designs, and (2) interrogate the imperfection sensitivity of those optimized designs. Four structures from recent NASA launch-vehicle development activities are considered. First, designs optimized for both strength and stability were generated for each of these structures using design optimization software and a range of buckling knockdown factors; it was found that the designed areal masses varied by between 6.1% and 19.6% over knockdown factors ranging from 0.6 to 0.9. Next, the buckling imperfection sensitivity of the optimized designs is explored using nonlinear finite-element analysis and the as-measured shape of a large-scale composite cylindrical

  6. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques are used, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.), the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  7. Context-sensitive design and human interaction principles for usable, useful, and adoptable radars

    NASA Astrophysics Data System (ADS)

    McNamara, Laura A.; Klein, Laura M.

    2016-05-01

    The evolution of exquisitely sensitive Synthetic Aperture Radar (SAR) systems is positioning this technology for use in time-critical environments, such as search-and-rescue missions and improvised explosive device (IED) detection. SAR systems should be playing a keystone role in the United States' Intelligence, Surveillance, and Reconnaissance (ISR) activities. Yet many in the SAR community see missed opportunities for incorporating SAR into existing remote sensing data collection and analysis challenges. Drawing on several years of field research with SAR engineering and operational teams, this paper examines the human and organizational factors that militate against the adoption and use of SAR for tactical ISR and operational support. We suggest that SAR has a design problem, and that context-sensitive human and organizational design frameworks are required if the community is to realize SAR's tactical potential.

  8. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the predictions of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed, based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.

  9. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (the Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
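    The bootstrap idea behind the convergence criteria above can be sketched in a few lines: resample the raw per-sample sensitivity measures, recompute the indices, and record how often the parameter ranking is unchanged. The Morris-style "mean absolute elementary effect" index and all numbers below are illustrative, not the paper's data.

```python
import random

def mean_abs_effects(samples):
    """Morris-style index per parameter: mean |elementary effect|."""
    return [sum(abs(e) for e in col) / len(col) for col in samples]

def ranking(indices):
    """Parameter indices ordered from most to least sensitive."""
    return sorted(range(len(indices)), key=lambda i: -indices[i])

random.seed(0)
# Hypothetical elementary effects for 3 parameters; parameter 0 dominates.
effects = [[random.gauss(mu, 0.1) for _ in range(200)]
           for mu in (2.0, 1.0, 0.1)]

# Bootstrap: resample the effects and check ranking stability.
base = ranking(mean_abs_effects(effects))
n_boot = 200
stable = 0
for _ in range(n_boot):
    resampled = [[random.choice(col) for _ in col] for col in effects]
    if ranking(mean_abs_effects(resampled)) == base:
        stable += 1
convergence = stable / n_boot  # fraction of bootstraps with the same ranking
```

When the indices are well separated, as here, ranking convergence is reached long before the index values themselves stabilize, which is the paper's central observation.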

  10. Thermodynamics-based Metabolite Sensitivity Analysis in metabolic networks.

    PubMed

    Kiparissides, A; Hatzimanikatis, V

    2017-01-01

    The increasing availability of large metabolomics datasets enhances the need for computational methodologies that can organize the data in a way that can lead to the inference of meaningful relationships. Knowledge of the metabolic state of a cell and how it responds to various stimuli and extracellular conditions can offer significant insight into the regulatory functions and how to manipulate them. Constraint-based methods, such as Flux Balance Analysis (FBA) and Thermodynamics-based Flux Analysis (TFA), are commonly used to estimate the flow of metabolites through genome-wide metabolic networks, making it possible to identify the ranges of flux values that are consistent with the studied physiological and thermodynamic conditions. However, unless key intracellular fluxes and metabolite concentrations are known, constraint-based models lead to underdetermined problem formulations. This lack of information propagates as uncertainty in the estimation of fluxes and basic reaction properties such as the determination of reaction directionalities. Therefore, knowing which metabolites, if measured, would contribute the most to reducing this uncertainty can significantly improve our ability to define the internal state of the cell. In the present work we combine constraint-based modeling, Design of Experiments (DoE) and Global Sensitivity Analysis (GSA) into the Thermodynamics-based Metabolite Sensitivity Analysis (TMSA) method. TMSA ranks the metabolites comprising a metabolic network based on their ability to constrain the gamut of possible solutions to a limited, thermodynamically consistent set of internal states. TMSA is modular and can be applied to a single reaction, a metabolic pathway or an entire metabolic network. This is, to our knowledge, the first attempt to use metabolic modeling to provide a significance ranking of metabolites to guide experimental measurements.

  11. Sensitivity Analysis of Offshore Wind Cost of Energy (Poster)

    SciTech Connect

    Dykes, K.; Ning, A.; Graf, P.; Scott, G.; Damiami, R.; Hand, M.; Meadows, R.; Musial, W.; Moriarty, P.; Veers, P.

    2012-10-01

    No matter the source, offshore wind energy plant cost estimates are significantly higher than for land-based projects. For instance, a National Renewable Energy Laboratory (NREL) review on the 2010 cost of wind energy found baseline cost estimates for onshore wind energy systems to be 71 dollars per megawatt-hour ($/MWh), versus 225 $/MWh for offshore systems. There are many ways that innovation can be used to reduce the high costs of offshore wind energy. However, the use of such innovation impacts the cost of energy because of the highly coupled nature of the system. For example, the deployment of multimegawatt turbines can reduce the number of turbines, thereby reducing the operation and maintenance (O&M) costs associated with vessel acquisition and use. On the other hand, larger turbines may require more specialized vessels and infrastructure to perform the same operations, which could result in higher costs. To better understand the full impact of a design decision on offshore wind energy system performance and cost, a system analysis approach is needed. In 2011-2012, NREL began development of a wind energy systems engineering software tool to support offshore wind energy system analysis. The tool combines engineering and cost models to represent an entire offshore wind energy plant and to perform system cost sensitivity analysis and optimization. Initial results were collected by applying the tool to conduct a sensitivity analysis on a baseline offshore wind energy system using 5-MW and 6-MW NREL reference turbines. Results included information on rotor diameter, hub height, power rating, and maximum allowable tip speeds.
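    The kind of system-level cost sensitivity described above can be illustrated with a one-at-a-time perturbation of a simplified levelized-cost-of-energy model. The formula and all input values here are hypothetical placeholders, not NREL's actual model or data.

```python
def lcoe(capex, opex, aep, fcr=0.10):
    """Simplified levelized cost of energy ($/MWh): annualized capital
    cost (fixed charge rate * CapEx) plus annual OpEx, per MWh produced."""
    return (fcr * capex + opex) / aep

# Hypothetical baseline for an offshore plant: $, $/yr, MWh/yr.
base = dict(capex=4.5e9, opex=1.5e8, aep=1.8e6)
baseline = lcoe(**base)

# One-at-a-time sensitivity: +10% on each input, % change in LCOE.
for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})
    delta = 100 * (lcoe(**perturbed) - baseline) / baseline
    print(f"+10% {name}: {delta:+.1f}% LCOE")
```

A coupled systems tool like the one in the abstract goes further: the inputs are not independent (e.g. a larger rotor changes CapEx, O&M, and AEP simultaneously), so perturbations propagate through engineering models rather than a closed-form cost equation.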

  12. A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element.

    PubMed

    Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili

    2016-09-21

    This paper describes a small-range six-axis accelerometer (the measurement range of the sensor is ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of such sensors is governed by their sensitivity characteristics; to improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is found to be 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that the accelerometer with the DCB elastic element exhibits good sensitivity and precision.

  13. A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element

    PubMed Central

    Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili

    2016-01-01

    This paper describes a small-range six-axis accelerometer (the measurement range of the sensor is ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of such sensors is governed by their sensitivity characteristics; to improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is found to be 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments show that the accelerometer with the DCB elastic element exhibits good sensitivity and precision. PMID:27657089

  14. Estimate design sensitivity to process variation for the 14nm node

    NASA Astrophysics Data System (ADS)

    Landié, Guillaume; Farys, Vincent

    2016-03-01

    Looking for the highest density and best performance, the 14nm technological node saw the development of aggressive designs, with design rules as close as possible to the limit of the process. The edge placement error (EPE) budget is now tighter, and Reticle Enhancement Techniques (RET) must take into account the highest number of parameters to achieve the best printability and guarantee yield requirements. Overlay is a parameter that must be taken into account early in design library development to avoid design structures presenting a high risk of performance failure. This paper presents a method that takes into account overlay variation and Resist Image simulation across the process window to estimate design sensitivity to overlay. Areas in the design are classified with specific metrics, from the highest to the lowest overlay sensitivity. This classification can be used to evaluate the robustness of a full-chip product to process variability, or to work with designers during design library development. The ultimate goal is to evaluate critical structures in different contexts and report the most critical ones. In this paper, we study layers interacting together, such as Contact/Poly area overlap or Contact/Active distance. ASML-Brion tooling made it possible to simulate the different resist contours and to apply the overlay value to one of the layers. Lithography Manufacturability Check (LMC) detectors are then set to extract the desired values for analysis. Two different approaches have been investigated. The first is a systematic overlay, where the same overlay is applied everywhere on the design. The second uses a real overlay map that has been measured and applied to the LMC tools. The data are then post-processed and compared to the design target to create a classification and show the error distribution.

  15. Design, theoretical analysis, and experimental verification of a CMOS current integrator with 1.2 × 2.05 µm2 microelectrode array for high-sensitivity bacterial counting

    NASA Astrophysics Data System (ADS)

    Gamo, Kohei; Nakazato, Kazuo; Niitsu, Kiichi

    2017-01-01

    In this paper, we present the design and experimental verification of an amperometric CMOS-based sensor with a current integrator and a 1.2 × 2.05 µm2 bacterial-sized microelectrode array for high-sensitivity bacterial counting. For high-sensitivity bacterial counting with a sufficient signal-to-noise ratio (SNR), noise must be reduced because bacterial-sized microelectrodes can handle only a low current, of the order of 100 pA. Thus, we implement a current integrator, which is highly effective for noise reduction. Furthermore, for the first time, we use the current integrator in conjunction with the bacterial-sized microelectrode array. On the basis of the results of the proposed current integration, we successfully reduce noise and achieve a high SNR of 30.4 dB. To verify the effectiveness of the proposed CMOS-based sensor, we perform two-dimensional (2D) counting of microbeads, which are almost the same size as bacteria. The measurement results demonstrate successful high-sensitivity 2D counting of microbeads with a high SNR of 27 dB.

  16. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are unable to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that were not screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis network with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters, and that for the remaining potentially sensitive parameters it accurately estimates the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in "sloppy" systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters over the
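    The two-step screen-then-estimate structure described above can be sketched generically: a cheap sensitivity bound eliminates parameters below a tolerance, and the expensive finite-difference estimator runs only on the survivors. The bound values, estimator, and parameter names below are all hypothetical.

```python
def two_step_sensitivity(sens_bound, fd_estimate, params, tol):
    """Two-step strategy: screen out parameters whose cheap sensitivity
    bound falls below tol, then run the costlier finite-difference
    estimator only on the remaining, potentially sensitive parameters."""
    survivors = [p for p in params if sens_bound(p) >= tol]
    return {p: fd_estimate(p) for p in survivors}

# Hypothetical bounds for a 6-parameter toy network; the stand-in
# "finite-difference estimate" just scales the bound for illustration.
bounds = {"k1": 3.0, "k2": 0.01, "k3": 1.2, "k4": 0.002, "k5": 0.8, "k6": 0.05}
result = two_step_sensitivity(bounds.get, lambda p: bounds[p] * 1.05,
                              list(bounds), tol=0.1)
assert set(result) == {"k1", "k3", "k5"}
```

The speedup comes from the screening step: the expensive estimator runs on three parameters instead of six here, and the gap widens in "sloppy" systems where most parameters are insensitive.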

  17. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on implementing the probabilistic nature of material properties (Kevlar® 49) in the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering designs through Reliability Based Design Optimization (RBDO). First, the emphasis is on experimental data analysis, focusing on probabilistic distribution models which characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure pattern and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structures are cost effective, they become highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with the deterministic optimization, which is RBDO. In the RBDO problem formulation, in addition to structural performance constraints, reliability constraints are also considered. This part of the research starts with an introduction to reliability analysis, such as first-order reliability analysis and second-order reliability analysis, followed by simulation technique that

  18. Design of a Piezoelectric Accelerometer with High Sensitivity and Low Transverse Effect.

    PubMed

    Tian, Bian; Liu, Hanyue; Yang, Ning; Zhao, Yulong; Jiang, Zhuangde

    2016-09-26

    In order to meet the requirements of cable fault detection, a new structure of piezoelectric accelerometer was designed and analyzed in detail. The structure was composed of a seismic mass, two sensitive beams, and two added beams. Then, simulations including the maximum stress, natural frequency, and output voltage were carried out. Moreover, comparisons with traditional structures of piezoelectric accelerometer were made. To verify which vibration mode is the dominant one and to examine the space between the mass and the glass, mode analysis and deflection analysis were carried out. Fabricated on an n-type single-crystal silicon wafer, the sensor chips were wire-bonded to printed circuit boards (PCBs) and simply packaged for experiments. Finally, a vibration test was conducted. The results show that the proposed piezoelectric accelerometer has high sensitivity, low resonance frequency, and low transverse effect.

  19. Design of a Piezoelectric Accelerometer with High Sensitivity and Low Transverse Effect

    PubMed Central

    Tian, Bian; Liu, Hanyue; Yang, Ning; Zhao, Yulong; Jiang, Zhuangde

    2016-01-01

    In order to meet the requirements of cable fault detection, a new structure of piezoelectric accelerometer was designed and analyzed in detail. The structure was composed of a seismic mass, two sensitive beams, and two added beams. Then, simulations including the maximum stress, natural frequency, and output voltage were carried out. Moreover, comparisons with traditional structures of piezoelectric accelerometer were made. To verify which vibration mode is the dominant one and to examine the space between the mass and the glass, mode analysis and deflection analysis were carried out. Fabricated on an n-type single-crystal silicon wafer, the sensor chips were wire-bonded to printed circuit boards (PCBs) and simply packaged for experiments. Finally, a vibration test was conducted. The results show that the proposed piezoelectric accelerometer has high sensitivity, low resonance frequency, and low transverse effect. PMID:27681734

  20. BEHAVIOR OF SENSITIVITIES IN THE ONE-DIMENSIONAL ADVECTION-DISPERSION EQUATION: IMPLICATIONS FOR PARAMETER ESTIMATION AND SAMPLING DESIGN.

    USGS Publications Warehouse

    Knopman, Debra S.; Voss, Clifford I.

    1987-01-01

    The spatial and temporal variability of sensitivities has a significant impact on parameter estimation and sampling design for studies of solute transport in porous media. Physical insight into the behavior of sensitivities is offered through an analysis of analytically derived sensitivities for the one-dimensional form of the advection-dispersion equation. When parameters are estimated in regression models of one-dimensional transport, the spatial and temporal variability in sensitivities influences variance and covariance of parameter estimates. Several principles account for the observed influence of sensitivities on parameter uncertainty. (1) Information about a physical parameter may be most accurately gained at certain points in space and time. (2) As the distance of observation points from the upstream boundary increases, maximum sensitivity to velocity during passage of the solute front increases. (3) The frequency of sampling must be 'in phase' with the S shape of the dispersion sensitivity curve to yield the most information on dispersion. (4) The sensitivity to the dispersion coefficient is usually at least an order of magnitude less than the sensitivity to velocity. (5) The assumed probability distribution of random error in observations of solute concentration determines the form of the sensitivities. (6) If variance in random error in observations is large, trends in sensitivities of observation points may be obscured by noise. (7) Designs that minimize the variance of one parameter may not necessarily minimize the variance of other parameters.
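Principles (3) and (4) can be sketched numerically with the Ogata-Banks solution of the one-dimensional advection-dispersion equation, using scaled central-difference sensitivities; the parameter values below are illustrative, not the study's:

```python
import numpy as np
from scipy.special import erfc

def conc(x, t, v, D):
    """Ogata-Banks solution of the 1-D advection-dispersion equation
    for a continuous unit-concentration inlet boundary."""
    eta = 2.0 * np.sqrt(D * t)
    return 0.5 * (erfc((x - v * t) / eta)
                  + np.exp(v * x / D) * erfc((x + v * t) / eta))

# Scaled sensitivities p * dC/dp by central differences at one observation
# point, over the passage of the solute front (illustrative values).
x, v, D = 10.0, 1.0, 0.5
t = np.linspace(0.5, 25.0, 200)
h = 1e-6   # relative perturbation
sens_v = (conc(x, t, v * (1 + h), D) - conc(x, t, v * (1 - h), D)) / (2 * h)
sens_D = (conc(x, t, v, D * (1 + h)) - conc(x, t, v, D * (1 - h))) / (2 * h)
```

The dispersion sensitivity changes sign as the front passes (its S shape), and its peak magnitude is far smaller than the peak velocity sensitivity, consistent with principles (3) and (4).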

  1. Global sensitivity analysis of analytical vibroacoustic transmission models

    NASA Astrophysics Data System (ADS)

    Christen, Jean-Loup; Ichchou, Mohamed; Troclet, Bernard; Bareille, Olivier; Ouisse, Morvan

    2016-04-01

    Noise reduction issues arise in many engineering problems. One typical vibroacoustic problem is transmission loss (TL) optimisation and control. The TL depends mainly on the mechanical parameters of the considered media. At early stages of the design, such parameters are not well known, so decision-making tools are needed to tackle this issue. In this paper, we consider the use of the Fourier Amplitude Sensitivity Test (FAST) for the analysis of the impact of mechanical parameters on features of interest. FAST is implemented for several structural configurations and is used to estimate the relative influence of the model parameters while assuming some uncertainty or variability in their values. The method offers a way to synthesize the results of a multiparametric analysis with large variability. Results are presented for the transmission loss of isotropic, orthotropic and sandwich plates excited by a diffuse field on one side. Qualitative trends were found to agree with physical expectations. Design rules can then be set up for vibroacoustic indicators. The case of a sandwich plate is taken as an example of the use of this method inside an optimisation process and for uncertainty quantification.
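The FAST method used here estimates first-order variance contributions from the Fourier spectrum of the model output along a space-filling search curve. A minimal sketch on a toy additive function (the frequencies and test function are illustrative, not the paper's vibroacoustic models):

```python
import numpy as np

def fast_first_order(model, freqs, M=4):
    """First-order FAST sensitivity indices.

    model : callable taking an (N, n_params) array of inputs in [0, 1].
    freqs : one integer driver frequency per parameter, chosen so that
            harmonics up to order M do not overlap.
    """
    N = 2 * M * max(freqs) + 1                     # minimum (Nyquist) sample size
    s = np.pi * (2.0 * np.arange(1, N + 1) - N - 1) / N
    # Search-curve transform: uniform marginals on [0, 1]
    x = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(x)
    # Fourier coefficients of the output along the search curve
    j = np.arange(1, (N - 1) // 2 + 1)
    A = (np.cos(np.outer(j, s)) @ y) / N
    B = (np.sin(np.outer(j, s)) @ y) / N
    total = 2.0 * np.sum(A**2 + B**2)              # total variance (Parseval)
    harmonics = [w * np.arange(1, M + 1) for w in freqs]
    return np.array([2.0 * np.sum(A[h - 1]**2 + B[h - 1]**2) / total
                     for h in harmonics])

# Additive test: y = x1 + 2*x2 with uniform inputs has exact indices 0.2 and 0.8
S = fast_first_order(lambda x: x[:, 0] + 2.0 * x[:, 1], [11, 21])
```

For the additive test function the estimated indices land very close to the analytic values 0.2 and 0.8; interference-free frequency sets become harder to choose as the parameter count grows.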

  2. Support systems design and analysis

    NASA Technical Reports Server (NTRS)

    Ferguson, R. M.

    1985-01-01

    The integration of Kennedy Space Center (KSC) ground support systems with the new launch processing system and new launch vehicle provided KSC with a unique challenge in system design and analysis for the Space Transportation System. Approximately 70 support systems are controlled and monitored by the launch processing system. Typical systems are main propulsion oxygen and hydrogen loading systems, environmental control life support system, hydraulics, etc. An End-to-End concept of documentation and analysis was chosen and applied to these systems. Unique problems were resolved in the areas of software analysis, safing under emergency conditions, sampling rates, and control loop analysis. New methods of performing End-to-End reliability analyses were implemented. The systems design approach selected and the resolution of major problem areas are discussed.

  3. Programmable ion-sensitive transistor interfaces. III. Design considerations, signal generation, and sensitivity enhancement

    NASA Astrophysics Data System (ADS)

    Jayant, Krishna; Auluck, Kshitij; Rodriguez, Sergio; Cao, Yingqiu; Kan, Edwin C.

    2014-05-01

    We report on factors that affect DNA hybridization detection using ion-sensitive field-effect transistors (ISFETs). Signal generation at the interface between the transistor and immobilized biomolecules is widely ascribed to unscreened molecular charges causing a shift in surface potential and hence the transistor output current. Traditionally, the interaction between DNA and the dielectric or metal sensing interface is modeled by treating the molecular layer as a sheet charge and the ionic profile with a Poisson-Boltzmann distribution. The surface potential under this scenario is described by the Graham equation. This approximation, however, often fails to explain large hybridization signals on the order of tens of mV. More realistic descriptions of the DNA-transistor interface which include factors such as ion permeation, exclusion, and packing constraints have been proposed with little or no corroboration against experimental findings. In this study, we examine such physical models by their assumptions, range of validity, and limitations. We compare simulations against experiments performed on electrolyte-oxide-semiconductor capacitors and foundry-ready floating-gate ISFETs. We find that with weakly charged interfaces (i.e., low intrinsic interface charge), pertinent to the surfaces used in this study, the best agreement between theory and experiment exists when ions are completely excluded from the DNA layer. The influence of various factors such as bulk pH, background salinity, chemical reactivity of surface groups, target molecule concentration, and surface coatings on signal generation is studied. Furthermore, in order to overcome Debye-screening-limited detection, we suggest two signal enhancement strategies. We first describe frequency domain biosensing, highlighting the ability to sort short DNA strands based on molecular length, and then describe DNA biosensing in multielectrolytes comprising trace amounts of higher-valency salt in a background of
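The Graham (Grahame) equation cited above gives the Poisson-Boltzmann relation between surface charge density and surface potential for a symmetric electrolyte; a small numerical sketch with illustrative values (not the device parameters of the paper):

```python
import numpy as np

# Grahame equation for a symmetric z:z electrolyte in Poisson-Boltzmann theory:
# sigma = sqrt(8*c*NA*eps*eps0*kB*T) * sinh(z*e*psi0 / (2*kB*T)).
# All values below are illustrative (1:1 salt at 100 mM, room temperature).
e, kB, NA, eps0 = 1.602e-19, 1.381e-23, 6.022e23, 8.854e-12
T, eps_r, z = 298.0, 78.5, 1
c = 100.0                             # bulk salt concentration, mol/m^3 (100 mM)
psi = np.linspace(0.0, 0.1, 101)      # surface potential, V
sigma = (np.sqrt(8.0 * c * NA * eps_r * eps0 * kB * T)
         * np.sinh(z * e * psi / (2.0 * kB * T)))   # charge density, C/m^2
```

At small potentials the relation is linear (Debye-Hückel regime); the sinh term captures the nonlinear screening at larger potentials that the sheet-charge models in the paper build upon.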

  4. Validation of FSP Reactor Design with Sensitivity Studies of Beryllium-Reflected Critical Assemblies

    SciTech Connect

    John D. Bess; Margaret A. Marshall

    2013-02-01

    The baseline design for space nuclear power is a fission surface power (FSP) system: a sodium-potassium (NaK) cooled, fast-spectrum reactor with highly-enriched-uranium (HEU)-O2 fuel, stainless steel (SS) cladding, and beryllium reflectors with B4C control drums. Previous studies were performed to evaluate modeling capabilities and quantify uncertainties and biases associated with analysis methods and nuclear data. Comparison of Zero Power Plutonium Reactor (ZPPR)-20 benchmark experiments with the FSP design indicated that further reduction of the total design model uncertainty requires the reduction of uncertainties pertaining to beryllium and uranium cross-section data. Further comparison with three beryllium-reflected HEU-metal benchmark experiments performed at the Oak Ridge Critical Experiments Facility (ORCEF) confirmed the requirement that experimental validation data have cross-section sensitivities similar to those found in the FSP design. A series of critical experiments was performed at ORCEF in the 1960s to support the Medium Power Reactor Experiment (MPRE) space reactor design. The small, compact critical assembly (SCCA) experiments were graphite- or beryllium-reflected assemblies of SS-clad, HEU-O2 fuel on a vertical lift machine. All five configurations were evaluated as benchmarks. Two of the five configurations were beryllium reflected, and further evaluated using the sensitivity and uncertainty analysis capabilities of SCALE 6.1. Validation of the example FSP design model was successful in reducing the primary uncertainty constituent, the Be(n,n) reaction, from 0.28 %dk/k to 0.0004 %dk/k. Further assessment of additional reactor physics measurements performed on the SCCA experiments may serve to further validate FSP design and operation.

  5. Structural Analysis and Design Software

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Collier Research and Development Corporation received a one-of-a-kind computer code for designing exotic hypersonic aircraft, called ST-SIZE, in the first-ever Langley Research Center software copyright license agreement. Collier transformed the NASA computer code into a commercial software package called HyperSizer, which integrates with other private-sector finite element modeling and finite element analysis structural analysis programs. ST-SIZE was chiefly conceived as a means to improve and speed the structural design of a future aerospace plane for the Langley Hypersonic Vehicles Office. Incorporating the NASA computer code into HyperSizer has enabled the company to also apply the software to applications other than aerospace, including improved design and construction for offices, marine structures, cargo containers, commercial and military aircraft, rail cars, and a host of everyday consumer products.

  6. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…

  7. A design method for minimizing sensitivity to plant parameter variations

    NASA Technical Reports Server (NTRS)

    Hadass, Z.; Powell, J. D.

    1974-01-01

    A method is described for minimizing the sensitivity of multivariable systems to parameter variations. The variable parameters are considered as random variables and their effect is included in a quadratic performance index. The performance index is a weighted sum of the state and control covariances that stem from both the random system disturbances and the parameter uncertainties. The numerical solution of the problem is described, and application of the method to several initially sensitive tracking systems is discussed. The factor of sensitivity reduction was typically 2 or 3 relative to a design based on random system noise only, yet state RMS values increased by only about a factor of two.

  8. Habitat Design Optimization and Analysis

    NASA Technical Reports Server (NTRS)

    SanSoucie, Michael P.; Hull, Patrick V.; Tinker, Michael L.

    2006-01-01

    Long-duration surface missions to the Moon and Mars will require habitats for the astronauts. The materials chosen for the habitat walls play a direct role in the protection against the harsh environments found on the surface. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Advanced optimization techniques are necessary for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat design optimization tool utilizing genetic algorithms has been developed. Genetic algorithms use a "survival of the fittest" philosophy, where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multi-objective formulation of structural analysis, heat loss, radiation protection, and meteoroid protection. This paper presents the research and development of this tool.
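The "survival of the fittest" idea above can be sketched with a minimal real-coded genetic algorithm on a toy single-objective function; the habitat tool itself is multi-objective over structural, thermal, radiation, and meteoroid criteria, and all names and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy objective standing in for a habitat-wall design score (lower is better);
# the optimum is at every design variable equal to 0.3.
def fitness(x):
    return np.sum((x - 0.3) ** 2, axis=1)

pop = rng.random((40, 5))                 # 40 candidate designs, 5 genes each
for _ in range(60):
    # Tournament selection: the fitter of two randomly drawn designs survives
    i, j = rng.integers(0, 40, (2, 40))
    parents = np.where((fitness(pop[i]) < fitness(pop[j]))[:, None],
                       pop[i], pop[j])
    # Uniform crossover with a shuffled mate, then Gaussian mutation
    mask = rng.random((40, 5)) < 0.5
    children = np.where(mask, parents, parents[rng.permutation(40)])
    children += 0.02 * rng.standard_normal((40, 5))
    pop = np.clip(children, 0.0, 1.0)

best = pop[np.argmin(fitness(pop))]
```

A multi-objective version (as in the paper) would replace the scalar fitness with Pareto ranking, but the selection/crossover/mutation loop is the same skeleton.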

  9. Advanced Fuel Cycle Economic Sensitivity Analysis

    SciTech Connect

    David Shropshire; Kent Williams; J.D. Smith; Brent Boore

    2006-12-01

    A fuel cycle economic analysis was performed on four fuel cycles to provide a baseline for initial cost comparison, using the Gen IV Economic Modeling Work Group G4 ECON spreadsheet model, Decision Programming Language software, the 2006 Advanced Fuel Cycle Cost Basis report, industry cost data, international papers, and nuclear-power-related cost studies from MIT, Harvard, and the University of Chicago. The analysis developed and compared the fuel cycle cost component of the total cost of energy for a wide range of fuel cycles including: once through, thermal with fast recycle, continuous fast recycle, and thermal recycle.

  10. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  11. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  12. Selecting step sizes in sensitivity analysis by finite differences

    NASA Technical Reports Server (NTRS)

    Iott, J.; Haftka, R. T.; Adelman, H. M.

    1985-01-01

    This paper deals with methods for obtaining near-optimum step sizes for finite difference approximations to first derivatives with particular application to sensitivity analysis. A technique denoted the finite difference (FD) algorithm, previously described in the literature and applicable to one derivative at a time, is extended to the calculation of several simultaneously. Both the original and extended FD algorithms are applied to sensitivity analysis for a data-fitting problem in which derivatives of the coefficients of an interpolation polynomial are calculated with respect to uncertainties in the data. The methods are also applied to sensitivity analysis of the structural response of a finite-element-modeled swept wing. In a previous study, this sensitivity analysis of the swept wing required a time-consuming trial-and-error effort to obtain a suitable step size, but it proved to be a routine application for the extended FD algorithm herein.
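The trade-off the FD algorithm navigates can be sketched directly: for a forward difference, truncation error grows with the step while round-off error grows as the step shrinks, so a near-optimal step for a first derivative lies near the square root of machine epsilon. The toy function below is illustrative, not the swept-wing model:

```python
import numpy as np

# Forward-difference derivative of f(x) = exp(x) at x = 1 over a range of
# step sizes. Truncation error ~ h/2 * f'' grows with h, while round-off
# error ~ eps * |f| / h grows as h shrinks; the minimum total error sits
# near h ~ sqrt(eps) for a first derivative in double precision.
f, exact = np.exp, np.exp(1.0)
steps = 10.0 ** np.arange(-1.0, -15.0, -1.0)
errors = np.abs((f(1.0 + steps) - f(1.0)) / steps - exact)
h_best = steps[np.argmin(errors)]
```

Step-size selection algorithms like the one extended in the paper search for this error minimum without knowing the exact derivative.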

  13. Sensitivity Analysis and Computation for Partial Differential Equations

    DTIC Science & Technology

    2008-03-14

    Example, Journal of Mathematical Analysis and Applications, to appear. [22] John R. Singler, Transition to Turbulence, Small Disturbances, and Sensitivity Analysis II: The Navier-Stokes Equations, Journal of Mathematical Analysis and Applications, to appear. [23] A. M. Stuart and A. R. Humphries

  14. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.

  15. Sensitivity Analysis of the Gap Heat Transfer Model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Schmidt, Rodney C.; Williamson, Richard; Perez, Danielle

    2014-10-01

    This report summarizes the results of a NEAMS project focused on sensitivity analysis of the heat transfer model for the gap between the fuel rod and the cladding used in the BISON fuel performance code of Idaho National Laboratory. Using the gap heat transfer models in BISON, the sensitivity of the responses to the modeling parameters is investigated. The study results in a quantitative assessment of the role of various parameters in the analysis of gap heat transfer in nuclear fuel.

  16. Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds

    DTIC Science & Technology

    2009-01-01

    erties, such as log P, would aid in estimating a chemical's environmental fate and toxicology when applied to QSAR modeling. Granted, QSAR models, such... ERDC TR-09-3, Strategic Environmental Research and Development Program, January 2009: Sensitivity Analysis of QSAR Models for Assessing Novel Military Compounds.

  17. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-01-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
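The sampling-plus-regression workflow can be sketched as follows, with a hypothetical two-input stand-in for the thermodynamic model (the real SCTEG model has many more parameters and a nonlinear response surface):

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_dims, rng):
    """Stratified Latin hypercube sample on the unit cube: one point per
    stratum in every dimension, strata paired by random shuffling."""
    u = (rng.random((n_samples, n_dims))
         + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    return u

# Hypothetical stand-in for the thermodynamic model: a smooth response of
# two inputs plus small noise (coefficients 3.0, 2.0, -1.5 are illustrative).
X = latin_hypercube(200, 2, rng)
y = 3.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.01 * rng.standard_normal(200)

# Least-squares regression surface and coefficient of determination R^2
A = np.column_stack([np.ones(200), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
r2 = 1.0 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
```

Elasticities (relative sensitivities) then follow from the fitted coefficients evaluated at a nominal operating point, which is how the paper ranks parameter influence.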

  18. Plans for a sensitivity analysis of bridge-scour computations

    USGS Publications Warehouse

    Dunn, David D.; Smith, Peter N.

    1993-01-01

    Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.

  19. Sensitivity analysis on an AC600 aluminum skin component

    NASA Astrophysics Data System (ADS)

    Mendiguren, J.; Agirre, J.; Mugarra, E.; Galdos, L.; Saenz de Argandoña, E.

    2016-08-01

    New materials are being introduced into the car body in order to reduce weight and fulfil international CO2 emission regulations. Among them, the application of aluminum alloys for skin panels is increasing. Even if these alloys are beneficial for the car design, the manufacturing of these components becomes more complex. In this regard, numerical simulations have become a necessary tool for die designers. There are multiple factors affecting the accuracy of these simulations, e.g. hardening, anisotropy, lubrication, and elastic behavior. Numerous studies have been conducted in recent years on the stamping of high strength steel components and on developing new anisotropic models for aluminum cup drawings. However, the impact of correct modelling on the latest aluminums for the manufacturing of skin panels has not yet been analyzed. In this work, first, the new AC600 aluminum alloy of JLR-Novelis is characterized for anisotropy, kinematic hardening, friction coefficient, and elastic behavior. Next, a sensitivity analysis is conducted on the simulation of a U channel (with drawbeads). Then, the numerical and experimental results are correlated in terms of springback and failure. Finally, some conclusions are drawn.

  20. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.

  1. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered: static system; steady one-dimensional inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. This method has shown greater efficiency and stability, with equal or better accuracy, than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS.
With minor modifications, it should also be easily implemented on other
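The sensitivity equations LSENS solves can be illustrated on the simplest kinetics case, a single first-order reaction: differentiating dy/dt = -k*y with respect to k gives ds/dt = -k*s - y for s = dy/dk. The sketch below integrates the coupled system with SciPy rather than the decoupled direct method and LSODE used by LSENS:

```python
import math
from scipy.integrate import solve_ivp

# Direct-method sensitivity for a single first-order reaction A -> B.
# State z = [y, s] with y the concentration and s = dy/dk its sensitivity:
#   dy/dt = -k*y,    ds/dt = -k*s - y,    y(0) = y0, s(0) = 0.
k, y0 = 2.0, 1.0

def rhs(t, z):
    y, s = z
    return [-k * y, -k * s - y]

sol = solve_ivp(rhs, (0.0, 2.0), [y0, 0.0],
                rtol=1e-10, atol=1e-12, t_eval=[2.0])
y_end, s_end = sol.y[:, 0]
# Analytic check: y = y0*exp(-k*t), so dy/dk = -t*y0*exp(-k*t)
```

The decoupled direct method exploits the fact that the sensitivity equations are linear with the same Jacobian as the state equations, so they can reuse the integrator's matrix factorizations.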

  2. Advancing sensitivity analysis to precisely characterize temporal parameter dominance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Pfannerstill, Matthias; Strauch, Michael; Reusser, Dominik; Lüdtke, Stefan; Volk, Martin; Gupta, Hoshin; Fohrer, Nicola

    2016-04-01

    Parameter sensitivity analysis is a strategy for detecting dominant model parameters. A temporal sensitivity analysis calculates daily sensitivities of model parameters. This allows a precise characterization of temporal patterns of parameter dominance and an identification of the related discharge conditions. To achieve this goal, the diagnostic information derived from the temporal parameter sensitivity is advanced by including discharge information in three steps. In a first step, the temporal dynamics are analyzed by means of daily time series of parameter sensitivities. As the sensitivity analysis method, we used the Fourier Amplitude Sensitivity Test (FAST) applied directly to the modelled discharge. Next, the daily sensitivities are analyzed in combination with the flow duration curve (FDC). Through this step, we determine whether high sensitivities of model parameters are related to specific discharges. Finally, parameter sensitivities are separately analyzed for five segments of the FDC and presented as monthly averaged sensitivities. In this way, seasonal patterns of dominant model parameters are provided for each FDC segment. For this methodical approach, we used two contrasting catchments (an upland and a lowland catchment) to illustrate how parameter dominances change seasonally. For all of the FDC segments, the groundwater parameters are dominant in the lowland catchment, while in the upland catchment the controlling parameters change seasonally between parameters from different runoff components. The three methodical steps lead to clear temporal patterns, which represent the typical characteristics of the study catchments. Our methodical approach thus provides a clear idea of how the hydrological dynamics are controlled by model parameters for certain discharge magnitudes during the year. 
Overall, these three methodical steps precisely characterize model parameters and improve the understanding of process dynamics in hydrological
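The flow-duration-curve segmentation in the third step can be sketched as follows; the synthetic discharge series and segment edges are illustrative, not the study catchments:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of FDC segmentation: sort a (synthetic) daily discharge series,
# attach exceedance probabilities, and split into five segments.
q = rng.lognormal(mean=1.0, sigma=0.8, size=365)   # synthetic daily discharge
q_sorted = np.sort(q)[::-1]                        # highest flow first
exceed = np.arange(1, 366) / 366.0                 # exceedance probability
edges = [0.0, 0.05, 0.20, 0.70, 0.95, 1.0]         # illustrative segment edges
segments = [q_sorted[(exceed > lo) & (exceed <= hi)]
            for lo, hi in zip(edges[:-1], edges[1:])]
```

Daily FAST sensitivities can then be binned by the segment their discharge falls into, yielding the per-segment, monthly averaged dominance patterns described above.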

  3. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization.

    PubMed

    Adkins, D E; McClay, J L; Vunck, S A; Batman, A M; Vann, R E; Clark, S L; Souza, R P; Crowley, J J; Sullivan, P F; van den Oord, E J C G; Beardsley, P M

    2013-11-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In this study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine (MA)-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate, FDR <0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent MA levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization.
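The metabolome-wide significance filter (FDR < 0.05) is commonly the Benjamini-Hochberg step-up procedure; a minimal sketch, not tied to the study's actual association statistics:

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control: return a boolean mask of rejected
    null hypotheses at false discovery rate alpha."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Step-up rule: largest k with p_(k) <= alpha * k / m
    below = ranked <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True    # reject the k smallest p-values
    return reject
```

Applied to the 301 metabolite association tests per behavioral dimension, this is the kind of filter that yields the four reported metabolome-wide significant hits.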

  4. Lock Acquisition and Sensitivity Analysis of Advanced LIGO Interferometers

    NASA Astrophysics Data System (ADS)

    Martynov, Denis

The Laser Interferometer Gravitational-wave Observatory (LIGO) consists of two complex, large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz - 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe. The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. The instrument sensitivity improved from run to run through the efforts of the commissioning team, and initial LIGO reached its design sensitivity during the last science run, which ended in October 2010. In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014. This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40 m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers. The first part of this thesis is devoted to methods for bringing the interferometer into the linear regime, where collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail. Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in

  5. A study of turbulent flow with sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Dwyer, H. A.; Peterson, T.

    1980-07-01

In this paper a new type of analysis is introduced that can be used in numerical fluid mechanics. The method, known as sensitivity analysis, has been widely used in the field of automatic control theory. Sensitivity analysis addresses, in a systematic way, the question of how the solution to an equation will change due to variations in the equation's parameters and boundary conditions. An important application is turbulent flow, where there exists a large uncertainty in the models used for closure. In the present work the analysis is applied to the three-dimensional planetary boundary layer equations, and sensitivity equations are generated for various parameters in the turbulence model. The solution of these equations with the proper techniques leads to considerable insight into the flow field and its dependence on turbulence parameters. The analysis also allows for unique decompositions of the parameter dependence and is efficient.
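The sensitivity-equation idea can be sketched on a hypothetical scalar model rather than the planetary boundary layer equations: differentiating the governing equation with respect to a parameter k yields an auxiliary ODE for s = dy/dk that is integrated alongside the state and can be checked against a finite difference in the parameter:

```python
# Sensitivity-equation sketch for a hypothetical scalar model (a stand-in
# for the turbulence-model parameters discussed above):
#   state:        dy/dt = -k*y,         y(0) = 1
#   sensitivity:  ds/dt = -k*s - y,     s(0) = 0,   where s = dy/dk
def integrate(k, t_end=2.0, dt=1e-4):
    y, s = 1.0, 0.0
    for _ in range(round(t_end / dt)):
        # Forward-Euler step; y and s advance together using old values.
        y, s = y + dt * (-k * y), s + dt * (-k * s - y)
    return y, s

k = 0.8
y, s = integrate(k)

# Cross-check the sensitivity equation against a central finite difference.
eps = 1e-4
fd = (integrate(k + eps)[0] - integrate(k - eps)[0]) / (2 * eps)
print(s, fd)
```

For this model the exact sensitivity is s(t) = -t e^{-kt}, so both numbers should sit near -0.404 at t = 2, k = 0.8.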

  6. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    PubMed

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms.

  7. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
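The variance-decomposition sampling underlying such Sobol estimators can be sketched with a pick-freeze (Saltelli-type) estimator on a cheap deterministic toy function; the paper's setting would replace this function with the stochastic simulator itself, and the function and sample size here are purely illustrative:

```python
import numpy as np

def model(x):                     # toy stand-in for the stochastic simulator
    return x[:, 0] + 0.5 * x[:, 1]

rng = np.random.default_rng(1)
n = 200_000
a = rng.uniform(size=(n, 2))      # base sample A
b = rng.uniform(size=(n, 2))      # independent sample B
ab1 = np.column_stack([b[:, 0], a[:, 1]])   # x1 taken from B, x2 kept from A

fa, fb, fab1 = model(a), model(b), model(ab1)
s1 = np.mean(fb * (fab1 - fa)) / fa.var()   # Saltelli-style first-order index
print(round(s1, 3))
```

For this toy function the analytic first-order index of x1 is Var(x1)/Var(f) = 0.8, which the estimate should approach as n grows.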

  8. International comparison of criteria for evaluating sensitization of PRTR-designated chemical substances.

    PubMed

    Murakami, Tomoe; Oyama, Tsunehiro; Isse, Toyohi; Ogawa, Masanori; Sugie, Takuya; Kawamoto, Toshihiro

    2007-03-01

In this study, we aim to compare the criteria for sensitizers among national organizations in various countries and international organizations, and to specify whether each Pollutant Release and Transfer Register (PRTR)-designated chemical substance is classified as a sensitizer by each organization. The definition of sensitizing chemicals and the designation of respective sensitizers according to the PRTR law, the Japan Society for Occupational Health (JSOH), the American Conference of Governmental Industrial Hygienists (ACGIH), the European Union (EU), and the Deutsche Forschungsgemeinschaft (DFG) were studied. Of the 435 PRTR-designated chemical substances, 15 are listed as sensitizers according to the PRTR law, 16 as sensitizers of the airway and 21 as sensitizers of the skin by JSOH, 12 as sensitizers (no discrimination) by ACGIH, 19 (airway) and 85 (skin) by the EU, and 15 (airway) and 43 (skin) by the DFG. Only 9 substances were designated as sensitizers by all of these organizations. The variation in the designation of sensitizers is accounted for by differences in the classification criteria and in the grouping of chemical substances. JSOH limits the definition of sensitizers to substances that induce allergic reactions in humans and uses only human data. Other organizations utilize not only human evidence but also appropriate animal tests. In addition, the EU designates isocyanates as sensitizers, except those for which there is evidence showing that they do not cause respiratory sensitization. The worldwide enforcement of the globally harmonized system (GHS) of classification and labeling of chemicals could promote not only the consistent designation of sensitizers among national and international organizations, but also the development of testing guidelines and classification criteria for mixtures.

  9. Bi-harmonic cantilever design for improved measurement sensitivity in tapping-mode atomic force microscopy.

    PubMed

    Loganathan, Muthukumaran; Bristow, Douglas A

    2014-04-01

    This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.

  10. Bi-harmonic cantilever design for improved measurement sensitivity in tapping-mode atomic force microscopy

    SciTech Connect

    Loganathan, Muthukumaran; Bristow, Douglas A.

    2014-04-15

    This paper presents a method and cantilever design for improving the mechanical measurement sensitivity in the atomic force microscopy (AFM) tapping mode. The method uses two harmonics in the drive signal to generate a bi-harmonic tapping trajectory. Mathematical analysis demonstrates that the wide-valley bi-harmonic tapping trajectory is as much as 70% more sensitive to changes in the sample topography than the standard single-harmonic trajectory typically used. Although standard AFM cantilevers can be driven in the bi-harmonic tapping trajectory, they require large forcing at the second harmonic. A design is presented for a bi-harmonic cantilever that has a second resonant mode at twice its first resonant mode, thereby capable of generating bi-harmonic trajectories with small forcing signals. Bi-harmonic cantilevers are fabricated by milling a small cantilever on the interior of a standard cantilever probe using a focused ion beam. Bi-harmonic drive signals are derived for standard cantilevers and bi-harmonic cantilevers. Experimental results demonstrate better than 30% improvement in measurement sensitivity using the bi-harmonic cantilever. Images obtained through bi-harmonic tapping exhibit improved sharpness and surface tracking, especially at high scan speeds and low force fields.

  11. Designing novel nano-immunoassays: antibody orientation versus sensitivity

    NASA Astrophysics Data System (ADS)

    Puertas, S.; Moros, M.; Fernández-Pacheco, R.; Ibarra, M. R.; Grazú, V.; de la Fuente, J. M.

    2010-12-01

There is a growing interest in the use of magnetic nanoparticles (MNPs) for their application in quantitative and highly sensitive biosensors. Their use as labels of biological recognition events, with detection by means of a magnetic method, constitutes a very promising strategy for quantitative, highly sensitive lateral-flow assays. In this paper, we report the importance of nanoparticle functionalization for improving the sensitivity of a lateral-flow immunoassay. More precisely, we have found that immobilization of IgG anti-hCG on MNPs through its polysaccharide moieties allows more successful recognition of the hCG hormone. Although we have used the detection of hCG as a model in this work, the strategy of binding antibodies to MNPs through their sugar chains is applicable to other antibodies. It has huge potential, as it will be very useful for the development of quantitative, highly sensitive lateral-flow assays for use in human and veterinary medicine, food and beverage manufacturing, pharmaceutical, medical-biologics and personal-care product production, environmental remediation, etc.

  12. Design and operational parameters of a rooftop rainwater harvesting system: definition, sensitivity and verification.

    PubMed

    Mun, J S; Han, M Y

    2012-01-01

The appropriate design and evaluation of a rainwater harvesting (RWH) system is necessary to improve system performance and the stability of the water supply. The main design parameters (DPs) of an RWH system are rainfall, catchment area, collection efficiency, tank volume and water demand. Its operational parameters (OPs) include rainwater use efficiency (RUE), water saving efficiency (WSE) and cycle number (CN). A sensitivity analysis of a rooftop RWH system's DPs with respect to its OPs reveals that the recommended ratio of tank volume to catchment area (V/A) for an RWH system in Seoul, South Korea lies between 0.03 and 0.08 in terms of the rate of change in RUE. The appropriate design value of V/A varies with D/A. Extra tank volume, up to a V/A of 0.15∼0.2, can also be used if necessary to secure more water. Accordingly, a suitable value or range of the DPs should be determined from the sensitivity analysis in order to optimize the design of an RWH system or improve its operating efficiency. The operational data employed in this study, which were used to validate the design and evaluation method of an RWH system, were obtained from the system in use at a dormitory complex at Seoul National University (SNU) in Korea. The results from these operational data are in good agreement with those used in the initial simulation. The proposed method and the results of this research will be useful in evaluating and comparing the performance of RWH systems. It is found that RUE can be increased by expanding the variety of rainwater uses, particularly in the high rainfall season.
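A daily water-balance loop is the kind of simulation that sits behind such a sensitivity analysis. In the sketch below, the parameter values, the synthetic rainfall, and the exact RUE/WSE definitions are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

# Minimal daily water-balance sketch of a rooftop RWH tank.
def simulate(rain_mm, area_m2, tank_m3, demand_m3, runoff_coeff=0.9):
    storage, used, inflow_total = 0.0, 0.0, 0.0
    for r in rain_mm:
        inflow = runoff_coeff * (r / 1000.0) * area_m2   # m3 of roof runoff
        inflow_total += inflow
        storage = min(storage + inflow, tank_m3)         # overflow spills
        supply = min(demand_m3, storage)                 # meet daily demand
        storage -= supply
        used += supply
    rue = used / inflow_total                  # assumed RUE: used / collected
    wse = used / (demand_m3 * len(rain_mm))    # assumed WSE: used / demanded
    return rue, wse

rng = np.random.default_rng(2)
rain = rng.exponential(4.0, size=365)          # synthetic daily rainfall, mm
area = 100.0
for v_over_a in (0.03, 0.08):                  # tank volume / catchment area
    rue, wse = simulate(rain, area, tank_m3=v_over_a * area, demand_m3=0.3)
    print(v_over_a, round(rue, 2), round(wse, 2))
```

Increasing V/A raises both efficiencies here, mirroring the trade-off the study quantifies.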

  13. Parameter sensitivity analysis of a simplified electrochemical and thermal model for Li-ion batteries aging

    NASA Astrophysics Data System (ADS)

    Edouard, C.; Petit, M.; Forgez, C.; Bernard, J.; Revel, R.

    2016-09-01

In this work, a simplified electrochemical and thermal model that can predict both the physicochemical and the aging behavior of Li-ion batteries is studied. A sensitivity analysis of all its physical parameters is performed in order to find out their influence on the model output, based on simulations under various conditions. The results give hints on whether a parameter needs particular attention when measured or identified, and on the conditions (e.g. temperature, discharge rate) under which it is most sensitive. A specific simulation profile is designed for parameters involved in the aging equations in order to determine their sensitivity. Finally, a step-wise method is followed to limit the influence of parameter values when identifying some of them, according to their relative sensitivity from the study. This sensitivity analysis and the subsequent step-wise identification method show very good results, such as a better fit of the simulated cell voltage to experimental data.

  14. DESIGN ANALYSIS FOR THE NAVAL SNF WASTE PACKAGE

    SciTech Connect

    T.L. Mitchell

    2000-05-31

The purpose of this analysis is to demonstrate the design of the naval spent nuclear fuel (SNF) waste package (WP) using the Waste Package Department's (WPD) design methodologies and processes described in the ''Waste Package Design Methodology Report'' (CRWMS M&O [Civilian Radioactive Waste Management System Management and Operating Contractor] 2000b). The calculations that support the design of the naval SNF WP will be discussed; however, only a sub-set of such analyses will be presented and shall be limited to those identified in the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The objective of this analysis is to describe the naval SNF WP design method and to show that the design of the naval SNF WP complies with the ''Naval Spent Nuclear Fuel Disposal Container System Description Document'' (CRWMS M&O 1999a) and Interface Control Document (ICD) criteria for Site Recommendation. Additional criteria for the design of the naval SNF WP have been outlined in Section 6.2 of the ''Waste Package Design Sensitivity Report'' (CRWMS M&O 2000c). The scope of this analysis is restricted to the design of the naval long WP containing one naval long SNF canister. This WP is representative of the WPs that will contain both naval short SNF and naval long SNF canisters. The following items are included in the scope of this analysis: (1) Providing a general description of the applicable design criteria; (2) Describing the design methodology to be used; (3) Presenting the design of the naval SNF waste package; and (4) Showing compliance with all applicable design criteria. The intended use of this analysis is to support Site Recommendation reports and assist in the development of WPD drawings. Activities described in this analysis were conducted in accordance with the technical product development plan (TPDP) ''Design Analysis for the Naval SNF Waste Package'' (CRWMS M&O 2000a).

  15. Novel design of dual-core microstructured fiber with enhanced longitudinal strain sensitivity

    NASA Astrophysics Data System (ADS)

    Szostkiewicz, Lukasz; Tenderenda, T.; Napierala, M.; Szymański, M.; Murawski, M.; Mergo, P.; Lesiak, P.; Marc, P.; Jaroszewicz, L. R.; Nasilowski, T.

    2014-05-01

The constantly refined technology for manufacturing increasingly complex photonic crystal fibers (PCF) leads to new optical fiber sensor concepts. Ways of enhancing the influence of external factors (such as hydrostatic pressure, temperature or acceleration) on the fiber propagating conditions are commonly investigated in the literature. Longitudinal strain analysis, on the other hand, has been somewhat neglected because of the calculation difficulties posed by the three-dimensional computation. In this paper we show the results of such a 3D numerical simulation and report methods of tuning the fiber strain sensitivity by changing the fiber microstructure and core doping level. Furthermore, our approach allows control over whether the modes' effective refractive index increases or decreases with strain, with the possibility of achieving zero strain sensitivity for specific fiber geometries. The presented numerical analysis is compared with experimental characterization results for the fabricated fibers. Based on this methodology, we propose a novel dual-core fiber design with significantly increased sensitivity to longitudinal strain for optical fiber sensor applications. The reported fiber also satisfies all the conditions necessary for commercial applications, such as good mode matching with standard single-mode fiber, low confinement loss and ease of manufacturing with the stack-and-draw technique. Such a fiber may serve as an integrated Mach-Zehnder interferometer when a highly coherent source is used. With single-mode transmission optimized for 850 nm, we propose using a VCSEL source in order to achieve a low-cost, reliable and compact strain-sensing transducer.

  16. Sensitivity analysis of small circular cylinders as wake control

    NASA Astrophysics Data System (ADS)

    Meneghini, Julio; Patino, Gustavo; Gioria, Rafael

    2016-11-01

We apply a sensitivity analysis with respect to a steady external force to the control of vortex shedding from a circular cylinder using small active and passive control cylinders. We evaluate the changes produced by the device on the flow near the primary instability, the transition to a wake. By means of sensitivity analysis we numerically predict the effective regions in which to place the control devices. The quantitative effect of the hydrodynamic forces produced by the control devices is also obtained by a sensitivity analysis, supporting the prediction of the minimum rotation rate. These results are extrapolated to higher Reynolds numbers. The analysis also provided the positions of combined passive control cylinders that suppress the wake, showing that these particular device positions are adequate to suppress the wake unsteadiness. In both cases the results agree very well with previously published experimental cases of control devices.

  17. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
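The breakdown of conventional methods can be illustrated on the Lorenz system, a standard chaotic toy problem (not the airfoil flow of the paper): nearby trajectories decorrelate completely, yet the long-time average remains well behaved, which is the structure LSS exploits:

```python
import numpy as np

# Integrate the Lorenz system with RK4 and return the final state and the
# long-time average of z (transient discarded).
def lorenz_avg_z(x0, rho=28.0, sigma=10.0, beta=8.0 / 3.0, dt=0.005, n=40_000):
    def f(v):
        return np.array([sigma * (v[1] - v[0]),
                         v[0] * (rho - v[2]) - v[1],
                         v[0] * v[1] - beta * v[2]])
    x = np.array(x0, dtype=float)
    z_sum, count = 0.0, 0
    for i in range(n):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        if i >= n // 4:                      # discard the transient
            z_sum += x[2]; count += 1
    return x, z_sum / count

xa, za = lorenz_avg_z([1.0, 1.0, 1.0])
xb, zb = lorenz_avg_z([1.0, 1.0, 1.0 + 1e-8])  # tiny initial perturbation
print(np.linalg.norm(xa - xb), abs(za - zb))
```

The final states end up far apart (exponential divergence), while the two time averages of z stay close; a naive finite difference through the final states is therefore useless for the average's sensitivity.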

  18. Design Spectrum Analysis in NASTRAN

    NASA Technical Reports Server (NTRS)

    Butler, T. G.

    1984-01-01

The utility of Design Spectrum Analysis is to give a mode-by-mode characterization of the behavior of a design under a given loading. The theory of the design spectrum is discussed after the operations are explained. User instructions are taken up here in three parts: Transient Preface, Maximum Envelope Spectrum, and RMS Average Spectrum, followed by a Summary Table. A single DMAP ALTER packet will provide for all parts of the design spectrum operations. The starting point for getting a modal breakdown of the response to acceleration loading is the Modal Transient rigid format. After eigenvalue extraction, modal vectors need to be isolated in the full set of physical coordinates (P-sized, as opposed to the D-sized vectors in RF 12). After integration for transient response, the results are scanned over the solution time interval for the peak values and for the times at which they occur. A module called SCAN was written to do this job; it organizes these maxima into a diagonal output matrix. The maximum amplification in each mode is applied to the eigenvector of that mode, which then reveals the maximum displacements, stresses, forces and boundary reactions that the structure will experience for a load history, mode by mode. The standard NASTRAN output processors have been modified for this task. It is required that the modes be normalized to mass.

  19. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values with each generated output value. The method is termed 'partial' because adjustments are made for the linear effects of all the other input values in the calculation of the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral
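The PRCC computation described above can be sketched directly: rank-transform the inputs and output, regress out all other inputs from both the input of interest and the output, then correlate the residuals. The toy monotone model below stands in for IMM outputs (all data synthetic):

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    rx = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)  # ranks
    ry = np.argsort(np.argsort(y)).astype(float)
    n, k = X.shape
    out = []
    for j in range(k):
        # Regress out the ranks of all other inputs (with an intercept).
        others = np.column_stack([np.ones(n), np.delete(rx, j, axis=1)])
        res_x = rx[:, j] - others @ np.linalg.lstsq(others, rx[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(3)
X = rng.uniform(size=(1000, 3))
y = 5 * X[:, 0] ** 2 - 2 * X[:, 1] + 0.1 * rng.normal(size=1000)  # x3 inert
print(np.round(prcc(X, y), 2))
```

The nonlinear-but-monotone x1 term still gets a strong positive coefficient, x2 a strong negative one, and the inert x3 a near-zero one, which is exactly why rank-based measures suit nonlinear models like IMM.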

  20. MAP Stability, Design and Analysis

    NASA Technical Reports Server (NTRS)

    Ericsson -Jackson, A.J.; Andrews, S. F.; ODonnell, J. R., Jr.; Markley, F. L.

    1998-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The design and analysis of the MAP attitude control system (ACS) have been refined since work previously reported. The full spacecraft and instrument flexible model was developed in NASTRAN, and the resulting flexible modes were plotted and reduced with the Modal Significance Analysis Package (MSAP). The reduced-order model was used to perform the linear stability analysis for each control mode, the results of which are presented in this paper. Although MAP is going to a relatively disturbance-free Lissajous orbit around the Earth-Sun L2 Lagrange point, a detailed disturbance-torque analysis is required because there are only a small number of opportunities for momentum unloading each year. Environmental torques, including solar pressure at L2, and aerodynamic and gravity gradient during phasing-loop orbits, were calculated and simulated. A simple model of fuel slosh was derived to model its effect on the motion of the spacecraft. In addition, a thruster mode linear impulse controller was developed to meet the accuracy requirements of the phasing loop burns. A dynamic attitude error limiter was added to improve the performance of the ACS during large attitude slews. The result of this analysis is a stable ACS subsystem that meets all of the mission's requirements.

  1. MAP stability, design, and analysis

    NASA Technical Reports Server (NTRS)

    Ericsson-Jackson, A. J.; Andrews, S. F.; O'Donnell, J. R., Jr.; Markley, F. L.

    1998-01-01

    The Microwave Anisotropy Probe (MAP) is a follow-on to the Differential Microwave Radiometer (DMR) instrument on the Cosmic Background Explorer (COBE) spacecraft. The design and analysis of the MAP attitude control system (ACS) have been refined since work previously reported. The full spacecraft and instrument flexible model was developed in NASTRAN, and the resulting flexible modes were plotted and reduced with the Modal Significance Analysis Package (MSAP). The reduced-order model was used to perform the linear stability analysis for each control mode, the results of which are presented in this paper. Although MAP is going to a relatively disturbance-free Lissajous orbit around the Earth-Sun L(2) Lagrange point, a detailed disturbance-torque analysis is required because there are only a small number of opportunities for momentum unloading each year. Environmental torques, including solar pressure at L(2), aerodynamic and gravity gradient during phasing-loop orbits, were calculated and simulated. Thruster plume impingement torques that could affect the performance of the thruster modes were estimated and simulated, and a simple model of fuel slosh was derived to model its effect on the motion of the spacecraft. In addition, a thruster mode linear impulse controller was developed to meet the accuracy requirements of the phasing loop burns. A dynamic attitude error limiter was added to improve the performance of the ACS during large attitude slews. The result of this analysis is a stable ACS subsystem that meets all of the mission's requirements.

  2. Shape design sensitivities using fully automatic 3-D mesh generation

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.

    1990-01-01

    Previous work in three dimensional shape optimization involved specifying design variables by associating parameters directly with mesh points. More recent work has shown the use of fully-automatic mesh generation based upon a parameterized geometric representation. Design variables have been associated with a mathematical model of the part rather than the discretized representation. The mesh generation procedure uses a nonuniform grid intersection technique to place nodal points directly on the surface geometry. Although there exists an associativity between the mesh and the geometrical/topological entities, there is no mathematical functional relationship. This poses a problem during certain steps in the optimization process in which geometry modification is required. For the large geometrical changes which occur at the beginning of each optimization step, a completely new mesh is created. However, for gradient calculations many small changes must be made and it would be too costly to regenerate the mesh for each design variable perturbation. For that reason, a local remeshing procedure has been implemented which operates only on the specific edges and faces associated with the design variable being perturbed. Two realistic design problems are presented which show the efficiency of this process and test the accuracy of the gradient computations.

  3. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Previous work is discussed, and a scheme for generalized sensitivity analysis applicable to ill-defined systems is proposed. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
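The scheme follows the regional (generalized) sensitivity analysis idea: Monte-Carlo sample the parameters, classify each run by the problem-defining behavior, and rank parameters by how far apart their two conditional distributions are. A sketch on a toy model (the model, behavior definition and sample sizes are illustrative):

```python
import numpy as np

# Kolmogorov-Smirnov distance between two empirical samples.
def ks_distance(a, b):
    grid = np.sort(np.concatenate([a, b]))
    cdf = lambda s, g: np.searchsorted(np.sort(s), g, side="right") / len(s)
    return np.max(np.abs(cdf(a, grid) - cdf(b, grid)))

rng = np.random.default_rng(4)
theta = rng.uniform(size=(5000, 3))            # three uncertain parameters
output = theta[:, 0] + 0.3 * theta[:, 1]       # theta_3 has no effect at all
behavior = output < np.median(output)          # problem-defining behavior

# Sensitivity ranking: separation of behavior vs non-behavior samples.
scores = [ks_distance(theta[behavior, j], theta[~behavior, j])
          for j in range(3)]
print(np.round(scores, 2))
```

The dominant parameter separates the two classes strongly, the weak one mildly, and the inert one only at the level of sampling noise, giving the sensitivity ranking without any model gradients.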

  4. Design and analysis of a micromachined gyroscope

    NASA Astrophysics Data System (ADS)

    Zarei, Nilgoon; Leung, Albert; Jones, John D.

    2012-03-01

    This paper describes the simulation and design of a MEMS thermal gyroscope and optimizing the design for increased sensitivity through the use of the Comsol Multiphysics software package. Two different designs are described, and the effects of working fluid properties are explored. A prototype of this device has been fabricated using techniques for rapid prototyping of MEMS transducers.

  5. Sensitivity analysis of a wide-field telescope

    NASA Astrophysics Data System (ADS)

    Lim, Juhee; Lee, Sangon; Moon, Il Kweon; Yang, Ho-Soon; Lee, Jong Ung; Choi, Young-Jun; Park, Jang-Hyun; Jin, Ho

    2013-07-01

    We are developing three ground-based wide-field telescopes. A wide-field Cassegrain telescope consists of two hyperbolic mirrors, aberration correctors and a field flattener for a 2-degree field of view. The diameters of the primary mirror and the secondary mirror are 500 mm and 200 mm, respectively. Corrective optics combined with four lenses, a filter and a window are also considered. For the imaging detection device, we use a charge coupled device (CCD) which has a 4096 × 4096 array with a 9-µm2 pixel size. One of the requirements is that the image motion limit of the opto-mechanical structure be less than 1 pixel size of the CCD on the image plane. To meet this requirement, we carried out an optical design evaluation and a misalignment analysis. Line-of-sight sensitivity equations are obtained from the rigid-body rotation in three directions and the rigid-body translation in three directions. These equations express the image motions at the image plane in terms of the independent motions of the optical components. We conducted a response simulation to evaluate the finite element method models under static load conditions, and the result is represented by the static response function. We show that the wide-field telescope system is stiff and stable enough to be supported and operated during its operating time.

  6. Sensitivity analysis of silicon-on-insulator quadruple Vernier racetrack resonators

    NASA Astrophysics Data System (ADS)

    Boeck, Robert; Chrostowski, Lukas; Jaeger, Nicolas A. F.

    2015-11-01

    We present a theoretical sensitivity analysis of silicon-on-insulator quadruple Vernier racetrack resonators based on varying, one at a time, various fabrication-dependent parameters. These parameters include the waveguide widths, heights, and propagation losses. We show that it should be possible to design a device that meets typical commercial specifications while being tolerant to changes in these parameters.

  7. Automation of primal and sensitivity analysis of transient coupled problems

    NASA Astrophysics Data System (ADS)

    Korelc, Jože

    2009-10-01

    The paper describes a hybrid symbolic-numeric approach to automation of primal and sensitivity analysis of computational models formulated and solved by finite element method. The necessary apparatus for the automation of steady-state, steady-state coupled, transient and transient coupled problems is introduced as combination of a symbolic system, an automatic differentiation (AD) technique and an automatic code generation. For this purpose the paper extends the classical formulation of AD by additional operators necessary for a high abstract description of primal and sensitivity analysis of the typical computational models. An appropriate abstract description for the fully implicit primal and sensitivity analysis of hyperelastic and elasto-plastic problems and a symbolic input for the generation of necessary user subroutines for the two-dimensional, hyperelastic finite element are presented at the end.

  8. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining sample sizes sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, most researchers are not mathematicians or statisticians, so sample size calculation may not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. Approaches to using the tables are also discussed. PMID:27891446
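For readers without access to PASS, entries of the kind tabulated in the paper can be approximated with a standard normal-approximation sample-size formula for a sensitivity estimate. The numbers below are illustrative, not taken from the paper's tables:

```python
import math
from statistics import NormalDist

def sample_size_for_sensitivity(sens, precision, prevalence, alpha=0.05):
    """Subjects needed so that a sensitivity estimate has a 100*(1-alpha)%
    confidence interval of half-width `precision` (normal approximation)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    n_cases = math.ceil(z ** 2 * sens * (1 - sens) / precision ** 2)
    n_total = math.ceil(n_cases / prevalence)  # scale up for disease prevalence
    return n_cases, n_total

cases, total = sample_size_for_sensitivity(sens=0.9, precision=0.05, prevalence=0.2)
# → 139 diseased subjects, 695 subjects overall at 20% prevalence
```

Because sensitivity is estimated only from diseased subjects, the required number of cases must be inflated by the expected prevalence to get the total study size.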

  9. Sensitivity analysis for missing data in regulatory submissions.

    PubMed

    Permutt, Thomas

    2016-07-30

    The National Research Council Panel on Handling Missing Data in Clinical Trials recommended that sensitivity analyses have to be part of the primary reporting of findings from clinical trials. Their specific recommendations, however, seem not to have been taken up rapidly by sponsors of regulatory submissions. The NRC report's detailed suggestions are along rather different lines than what has been called sensitivity analysis in the regulatory setting up to now. Furthermore, the role of sensitivity analysis in regulatory decision-making, although discussed briefly in the NRC report, remains unclear. This paper will examine previous ideas of sensitivity analysis with a view to explaining how the NRC panel's recommendations are different and possibly better suited to coping with present problems of missing data in the regulatory setting. It will also discuss, in more detail than the NRC report, the relevance of sensitivity analysis to decision-making, both for applicants and for regulators. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.

  10. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
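The first-order Sobol' indices referred to here are commonly estimated with a Saltelli-style Monte Carlo scheme. A self-contained sketch on a toy two-input function (VarroaPop itself is not reproduced; the model and sample size are placeholders):

```python
import random
import statistics

def sobol_first_order(f, k, n, rng):
    """Saltelli-style Monte Carlo estimator of first-order Sobol' indices
    for a model f with k independent uniform(0, 1) inputs."""
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    var = statistics.pvariance(yA + yB)
    indices = []
    for i in range(k):
        # A with column i replaced by the corresponding column of B
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]
        yABi = [f(x) for x in ABi]
        num = statistics.mean(b * (abi - a) for a, b, abi in zip(yA, yB, yABi))
        indices.append(num / var)
    return indices

rng = random.Random(42)
S = sobol_first_order(lambda x: x[0] + 0.1 * x[1], k=2, n=5000, rng=rng)
# analytic first-order indices for this toy model: S1 ≈ 0.990, S2 ≈ 0.010
```

Each index is the fraction of output variance attributable to one input alone; second-order indices, as used in the study, require additional cross-substituted sample matrices.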

  11. Multiobjective sensitivity analysis and optimization of distributed hydrologic model MOBIDIC

    NASA Astrophysics Data System (ADS)

    Yang, J.; Castelli, F.; Chen, Y.

    2014-10-01

    Calibration of distributed hydrologic models usually involves dealing with a large number of distributed parameters and with optimization problems in which multiple, often conflicting, objectives arise in a natural fashion. This study presents a multiobjective sensitivity and optimization approach to handle these problems for the MOBIDIC (MOdello di Bilancio Idrologico DIstribuito e Continuo) distributed hydrologic model, which combines two sensitivity analysis techniques (the Morris method and the state-dependent parameter (SDP) method) with the multiobjective optimization (MOO) approach ɛ-NSGAII (Non-dominated Sorting Genetic Algorithm-II). This approach was implemented to calibrate MOBIDIC in an application to the Davidson watershed, North Carolina, with three objective functions, i.e., the standardized root mean square error (SRMSE) of logarithmic transformed discharge, the water balance index, and the mean absolute error of the logarithmic transformed flow duration curve, and its results were compared with those of a single objective optimization (SOO) with the traditional Nelder-Mead simplex algorithm used in MOBIDIC, taking the objective function as the Euclidean norm of these three objectives. Results show that (1) the two sensitivity analysis techniques are effective and efficient for determining the sensitive processes and insensitive parameters: surface runoff and evaporation are very sensitive processes with respect to all three objective functions, while groundwater recession and soil hydraulic conductivity are not sensitive and were excluded from the optimization. (2) Both MOO and SOO lead to acceptable simulations; e.g., for MOO, the average Nash-Sutcliffe value is 0.75 in the calibration period and 0.70 in the validation period. (3) Evaporation and surface runoff show similar importance for the watershed water balance, while the contribution of baseflow can be ignored. (4) Compared to SOO, which was dependent on the initial starting location, MOO provides more

  12. Theoretical design and screening of alkyne bridged triphenyl zinc porphyrins as sensitizer candidates for dye-sensitized solar cells.

    PubMed

    Zhang, Xianxi; Chen, Qianqian; Sun, Huafei; Pan, Tingting; Hu, Guiqi; Ma, Ruimin; Dou, Jianmin; Li, Dacheng; Pan, Xu

    2014-01-24

    Alkyne-bridged porphyrins have proved to be very promising sensitizers for dye-sensitized solar cells (DSSCs), with photo-to-electric conversion efficiencies of up to 11.9% alone and 12.3% co-sensitized with other sensitizers. Developing better porphyrin sensitizers with wider electronic absorption spectra, to further improve the efficiencies of the corresponding solar cells, remains of great significance for the application of DSSCs. A series of triphenyl zinc porphyrins (ZnTriPP), differing in the nature of a pendant acceptor group and the conjugated bridge between the porphyrin nucleus and the acceptor unit, were modeled and their electronic and spectral properties calculated using density functional theory. Comparison among the candidates, and with prior experimental results for compounds used in DSSCs, suggests that molecules with a relatively longer conjugated linker and a strong electron-withdrawing group such as cyanide adjacent to the carboxylic acid group provide wider electronic absorption spectra and higher photo-to-electric conversion efficiencies. The dye candidates ZnTriPPE, ZnTriPPM, ZnTriPPQ, ZnTriPPR and ZnTriPPS designed in the current work appear promising to provide photo-to-electric conversion efficiencies comparable to the record 11.9% of the alkyne-bridged porphyrin sensitizer YD2-o-C8 reported previously.

  13. Theoretical design and screening of alkyne bridged triphenyl zinc porphyrins as sensitizer candidates for dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Zhang, Xianxi; Chen, Qianqian; Sun, Huafei; Pan, Tingting; Hu, Guiqi; Ma, Ruimin; Dou, Jianmin; Li, Dacheng; Pan, Xu

    2014-01-01

    Alkyne-bridged porphyrins have proved to be very promising sensitizers for dye-sensitized solar cells (DSSCs), with photo-to-electric conversion efficiencies of up to 11.9% alone and 12.3% co-sensitized with other sensitizers. Developing better porphyrin sensitizers with wider electronic absorption spectra, to further improve the efficiencies of the corresponding solar cells, remains of great significance for the application of DSSCs. A series of triphenyl zinc porphyrins (ZnTriPP), differing in the nature of a pendant acceptor group and the conjugated bridge between the porphyrin nucleus and the acceptor unit, were modeled and their electronic and spectral properties calculated using density functional theory. Comparison among the candidates, and with prior experimental results for compounds used in DSSCs, suggests that molecules with a relatively longer conjugated linker and a strong electron-withdrawing group such as cyanide adjacent to the carboxylic acid group provide wider electronic absorption spectra and higher photo-to-electric conversion efficiencies. The dye candidates ZnTriPPE, ZnTriPPM, ZnTriPPQ, ZnTriPPR and ZnTriPPS designed in the current work appear promising to provide photo-to-electric conversion efficiencies comparable to the record 11.9% of the alkyne-bridged porphyrin sensitizer YD2-o-C8 reported previously.

  14. Sensitivity Analysis and Optimization of Aerodynamic Configurations with Blend Surfaces

    NASA Technical Reports Server (NTRS)

    Thomas, A. M.; Tiwari, S. N.

    1997-01-01

    A novel (geometrical) parametrization procedure using solutions to a suitably chosen fourth order partial differential equation is used to define a class of airplane configurations. Inclusive in this definition are surface grids, volume grids, and grid sensitivity. The general airplane configuration has wing, fuselage, vertical tail and horizontal tail. The design variables are incorporated into the boundary conditions, and the solution is expressed as a Fourier series. The fuselage has a circular cross section, and the radius is an algebraic function of four design parameters and an independent computational variable. Volume grids are obtained through an application of the Control Point Form method. Graphical interface software is developed which dynamically changes the surface of the airplane configuration with changes in the input design variables. The software is user friendly and is targeted toward the initial conceptual development of aerodynamic configurations. Grid sensitivity with respect to surface design parameters, and aerodynamic sensitivity coefficients based on potential flow, are obtained using the automatic differentiation precompiler tool ADIFOR. Aerodynamic shape optimization of the complete aircraft with twenty-four design variables is performed. Unstructured and structured volume grids and Euler solutions are obtained with standard software to demonstrate the feasibility of the new surface definition.

  15. Sensitivity analysis of dynamic biological systems with time-delays

    PubMed Central

    2010-01-01

    Background Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. Results We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. Conclusions By comparing with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without programming background to do dynamic sensitivity analysis on complex
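The direct (forward) sensitivity method that the paper automates can be shown on a model small enough to differentiate by hand: for dx/dt = -k*x, the sensitivity s = ∂x/∂k satisfies ds/dt = -x - k*s, and both equations are integrated together. This sketch uses explicit Euler for brevity; the paper's algorithm is adaptive and evaluates the Jacobian by automatic differentiation:

```python
import math

def direct_sensitivity(k, x0, t_end, dt=1e-4):
    """Direct (forward) sensitivity method for dx/dt = -k*x.
    The sensitivity s = dx/dk satisfies ds/dt = -x - k*s and is
    integrated alongside the state (explicit Euler for brevity)."""
    x, s = x0, 0.0
    for _ in range(round(t_end / dt)):
        # tuple assignment: both right-hand sides use the old (x, s)
        x, s = x + dt * (-k * x), s + dt * (-x - k * s)
    return x, s

x, s = direct_sensitivity(k=0.5, x0=1.0, t_end=1.0)
# analytic solution: x(t) = x0*exp(-k*t), s(t) = -t*x0*exp(-k*t)
exact_x, exact_s = math.exp(-0.5), -math.exp(-0.5)
```

For a DDE the right-hand side would also depend on delayed states x(t - τ), which is where automated Jacobian evaluation becomes valuable.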

  16. Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures

    DTIC Science & Technology

    2005-03-01

    Sensitivity results are reported with respect to the nominal alloy composition at the center of the weld surface (Point 6 of Figure 7). Final Report, Sensitivity Analysis for Dynamic Failure and Damage in Metallic Structures; Office of Naval Research, 800 North Quincy Street, Arlington; reporting period ending 3/31/05.

  17. Sensitivity analysis of the fission gas behavior model in BISON.

    SciTech Connect

    Swiler, Laura Painton; Pastore, Giovanni; Perez, Danielle; Williamson, Richard

    2013-05-01

    This report summarizes the result of a NEAMS project focused on sensitivity analysis of a new model for the fission gas behavior (release and swelling) in the BISON fuel performance code of Idaho National Laboratory. Using the new model in BISON, the sensitivity of the calculated fission gas release and swelling to the involved parameters and the associated uncertainties is investigated. The study results in a quantitative assessment of the role of intrinsic uncertainties in the analysis of fission gas behavior in nuclear fuel.

  18. Preliminary sensitivity analysis of the Devonian shale in Ohio

    SciTech Connect

    Covatch, G.L.

    1985-06-01

    A preliminary sensitivity analysis of gas reserves in Devonian shale in Ohio was made on the six partitioned areas, based on a payout time of 3 years. Data sets were obtained from Lewin and Associates for the six partitioned areas in Ohio and used as a base case for the METC sensitivity analysis. A total of five different well stimulation techniques were evaluated in both the METC and Lewin studies. The five techniques evaluated were borehole shooting, a small radial stimulation, a large radial stimulation, a small vertical fracture, and a large vertical fracture.

  19. Stable locality sensitive discriminant analysis for image recognition.

    PubMed

    Gao, Quanxue; Liu, Jingjing; Cui, Kai; Zhang, Hailin; Wang, Xiaogang

    2014-06-01

    Locality Sensitive Discriminant Analysis (LSDA) is one of the prevalent discriminant approaches based on manifold learning for dimensionality reduction. However, LSDA ignores the intra-class variation that characterizes the diversity of data, resulting in unstableness of the intra-class geometrical structure representation and not good enough performance of the algorithm. In this paper, a novel approach is proposed, namely stable locality sensitive discriminant analysis (SLSDA), for dimensionality reduction. SLSDA constructs an adjacency graph to model the diversity of data and then integrates it in the objective function of LSDA. Experimental results in five databases show the effectiveness of the proposed approach.

  20. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
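The variogram analogy at the heart of VARS can be illustrated with its basic building block, the directional variogram of a response surface. This is a Monte Carlo estimate on a toy linear model, not the STAR-VARS sampling strategy itself:

```python
import random

def directional_variogram(f, k, h, n, rng):
    """gamma_i(h) = 0.5 * E[(f(x + h*e_i) - f(x))^2], the directional
    variogram of the response surface, estimated by Monte Carlo."""
    gammas = []
    for i in range(k):
        acc = 0.0
        for _ in range(n):
            x = [rng.random() * (1 - h) for _ in range(k)]  # keep x + h inside [0, 1]
            xp = list(x)
            xp[i] += h
            acc += (f(xp) - f(x)) ** 2
        gammas.append(0.5 * acc / n)
    return gammas

rng = random.Random(1)
g = directional_variogram(lambda x: 5 * x[0] + x[1], k=2, h=0.1, n=1000, rng=rng)
# for a linear response the variogram is exactly 0.5*(a_i*h)^2: (0.125, 0.005)
```

Evaluating gamma_i over a range of perturbation scales h is what lets VARS span derivative-like (small h) and variance-like (large h) views of sensitivity.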

  1. Optical design and analysis program.

    PubMed

    Powell, I

    1978-11-01

    An optical design and analysis program structured for operation on a minicomputer has been developed at NRC (National Research Council of Canada). It has been designed to be used interactively giving the user both flexibility and ease of operation. The computer on which it runs at present is a Digital PDP11 with a memory of around 28K, and this represents a great saving in computer costs when compared with those of a large computer upon which most lens design work is carried out. This program has capabilities for optimizing a lens system, for pupil exploration, for fitting the computed wavefront aberration to a polynomial, and for evaluating the diffraction optical transfer function. Although only ten finite rays are traced in the optimization routine, the aberrations computed, together with the Seidel aberrations obtained from the paraxial ray trace, provide the user with adequate control of the aberrations over both aperture and field. A Double Gauss and a Maksutov-Cassegrain system are used as practical examples to illustrate this.

  2. Efficient sensitivity analysis method for chaotic dynamical systems

    SciTech Connect

    Liao, Haitao

    2016-05-15

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time averaged quantities for chaotic dynamical systems. The key idea is to recast the time averaged integration term in the form of differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time averaged integration variable and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique to calculate the Lagrange multipliers leads to a better performance for the convergence problem and the computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.

  3. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    SciTech Connect

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to both physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics application, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
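The parameter-screening step (step 2 above) is often carried out with Morris-style elementary effects. Below is a simplified one-at-a-time variant using radial sampling rather than full Morris trajectories; the test function and parameter counts are hypothetical:

```python
import random
import statistics

def morris_mu_star(f, k, n_points, delta, rng):
    """mu*: mean absolute elementary effect of each input, a screening
    measure for ranking parameter importance."""
    effects = [[] for _ in range(k)]
    for _ in range(n_points):
        x = [rng.random() * (1 - delta) for _ in range(k)]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta  # one-at-a-time step in input i
            effects[i].append(abs((f(xp) - fx) / delta))
    return [statistics.mean(e) for e in effects]

rng = random.Random(7)
mu = morris_mu_star(lambda x: 10 * x[0] + 0.1 * x[1] ** 2, k=2,
                    n_points=200, delta=0.1, rng=rng)
# input 0 dominates: mu*_0 = 10 exactly for this linear term; mu*_1 is small
```

Inputs with small mu* would be fixed at nominal values, reducing the parameter set passed to the quantitative sensitivity analysis in step 3.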

  4. The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers.

    PubMed

    Cui, Yan; Wang, Pingshan

    2014-12-01

    Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development where sample volumes are small with a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers that are based on power-dividers (PDs) and quadrature-hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring glucose-water solution at a concentration level that is ten times lower than some recent RF sensors while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues like system automation, model improvement at high frequencies, and interferometer scaling.

  5. The Design and Operation of Ultra-Sensitive and Tunable Radio-Frequency Interferometers

    PubMed Central

    Cui, Yan; Wang, Pingshan

    2015-01-01

    Dielectric spectroscopy (DS) is an important technique for scientific and technological investigations in various areas. DS sensitivity and operating frequency ranges are critical for many applications, including lab-on-chip development where sample volumes are small with a wide range of dynamic processes to probe. In this work, we present the design and operation considerations of radio-frequency (RF) interferometers that are based on power-dividers (PDs) and quadrature-hybrids (QHs). Such interferometers are proposed to address the sensitivity and frequency tuning challenges of current DS techniques. Verified algorithms together with mathematical models are presented to quantify material properties from scattering parameters for three common transmission line sensing structures, i.e., coplanar waveguides (CPWs), conductor-backed CPWs, and microstrip lines. A high-sensitivity and stable QH-based interferometer is demonstrated by measuring glucose–water solution at a concentration level that is ten times lower than some recent RF sensors while our sample volume is ~1 nL. Composition analysis of ternary mixture solutions is also demonstrated with a PD-based interferometer. Further work is needed to address issues like system automation, model improvement at high frequencies, and interferometer scaling. PMID:26549891

  6. Sobol‧ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol‧ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol‧ method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol‧ sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that the interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol‧ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.

  7. Polarization sensitivity analysis of an earth remote sensing instrument - The MODIS-N phase B study

    NASA Technical Reports Server (NTRS)

    Waluschka, E.; Silverglate, P.; Ftaclas, C.; Turner, A.

    1992-01-01

    Polarization analysis software that employs Jones matrix formalism to calculate the polarization sensitivity of an instrument design was developed at Hughes Danbury Optical Systems. The code is capable of analyzing the full ray bundle at its angles of incidence for each optical surface. Input is based on the system ray trace and the thin film coating design at each surface. The MODIS-N (Moderate Resolution Imaging Spectrometer) system is used to demonstrate that it is possible to meet stringent requirements on polarization insensitivity associated with planned remote sensing instruments. Analysis indicates that a polarization sensitivity less than or equal to 2 percent was achieved in all desired spectral bands at all pointing angles, per specification. Polarization sensitivities were as high as 10 percent in similar remote sensing instruments.

  8. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number is analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
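    The standard tool behind this kind of ranking is the normalized forward sensitivity index, S_p = (p / R0) dR0/dp, evaluated for each parameter p. A minimal sketch follows; the R0 expression is an illustrative SIR-type formula, not the five-compartment Lassa model of the paper.

    ```python
    # Normalized forward sensitivity index of the basic reproduction number,
    # S_p = (p / R0) * dR0/dp, estimated by central differences.

    def sensitivity_index(R0, params, name, h=1e-6):
        p = params[name]
        up = {**params, name: p * (1 + h)}
        dn = {**params, name: p * (1 - h)}
        dR0_dp = (R0(**up) - R0(**dn)) / (2 * p * h)
        return p * dR0_dp / R0(**params)

    # Illustrative SIR-type reproduction number, NOT the Lassa model itself
    R0 = lambda beta, gamma, mu: beta / (gamma + mu)
    pars = {"beta": 0.35, "gamma": 0.10, "mu": 0.02}
    ```

    A positive index means R0 rises with the parameter (here S_beta = 1 exactly), a negative index means it falls (S_gamma = -gamma / (gamma + mu)); ranking parameters by |S_p| yields the ordering reported in such studies.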

  9. Sensitivity analysis applied to stalled airfoil wake and steady control

    NASA Astrophysics Data System (ADS)

    Patino, Gustavo; Gioria, Rafael; Meneghini, Julio

    2014-11-01

    The sensitivity of an eigenvalue to base flow modifications induced by an external force is applied to the global unstable modes associated with the onset of vortex shedding in the wake of a stalled airfoil. In this work, the flow regime is close to the first instability of the system and its associated eigenvalue/eigenmode is determined. The sensitivity analysis to a general punctual external force allows establishing the regions where control devices must be placed in order to stabilize the global modes. Different types of steady control devices, passive and active, are used in the regions predicted by the sensitivity analysis to check the vortex shedding suppression, i.e. the primary instability bifurcation is delayed. The new eigenvalue, modified by the action of the device, is also calculated. Finally, the spectral finite element method is employed to determine flow characteristics before and after the bifurcation in order to cross-check the results.

  10. Uncertainty and sensitivity analysis and its applications in OCD measurements

    NASA Astrophysics Data System (ADS)

    Vagos, Pedro; Hu, Jiangtao; Liu, Zhuan; Rabello, Silvio

    2009-03-01

    This article describes an Uncertainty & Sensitivity Analysis package, a mathematical tool that can be an effective time-shortcut for optimizing OCD models. By including real system noises in the model, an accurate method for predicting measurements uncertainties is shown. The assessment, in an early stage, of the uncertainties, sensitivities and correlations of the parameters to be measured drives the user in the optimization of the OCD measurement strategy. Real examples are discussed revealing common pitfalls like hidden correlations and simulation results are compared with real measurements. Special emphasis is given to 2 different cases: 1) the optimization of the data set of multi-head metrology tools (NI-OCD, SE-OCD), 2) the optimization of the azimuth measurement angle in SE-OCD. With the uncertainty and sensitivity analysis result, the right data set and measurement mode (NI-OCD, SE-OCD or NI+SE OCD) can be easily selected to achieve the best OCD model performance.

  11. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
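    The core idea, finding "nominal +/- tolerance" fields anywhere in an input file and replacing them with perturbed draws, can be sketched in a few lines. The uniform distribution and the field syntax below are assumptions for illustration; the paper's codes and input formats are not reproduced.

    ```python
    import random
    import re

    TOL = re.compile(r"(-?\d+(?:\.\d+)?)\s*\+/-\s*(\d+(?:\.\d+)?)")

    def blur_inputs(text, rng=random):
        """Replace every 'nominal +/- tol' field with a uniform random draw
        from [nominal - tol, nominal + tol], leaving the rest of the input
        file untouched -- the format-agnostic idea behind the approach."""
        def draw(match):
            nominal, tol = float(match.group(1)), float(match.group(2))
            return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
        return TOL.sub(draw, text)

    deck = "density = 5.25 +/- 0.01\nlabel = untouched\n"
    blurred = blur_inputs(deck)
    ```

    Because only the tolerance annotation is parsed, the same function works unchanged on input decks for any of the three codes, which is the generality the abstract emphasizes.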

  12. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the differences among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between the pre-model data space and the post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  13. Beyond the GUM: variance-based sensitivity analysis in metrology

    NASA Astrophysics Data System (ADS)

    Lira, I.

    2016-07-01

    Variance-based sensitivity analysis is a well established tool for evaluating the contribution of the uncertainties in the inputs to the uncertainty in the output of a general mathematical model. While the literature on this subject is quite extensive, it has not found widespread use in metrological applications. In this article we present a succinct review of the fundamentals of sensitivity analysis, in a form that should be useful to most people familiarized with the Guide to the Expression of Uncertainty in Measurement (GUM). Through two examples, it is shown that in linear measurement models, no new knowledge is gained by using sensitivity analysis that is not already available after the terms in the so-called ‘law of propagation of uncertainties’ have been computed. However, if the model behaves non-linearly in the neighbourhood of the best estimates of the input quantities—and if these quantities are assumed to be statistically independent—sensitivity analysis is definitely advantageous for gaining insight into how they can be ranked according to their importance in establishing the uncertainty of the measurand.
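    The claim about linear models can be checked numerically: for a linear measurement model with independent inputs, the first-order Sobol' indices coincide with the normalized squared terms of the law of propagation of uncertainty. The coefficients and uncertainties below are invented for illustration.

    ```python
    import numpy as np

    c = np.array([2.0, -1.0, 0.5])    # sensitivity coefficients dY/dX_i
    u = np.array([0.10, 0.30, 0.80])  # standard uncertainties u(x_i)

    # GUM law of propagation of uncertainties: u_y^2 = sum (c_i u_i)^2
    lpu_terms = (c * u) ** 2
    lpu_share = lpu_terms / lpu_terms.sum()

    # Variance-based first-order Sobol' indices, estimated by Monte Carlo.
    # For a purely linear model, S_i equals the squared correlation of X_i and Y.
    rng = np.random.default_rng(1)
    X = rng.normal(0.0, u, size=(200_000, 3))
    Y = X @ c
    S = np.array([np.corrcoef(X[:, i], Y)[0, 1] ** 2 for i in range(3)])
    ```

    The two rankings agree term by term, which is the article's point: sensitivity analysis adds information only once the model behaves nonlinearly around the best estimates.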

  14. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  15. Sensitivity analysis of the Ohio phosphorus risk index

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The Phosphorus (P) Index is a widely used tool for assessing the vulnerability of agricultural fields to P loss; yet, few of the P Indices developed in the U.S. have been evaluated for their accuracy. Sensitivity analysis is one approach that can be used prior to calibration and field-scale testing ...

  16. Optimization of Forming Processes in Microstructure Sensitive Design

    NASA Astrophysics Data System (ADS)

    Garmestani, H.; Li, D. S.

    2004-06-01

    Optimization of the forming processes from initial microstructures of raw materials to desired microstructures of final products is an important topic in materials design. The processing path model proposed in this study gives an explicit mathematical solution for how the microstructure evolves during thermomechanical processing. Based on a conservation principle in the orientation space (originally proposed by Bunge), this methodology is independent of the underlying deformation mechanisms. The evolution of the texture coefficients is modeled using a texture evolution matrix calculated from the experimental results. For the same material using the same processing method, the texture evolution matrix is the same; it does not change with the initial texture. This processing path model provides functions of processing paths and streamlines.

  17. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.

  18. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. This method has shown greater efficiency and stability, with equal or better accuracy, than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. 
With minor modifications, it should also be easily implemented on other

  19. Molecular design for enhanced sensitivity of a FRET aptasensor built on the graphene oxide surface.

    PubMed

    Ueno, Yuko; Furukawa, Kazuaki; Matsuo, Kota; Inoue, Suzuyo; Hayashi, Katsuyoshi; Hibino, Hiroki

    2013-11-14

    We designed a biomolecular probe for highly sensitive protein detection by modifying an aptamer with a DNA spacer. The spacer controls the distance between a fluorescence dye and a quencher, which is crucial for FRET-based sensors. We successfully demonstrated an improvement in the sensitivity of an on-chip graphene oxide aptasensor.

  20. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis studies are insufficiently accurate and of limited reference value, because their mathematical models are relatively simple, changes of the load and of the initial displacement of the piston are ignored, and no experimental verification is conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, as well as the state equations. By deriving the time-varying coefficient matrix and time-varying free-term matrix of the sensitivity equations, the expressions of the sensitivity equations based on the nonlinear mathematical model are obtained. According to the structure parameters of the hydraulic drive unit, working parameters, fluid transmission characteristics and measured friction-velocity curves, the simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink simulation platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step response curves under different constant loads indicates that the developed nonlinear mathematical model is sufficiently accurate. Then, the sensitivity function time-history curves of seventeen parameters are obtained, based on each state vector time-history curve of the step response characteristic. The maximum value of the displacement variation percentage and the sum of the absolute values of displacement variation over the sampling time are both taken as sensitivity indexes. These sensitivity index values are calculated and shown visually in histograms under different working conditions, and the change rules are analyzed. Then the sensitivity

  1. Systems design and analysis of the microwave radiometer spacecraft

    NASA Technical Reports Server (NTRS)

    Garrett, L. B.

    1981-01-01

    Systems design and analysis data were generated for a microwave radiometer spacecraft concept using the Large Advanced Space Systems (LASS) computer aided design and analysis program. Parametric analyses were conducted for perturbations off the nominal-orbital-altitude/antenna-reflector-size and for control/propulsion system options. Optimized spacecraft mass, structural element design, and on-orbit loading data are presented. Propulsion and rigid-body control system sensitivities to current and advanced technology are established. Spacecraft-induced and environmental effects on antenna performance (surface accuracy, defocus, and boresight offset) are quantified, and structural member frequencies and modal shapes are defined.

  2. Superconducting Accelerating Cavity Pressure Sensitivity Analysis and Stiffening

    SciTech Connect

    Rodnizki, J; Ben Aliz, Y; Grin, A; Horvitz, Z; Perry, A; Weissman, L; Davis, G Kirk; Delayen, Jean R.

    2014-12-01

    The Soreq Applied Research Accelerator Facility (SARAF) design is based on a 40 MeV 5 mA light ions superconducting RF linac. Phase-I of SARAF delivers up to 2 mA CW proton beams in an energy range of 1.5 - 4.0 MeV. The maximum beam power that we have reached is 5.7 kW. Today, the main limiting factor to reach higher ion energy and beam power is related to the HWR sensitivity to the liquid helium coolant pressure fluctuations. The HWR sensitivity to helium pressure is about 60 Hz/mbar. The cavities had been designed, a decade ago, to be soft in order to enable tuning of their novel shape. However, the cavities turned out to be too soft. In this work we found that increasing the rigidity of the cavities in the vicinity of the external drift tubes may reduce the cavity sensitivity by a factor of three. A preliminary design to increase the cavity rigidity is presented.

  3. Sensitive Chiral Analysis via Microwave Three-Wave Mixing

    NASA Astrophysics Data System (ADS)

    Patterson, David; Doyle, John M.

    2013-07-01

    We demonstrate chirality-induced three-wave mixing in the microwave regime, using rotational transitions in cold gas-phase samples of 1,2-propanediol and 1,3-butanediol. We show that bulk three-wave mixing, which can only be realized in a chiral environment, provides a sensitive, species-selective probe of enantiomeric excess and is applicable to a broad class of molecules. The doubly resonant condition provides simultaneous identification of species and of handedness, which should allow sensitive chiral analysis even within a complex mixture.

  4. Design Sensitivities of the Superconducting Parallel-Bar Cavity

    SciTech Connect

    De Silva, Subashini U.; Delayen, Jean D.

    2010-09-01

    The superconducting parallel-bar cavity has properties that make it attractive as a deflecting or crabbing rf structure. For example, it is under consideration as an rf separator for the Jefferson Lab 12 GeV upgrade and as a crabbing structure for a possible LHC luminosity upgrade. In order to maintain the purity of the deflecting mode and avoid mixing with the nearby accelerating mode caused by geometrical imperfections, a minimum frequency separation is needed, which depends on the expected deviations from perfect symmetry. We have done an extensive analysis of the impact of several geometrical imperfections on the properties of the parallel-bar cavities and the effects on the beam, and present the results in this paper.

  5. Pressure-Sensitive Paints Advance Rotorcraft Design Testing

    NASA Technical Reports Server (NTRS)

    2013-01-01

    The rotors of certain helicopters can spin at speeds as high as 500 revolutions per minute. As the blades slice through the air, they flex, moving into the wind and back out, experiencing pressure changes on the order of thousands of times a second and even higher. All of this makes acquiring a true understanding of rotorcraft aerodynamics a difficult task. A traditional means of acquiring aerodynamic data is to conduct wind tunnel tests using a vehicle model outfitted with pressure taps and other sensors. These sensors add significant costs to wind tunnel testing while only providing measurements at discrete locations on the model's surface. In addition, standard sensor solutions do not work for pulling data from a rotor in motion. "Typical static pressure instrumentation can't handle that," explains Neal Watkins, electronics engineer in Langley Research Center's Advanced Sensing and Optical Measurement Branch. "There are dynamic pressure taps, but your costs go up by a factor of five to ten if you use those. In addition, recovery of the pressure tap readings is accomplished through slip rings, which allow only a limited amount of sensors and can require significant maintenance throughout a typical rotor test." One alternative to sensor-based wind tunnel testing is pressure sensitive paint (PSP). A coating of a specialized paint containing luminescent material is applied to the model. When exposed to an LED or laser light source, the material glows. The glowing material tends to be reactive to oxygen, explains Watkins, which causes the glow to diminish. The more oxygen that is present (or the more air present, since oxygen exists in a fixed proportion in air), the less the painted surface glows. Imaged with a camera, the areas experiencing greater air pressure show up darker than areas of less pressure. "The paint allows for a global pressure map as opposed to specific points," says Watkins. 
With PSP, each pixel recorded by the camera becomes an optical pressure

  6. Improved PID controller design for unstable time delay processes based on direct synthesis method and maximum sensitivity

    NASA Astrophysics Data System (ADS)

    Vanavil, B.; Krishna Chaitanya, K.; Seshagiri Rao, A.

    2015-06-01

    In this paper, a proportional-integral-derivative controller in series with a lead-lag filter is designed for control of the open-loop unstable processes with time delay based on direct synthesis method. Study of the performance of the designed controllers has been carried out on various unstable processes. Set-point weighting is considered to reduce the undesirable overshoot. The proposed scheme consists of only one tuning parameter, and systematic guidelines are provided for selection of the tuning parameter based on the peak value of the sensitivity function (Ms). Robustness analysis has been carried out based on sensitivity and complementary sensitivity functions. Nominal and robust control performances are achieved with the proposed method and improved closed-loop performances are obtained when compared to the recently reported methods in the literature.
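    The tuning criterion named in the abstract can be evaluated directly: Ms is the peak of the sensitivity function over frequency. The sketch below computes it on a frequency grid for a generic illustrative loop transfer function, not one of the paper's unstable processes.

    ```python
    import numpy as np

    def max_sensitivity(L, w=np.logspace(-3, 3, 20_000)):
        """Peak sensitivity Ms = max_w |1 / (1 + L(jw))| on a frequency grid.
        Typical designs target Ms roughly in the 1.4-2.0 range."""
        return float(np.max(1.0 / np.abs(1.0 + L(1j * w))))

    # Illustrative loop: integrator plus lag, L(s) = 2 / (s (s + 1))
    Ms = max_sensitivity(lambda s: 2.0 / (s * (s + 1.0)))
    ```

    A single tuning parameter can then be chosen by sweeping it and picking the value whose loop attains the desired Ms, which mirrors the guideline-based selection described in the abstract.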

  7. Design, analysis and test verification of advanced encapsulation systems

    NASA Technical Reports Server (NTRS)

    Garcia, A., III

    1982-01-01

    An analytical methodology for advanced encapsulation designs was developed. From these methods, design sensitivities are established for the development of photovoltaic module criteria and the definition of needed research tasks. Analytical models were developed to perform optical, thermal, and electrical analyses on candidate encapsulation systems. From these analyses, several candidate systems were selected for qualification testing. Additionally, test specimens of various types were constructed and tested to determine the validity of the analysis methodology developed. Identified deficiencies and/or discrepancies between the analytical models and relevant test data were corrected, improving the prediction capability of the analytical models. Encapsulation engineering generalities, principles, and design aids for photovoltaic module designers were generated.

  8. Global sensitivity analysis in wastewater treatment plant model applications: prioritizing sources of uncertainty.

    PubMed

    Sin, Gürkan; Gernaey, Krist V; Neumann, Marc B; van Loosdrecht, Mark C M; Gujer, Willi

    2011-01-01

    This study demonstrates the usefulness of global sensitivity analysis in wastewater treatment plant (WWTP) design to prioritize sources of uncertainty and quantify their impact on performance criteria. The study, which is performed with the Benchmark Simulation Model no. 1 plant design, complements a previous paper on input uncertainty characterisation and propagation (Sin et al., 2009). A sampling-based sensitivity analysis is conducted to compute standardized regression coefficients. It was found that this method is able to decompose satisfactorily the variance of plant performance criteria (with R(2) > 0.9) for effluent concentrations, sludge production and energy demand. This high extent of linearity means that the plant performance criteria can be described as linear functions of the model inputs under the defined plant conditions. In effect, the system of coupled ordinary differential equations can be replaced by multivariate linear models, which can be used as surrogate models. The importance ranking based on the sensitivity measures demonstrates that the most influential factors involve ash content and influent inert particulate COD among others, largely responsible for the uncertainty in predicting sludge production and effluent ammonium concentration. While these results were in agreement with process knowledge, the added value is that the global sensitivity methods can quantify the contribution of the variance of significant parameters; e.g., ash content explains 70% of the variance in sludge production. Further, the importance of formulating sensitivity analysis scenarios that match the purpose of the model application needs to be highlighted. Overall, the global sensitivity analysis proved a powerful tool for explaining and quantifying uncertainties as well as providing insight into devising useful ways for reducing uncertainties in the plant performance. This information can help engineers design robust WWTPs.
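    Standardized regression coefficients are straightforward to compute from a Monte Carlo sample: fit an ordinary least-squares model and rescale each slope by std(x_i)/std(y). The sketch below uses an invented linear-plus-noise sample for illustration, not the BSM1 plant model.

    ```python
    import numpy as np

    def standardized_regression_coefficients(X, y):
        """Fit y ~ X by ordinary least squares and scale each slope by
        std(x_i) / std(y).  SRC_i^2 approximates input i's first-order share
        of Var(y) when the model is near-linear; R^2 reports how much of the
        output variance the linear decomposition explains."""
        A = np.column_stack([np.ones(len(y)), X])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r2 = 1.0 - np.var(y - A @ coef) / np.var(y)
        return coef[1:] * X.std(axis=0) / y.std(), r2

    # Illustrative Monte Carlo sample (not the BSM1 plant model)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 3))
    y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.5 * rng.normal(size=10_000)
    src, r2 = standardized_regression_coefficients(X, y)
    ```

    As in the study, the decomposition is trustworthy only when R^2 is high; a low R^2 signals nonlinearity and calls for a variance-based method instead.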

  9. Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety

    SciTech Connect

    Broadhead, B.L.; Childs, R.L.; Rearden, B.T.

    1999-09-20

    Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL among others) has increased recently as a result of potential use in the area of criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in the determination of applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper will describe the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.

  10. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
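    The second method's analytic eigenvalue derivative, dλ/dp = yᵀ(∂A/∂p)x / (yᵀx) with x and y the right and left eigenvectors, is easy to cross-check against a finite difference on a small example. The matrix below is arbitrary, not a flutter system.

    ```python
    import numpy as np

    def eigenvalue_sensitivity(A, dA):
        """d(lambda)/dp = y^T (dA/dp) x / (y^T x) for the largest real
        eigenvalue, using the right (x) and left (y) eigenvectors."""
        lam, V = np.linalg.eig(A)
        i = np.argmax(lam.real)
        x = V[:, i]
        lamL, W = np.linalg.eig(A.T)          # eigenvectors of A^T = left eigenvectors of A
        y = W[:, np.argmin(np.abs(lamL - lam[i]))]
        return (y @ dA @ x) / (y @ x)

    A = np.array([[2.0, 1.0], [0.5, 3.0]])
    dA = np.array([[1.0, 0.0], [0.0, 0.0]])   # dA/dp for the parameter p = A[0, 0]
    analytic = eigenvalue_sensitivity(A, dA)

    h = 1e-6                                  # central-difference cross-check
    fd = (np.linalg.eigvals(A + h * dA).real.max()
          - np.linalg.eigvals(A - h * dA).real.max()) / (2 * h)
    ```

    The analytic form needs one eigendecomposition per design point, whereas pure finite differencing repeats the full flutter solution for every parameter, which is the efficiency gap the paper's three methods trade against.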

  11. Automated Simulation For Analysis And Design

    NASA Technical Reports Server (NTRS)

    Cantwell, E.; Shenk, Tim; Robinson, Peter; Upadhye, R.

    1992-01-01

    Design Assistant Workstation (DAWN) software being developed to facilitate simulation of qualitative and quantitative aspects of behavior of life-support system in spacecraft, chemical-processing plant, heating and cooling system of large building, or any of variety of systems including interacting process streams and processes. Used to analyze alternative design scenarios or specific designs of such systems. Expert system will automate part of design analysis: reason independently by simulating design scenarios and return to designer with overall evaluations and recommendations.

  12. Sensitivity Analysis and Optimal Control of Anthroponotic Cutaneous Leishmania

    PubMed Central

    Zamir, Muhammad; Zaman, Gul; Alshomrani, Ali Saleh

    2016-01-01

    This paper is focused on the transmission dynamics and optimal control of Anthroponotic Cutaneous Leishmania. The threshold condition R0 for initial transmission of infection is obtained by the next generation method. The biological sense of the threshold condition is investigated and discussed in detail. The sensitivity analysis of the reproduction number is presented and the most sensitive parameters are highlighted. On the basis of the sensitivity analysis, some control strategies are introduced in the model. These strategies positively reduce the effect of the parameters with high sensitivity indices on the initial transmission. Finally, an optimal control strategy is presented by taking into account the cost associated with the control strategies. It is also shown that an optimal control exists for the proposed control problem. The goal of the optimal control problem is to minimize the cost associated with the control strategies and the chances of infectious humans, exposed humans and the vector population becoming infected. Numerical simulations are carried out with the help of a fourth-order Runge-Kutta procedure. PMID:27505634

  13. Navigation Design and Analysis for the Orion Cislunar Exploration Missions

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Holt, Greg; Gay, Robert; Zanetti, Renato

    2014-01-01

    This paper details the design and analysis of the cislunar optical navigation system being proposed for the Orion Earth-Moon (EM) missions. In particular, it presents the mathematics of the navigation filter. It also presents the sensitivity analysis that has been performed to understand the performance of the proposed system, with particular attention paid to entry flight path angle constraints and the DELTA V performance.

  14. Computational aspects of sensitivity calculations in transient structural analysis

    NASA Technical Reports Server (NTRS)

    Greene, William H.; Haftka, Raphael T.

    1988-01-01

    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.
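    The truncation-error trade-off between the first two techniques can be seen on a toy response; the damped-oscillation response below is illustrative, not the five-span-beam problem of the paper.

    ```python
    import numpy as np

    # Response of an illustrative damped oscillation at time t, with the decay
    # rate p as the design parameter; the exact sensitivity at t = 1 is
    # d/dp [exp(-p t) cos(5 t)] = -t exp(-p t) cos(5 t).
    def response(p, t=1.0):
        return np.exp(-p * t) * np.cos(5.0 * t)

    p, h = 0.3, 1e-3
    exact = -np.exp(-p) * np.cos(5.0)
    fwd = (response(p + h) - response(p)) / h            # O(h) truncation error
    ctr = (response(p + h) - response(p - h)) / (2 * h)  # O(h^2) truncation error
    ```

    The central difference costs one extra response evaluation per parameter but reduces the truncation error by an order in h; in practice both are also limited from below by condition (round-off) error as h shrinks, which is the balance the paper examines.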

  15. Wing-Design And -Analysis Code

    NASA Technical Reports Server (NTRS)

    Darden, Christine M.; Carlson, Harry W.

    1990-01-01

    WINGDES2 computer program provides wing-design algorithm based on modified linear theory taking into account effects of attainable leading-edge thrust. Features improved numerical accuracy and additional capabilities. Provides analysis as well as design capability and applicable to both subsonic and supersonic flow. Replaces earlier wing-design code designated WINGDES (see LAR-13315). Written in FORTRAN V.

  16. Sensitivity analysis techniques for models of human behavior.

    SciTech Connect

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
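    The global, variance-based methods the abstract contrasts with traditional ones can be illustrated by a brute-force first-order Sobol' index estimate on a toy model with an input interaction (the model and sample sizes here are invented for illustration):

    ```python
    import random

    random.seed(42)

    def model(x1, x2):
        # Toy "behavioral" response with a nonlinear interaction term
        return x1 + 2.0 * x2 + 5.0 * x1 * x2

    def first_order_index(which, n_outer=400, n_inner=400):
        """First-order Sobol' index S_i = Var(E[Y | X_i]) / Var(Y),
        estimated by nested Monte Carlo with U(0,1) inputs."""
        means = []
        for _ in range(n_outer):
            fixed = random.random()
            acc = 0.0
            for _ in range(n_inner):
                free = random.random()
                acc += model(fixed, free) if which == 1 else model(free, fixed)
            means.append(acc / n_inner)
        mu = sum(means) / n_outer
        var_cond = sum((m - mu) ** 2 for m in means) / (n_outer - 1)
        ys = [model(random.random(), random.random()) for _ in range(n_outer * n_inner)]
        my = sum(ys) / len(ys)
        var_y = sum((y - my) ** 2 for y in ys) / (len(ys) - 1)
        return var_cond / var_y

    s1 = first_order_index(1)   # analytic value ~0.35
    s2 = first_order_index(2)   # analytic value ~0.59
    # s1 + s2 < 1: the shortfall is the interaction effect that local,
    # one-at-a-time perturbations would miss entirely.
    ```
    
    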

  17. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Hu, Rui; Blonigan, Patrick

    2014-06-01

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned "least squares shadowing (LSS) problem". The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  18. Adaptive approach for nonlinear sensitivity analysis of reaction kinetics.

    PubMed

    Horenko, Illia; Lorenz, Sönke; Schütte, Christof; Huisinga, Wilhelm

    2005-07-15

    We present a unified approach for linear and nonlinear sensitivity analysis of models of reaction kinetics that are stated in terms of systems of ordinary differential equations (ODEs). The approach is based on the reformulation of the ODE problem as a density transport problem described by a Fokker-Planck equation. The resulting multidimensional partial differential equation is herein solved by extending the TRAIL algorithm originally introduced by Horenko and Weiser in the context of molecular dynamics (J. Comp. Chem. 2003, 24, 1921), and is discussed in comparison with Monte Carlo techniques. The extended TRAIL approach is fully adaptive and readily allows one to study the influence of nonlinear dynamical effects. We illustrate the scheme in application to an enzyme-substrate model problem, performing sensitivity analysis with respect to initial concentrations and parameter values.


  19. Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations

    SciTech Connect

    Wang, Qiqi Hu, Rui Blonigan, Patrick

    2014-06-15

    The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.

  20. Objective analysis of the ARM IOP data: method and sensitivity

    SciTech Connect

    Cedarwall, R; Lin, J L; Xie, S C; Yio, J J; Zhang, M H

    1999-04-01

    Motivated by the need to obtain accurate objective analyses of field experimental data to force physical parameterizations in numerical models, this paper first reviews the existing objective analysis methods and interpolation schemes that are used to derive atmospheric wind divergence, vertical velocity, and advective tendencies. Advantages and disadvantages of each method are discussed. It is shown that considerable uncertainties in the analyzed products can result from the use of different analysis schemes and even more from different implementations of a particular scheme. The paper then describes a hybrid approach to combine the strengths of the regular grid method and the line-integral method, together with a variational constraining procedure for the analysis of field experimental data. In addition to the use of upper air data, measurements at the surface and at the top of the atmosphere are used to constrain the upper air analysis to conserve column-integrated mass, water, energy, and momentum. Analyses are shown for measurements taken in the Atmospheric Radiation Measurement Program's (ARM) July 1995 Intensive Observational Period (IOP). Sensitivity experiments are carried out to test the robustness of the analyzed data and to reveal the uncertainties in the analysis. It is shown that the variational constraining process significantly reduces the sensitivity of the final data products.

  1. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  2. Graphical methods for the sensitivity analysis in discriminant analysis

    SciTech Connect

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretative compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.
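    The case-deletion idea described above can be sketched with a two-class Gaussian classifier in one dimension (the classifier, the data, and the influence summary are simplified stand-ins, not the paper's discriminant-analysis method):

    ```python
    import math

    def fit(data):
        """Pooled-variance Gaussian model: class means plus a common std
        (equal priors assumed)."""
        m = {c: sum(xs) / len(xs) for c, xs in data.items()}
        ss = sum((x - m[c]) ** 2 for c, xs in data.items() for x in xs)
        n = sum(len(xs) for xs in data.values())
        return m, math.sqrt(ss / (n - 2))

    def posterior_a(x, m, s):
        """P(class 'a' | x) under the fitted model."""
        la = math.exp(-((x - m["a"]) ** 2) / (2 * s * s))
        lb = math.exp(-((x - m["b"]) ** 2) / (2 * s * s))
        return la / (la + lb)

    data = {"a": [0.0, 0.1, 0.2, 3.0],   # 3.0 is an atypical point
            "b": [2.0, 2.1, 2.2]}

    def influence(cls, idx):
        """Max shift in the other points' posteriors when (cls, idx) is omitted."""
        m0, s0 = fit(data)
        reduced = {c: [x for j, x in enumerate(xs) if not (c == cls and j == idx)]
                   for c, xs in data.items()}
        m1, s1 = fit(reduced)
        pts = [x for xs in reduced.values() for x in xs]
        return max(abs(posterior_a(x, m0, s0) - posterior_a(x, m1, s1)) for x in pts)
    ```

    Omitting the atypical point moves the posteriors of the remaining points far more than omitting a typical one, which is exactly the per-point movement the proposed graphical display visualizes.
    
    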

  3. A Sensitivity Analysis of Entry Age Normal Military Retirement Costs.

    DTIC Science & Technology

    1983-09-01

    sensitivity analysis of both the individual and aggregate entry age normal actuarial cost models under differing economic, managerial and legal assumptions... actuarial cost models under differing economic, managerial and legal assumptions. In addition to the above, a set of simple estimating equations... actuarially computed variables are listed since the model uses each pay grade's individual actuarial data (e.g. the life expectancy of a retiring

  4. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.
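    The response-surface idea, replacing the expensive debris/strike model with a cheap fitted surrogate for sensitivity sweeps, can be sketched with a quadratic least-squares fit in one input (the target function and design points are invented; the actual model has several inputs):

    ```python
    def fit_quadratic(xs, ys):
        """Least-squares fit of y ~ a + b*x + c*x^2 via the 3x3 normal equations."""
        n = len(xs)
        S = lambda p: sum(x ** p for x in xs)
        T = lambda p: sum(y * x ** p for x, y in zip(xs, ys))
        A = [[n, S(1), S(2)], [S(1), S(2), S(3)], [S(2), S(3), S(4)]]
        rhs = [T(0), T(1), T(2)]
        for i in range(3):                       # Gaussian elimination w/ pivoting
            p = max(range(i, 3), key=lambda r: abs(A[r][i]))
            A[i], A[p], rhs[i], rhs[p] = A[p], A[i], rhs[p], rhs[i]
            for r in range(i + 1, 3):
                f = A[r][i] / A[i][i]
                A[r] = [arj - f * aij for arj, aij in zip(A[r], A[i])]
                rhs[r] -= f * rhs[i]
        coef = [0.0, 0.0, 0.0]
        for i in (2, 1, 0):                      # back substitution
            coef[i] = (rhs[i] - sum(A[i][j] * coef[j]
                                    for j in range(i + 1, 3))) / A[i][i]
        return coef

    # Hypothetical "expensive model": strike probability vs. abort time
    truth = lambda t: 0.30 - 0.04 * t + 0.002 * t * t
    xs = [0, 2, 4, 6, 8, 10]
    ys = [truth(x) for x in xs]
    a, b, c = fit_quadratic(xs, ys)   # surrogate recovers the generating coefficients
    ```

    Once fitted, the surrogate can be evaluated thousands of times in the overall ascent-abort risk model at negligible cost.
    
    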

  5. System analysis in rotorcraft design: The past decade

    NASA Technical Reports Server (NTRS)

    Galloway, Thomas L.

    1988-01-01

    Rapid advances in the technology of electronic digital computers and the need for an integrated synthesis approach in developing future rotorcraft programs have led to increased emphasis on systems analysis techniques in rotorcraft design. The task in systems analysis is to deal with complex, interdependent, and conflicting requirements in a structured manner so that rational and objective decisions can be made. Whether the results are wisdom or rubbish depends upon the validity and, sometimes more importantly, the consistency of the inputs, the correctness of the analysis, and a sensible choice of measures of effectiveness used to draw conclusions. In rotorcraft design this means combining the design requirements, technology assessment, sensitivity analysis, and review techniques currently in use by NASA and Army organizations in developing research programs and vehicle specifications for rotorcraft. These procedures span simple graphical approaches to comprehensive analyses on large mainframe computers. Examples of recent applications to military and civil missions are highlighted.

  6. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition, which are otherwise difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  7. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by the exorbitant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice-based models. This allows for an efficient evaluation even in critical regions near a second-order phase transition, which are otherwise difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
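    The motivation for coupled finite differences, evaluating the perturbed and unperturbed stochastic models on correlated random-number streams so that the noise largely cancels in the difference, can be sketched with a generic common-random-numbers example on a toy Poisson "rate constant" (this is an illustrative analogue, not the lattice-kMC estimator of the paper):

    ```python
    import math
    import random

    def event_count(rate, uniforms, t_end=10.0):
        """Number of Poisson-process events before t_end, driven by a fixed
        stream of uniforms (inter-arrival times are -ln(u)/rate)."""
        t, n = 0.0, 0
        for u in uniforms:
            t += -math.log(u) / rate
            if t >= t_end:
                break
            n += 1
        return n

    random.seed(7)
    rate, h, reps = 1.0, 0.05, 2000
    streams = [[random.random() for _ in range(40)] for _ in range(3 * reps)]
    base, alt_a, alt_b = streams[:reps], streams[reps:2 * reps], streams[2 * reps:]

    # Coupled: the same random stream drives perturbed and unperturbed runs
    coupled = [event_count(rate + h, s) - event_count(rate, s) for s in base]
    # Uncoupled: independent streams -- same expectation, far noisier
    uncoupled = [event_count(rate + h, a) - event_count(rate, b)
                 for a, b in zip(alt_a, alt_b)]

    d_coupled = sum(coupled) / (reps * h)   # estimates dE[N]/d(rate) = t_end = 10

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    ```

    The coupled differences have a per-sample variance of roughly 0.5 here versus about 20 for the uncoupled ones, which is why correlated sampling makes stochastic finite differences affordable.
    
    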

  8. Solvatochromic and Fluorogenic Dyes as Environment-Sensitive Probes: Design and Biological Applications.

    PubMed

    Klymchenko, Andrey S

    2017-02-21

    Fluorescent environment-sensitive probes are specially designed dyes that change their fluorescence intensity (fluorogenic dyes) or color (e.g., solvatochromic dyes) in response to changes in the polarity, viscosity, and molecular order of their microenvironment. The studies of the past decade, including those of our group, have shown that these molecules have become universal tools in fluorescence sensing and imaging. In fact, any biomolecular interaction or change in biomolecular organization results in modification of the local microenvironment, which can be directly monitored by these types of probes. In this Account, the main examples of environment-sensitive probes are summarized according to their design concepts. Solvatochromic dyes constitute a large class of environment-sensitive probes which change their color in response to polarity. Generally, they are push-pull dyes undergoing intramolecular charge transfer. Emission of their highly polarized excited state shifts to the red in more polar solvents. Excited-state intramolecular proton transfer is the second key concept for designing efficient solvatochromic dyes, which respond to the microenvironment by changing the relative intensity of their two emissive tautomeric forms. Owing to their sensitivity to polarity and hydration, solvatochromic dyes have been successfully applied to biological membranes for studying lipid domains (rafts), apoptosis and endocytosis. As fluorescent labels, solvatochromic dyes can detect practically any type of biomolecular interaction, involving proteins, nucleic acids and biomembranes, because the binding event excludes local water molecules from the interaction site. On the other hand, fluorogenic probes usually exploit intramolecular rotation (conformational change) as a design concept, with molecular rotors being the main representatives. These probes have been particularly efficient for imaging viscosity and lipid order in biomembranes as well as for lighting up biomolecular targets, such as antibodies

  9. On the variational data assimilation problem solving and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arcucci, Rossella; D'Amore, Luisa; Pistoia, Jenny; Toumi, Ralf; Murli, Almerico

    2017-04-01

    We consider the Variational Data Assimilation (VarDA) problem in an operational framework, namely, as it results when it is employed for the analysis of temperature and salinity variations of data collected in closed and semi-closed seas. We present a computing approach to solve the main computational kernel at the heart of the VarDA problem, which outperforms the technique currently employed by operational oceanographic software. The new approach is obtained by means of Tikhonov regularization. We provide a sensitivity analysis of this approach and also study its performance in terms of the accuracy gain in the computed solution. We provide validations on two realistic oceanographic data sets.
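    The regularization step can be sketched on a tiny ill-conditioned least-squares problem (the matrix, data, and regularization weight below are invented; the operational kernel is vastly larger):

    ```python
    def tikhonov_solve2(A, b, lam):
        """Solve the regularized normal equations (A^T A + lam I) x = A^T b
        for a 2-unknown system by Cramer's rule."""
        m = len(A)
        ata = [[sum(A[k][i] * A[k][j] for k in range(m)) + (lam if i == j else 0.0)
                for j in range(2)] for i in range(2)]
        atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
        det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
        return [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
                (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

    # Nearly collinear columns: the unregularized normal equations are
    # ill-conditioned, so small data perturbations swing the solution wildly
    A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
    b = [2.0, 2.0001, 1.9999]
    x = tikhonov_solve2(A, b, lam=1e-3)   # stays close to the clean answer [1, 1]
    ```

    The penalty lam trades a small bias for a dramatic reduction in the solution's sensitivity to noise in b, which is the accuracy-gain trade-off the paper analyzes.
    
    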

  10. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  11. Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis.

    PubMed

    Gnambs, Timo; Kaspar, Kai

    2015-12-01

    In surveys, individuals tend to misreport behaviors that are in contrast to prevalent social norms or regulations. Several design features of the survey procedure have been suggested to counteract this problem; particularly, computerized surveys are supposed to elicit more truthful responding. This assumption was tested in a meta-analysis of survey experiments reporting 460 effect sizes (total N = 125,672). Self-reported prevalence rates of several sensitive behaviors for which motivated misreporting has been frequently observed were compared across self-administered paper-and-pencil versus computerized surveys. The results revealed that computerized surveys led to significantly more reporting of socially undesirable behaviors than comparable surveys administered on paper. This effect was strongest for highly sensitive behaviors and surveys administered individually to respondents. Moderator analyses did not identify interviewer effects or benefits of audio-enhanced computer surveys. The meta-analysis highlighted the advantages of computerized survey modes for the assessment of sensitive topics.
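    The core pooling step of such a meta-analysis can be sketched as a fixed-effect inverse-variance average of per-study effect sizes (the effect sizes and variances below are made up; the paper's analysis additionally handles moderators):

    ```python
    def pool_fixed_effect(effects, variances):
        """Fixed-effect meta-analysis: inverse-variance weighted mean
        and its standard error."""
        weights = [1.0 / v for v in variances]
        wsum = sum(weights)
        est = sum(w * e for w, e in zip(weights, effects)) / wsum
        se = (1.0 / wsum) ** 0.5
        return est, se

    # Hypothetical per-study effects (computerized vs. paper) with variances
    effects = [0.35, 0.10, 0.22, 0.40]
    variances = [0.02, 0.05, 0.01, 0.08]
    est, se = pool_fixed_effect(effects, variances)
    ```

    Precise studies (small variance) dominate the pooled estimate; moderator analyses then ask whether study-level features such as survey mode explain the residual spread.
    
    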

  12. A flexible and highly pressure-sensitive graphene-polyurethane sponge based on fractured microstructure design.

    PubMed

    Yao, Hong-Bin; Ge, Jin; Wang, Chang-Feng; Wang, Xu; Hu, Wei; Zheng, Zhi-Jun; Ni, Yong; Yu, Shu-Hong

    2013-12-10

    A fractured microstructure design: a new type of piezoresistive sensor with ultrahigh pressure sensitivity (0.26 kPa(-1)) in the low-pressure range (<2 kPa) and a minimum detectable pressure of 9 Pa has been fabricated using a fractured microstructure in a graphene-nanosheet-wrapped polyurethane (PU) sponge. This low-cost and easily scalable graphene-wrapped PU sponge pressure sensor has potential application in high-spatial-resolution artificial skin without complex nanostructure design.

  13. Sensitivity analysis of fine sediment models using heterogeneous data

    NASA Astrophysics Data System (ADS)

    Kamel, A. M. Yousif; Bhattacharya, B.; El Serafy, G. Y.; van Kessel, T.; Solomatine, D. P.

    2012-04-01

    Sediments play an important role in many aquatic systems. Their transportation and deposition have significant implications for morphology, navigability and water quality. Understanding the dynamics of sediment transportation in time and space is therefore important for designing interventions and making management decisions. This research is related to the fine sediment dynamics in the Dutch coastal zone, which is subject to human interference through constructions, fishing, navigation, sand mining, etc. These activities affect the natural flow of sediments and sometimes lead to environmental concerns or affect the siltation rates in harbours and fairways. Numerical models are widely used in studying fine sediment processes. The accuracy of numerical models depends upon the estimation of model parameters through calibration. Studying the model uncertainty related to these parameters is important for improving the spatio-temporal prediction of suspended particulate matter (SPM) concentrations, and for determining the limits of their accuracy. This research deals with the analysis of a 3D numerical model of the North Sea covering the Dutch coast using the Delft3D modelling tool (developed at Deltares, The Netherlands). The methodology in this research was divided into three main phases. The first phase focused on analysing the performance of the numerical model in simulating SPM concentrations near the Dutch coast by comparing the model predictions with SPM concentrations estimated from NASA's MODIS sensors at different time scales. The second phase focused on carrying out a sensitivity analysis of model parameters. Four model parameters were identified for the uncertainty and sensitivity analysis: the sedimentation velocity, the critical shear stress above which re-suspension occurs, the Shields shear stress for re-suspension pick-up, and the re-suspension pick-up factor. By adopting different values of these parameters the numerical model was run and a comparison between the

  14. Planar Inlet Design and Analysis Process (PINDAP)

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Gruber, Christopher R.

    2005-01-01

    The Planar Inlet Design and Analysis Process (PINDAP) is a collection of software tools that allow the efficient aerodynamic design and analysis of planar (two-dimensional and axisymmetric) inlets. The aerodynamic analysis is performed using the Wind-US computational fluid dynamics (CFD) program. A major element in PINDAP is a Fortran 90 code named PINDAP that can establish the parametric design of the inlet and efficiently model the geometry and generate the grid for CFD analysis with design changes to those parameters. The use of PINDAP is demonstrated for subsonic, supersonic, and hypersonic inlets.

  15. Species sensitivity analysis of heavy metals to freshwater organisms.

    PubMed

    Xin, Zheng; Wenchao, Zang; Zhenguang, Yan; Yiguo, Hong; Zhengtao, Liu; Xianliang, Yi; Xiaonan, Wang; Tingting, Liu; Liming, Zhou

    2015-10-01

    Acute toxicity data of six heavy metals [Cu, Hg, Cd, Cr(VI), Pb, Zn] to aquatic organisms were collected and screened. Species sensitivity distribution (SSD) curves for vertebrates and invertebrates were constructed separately with a log-logistic model. Comprehensive comparisons of the sensitivities of species at different trophic levels to the six typical heavy metals were performed. The results indicated that invertebrate taxa exhibited higher sensitivity than vertebrates to each heavy metal. With respect to the same taxa, Cu had the most adverse effect on vertebrates, followed by Hg, Cd, Zn and Cr. When datasets from all species were included, Cu and Hg were still more toxic than the others. In particular, the toxicities of Pb to vertebrates and fish were complicated, as the SSD curves of Pb intersected with those of other heavy metals, while the SSD curve of Pb constructed from all species no longer crossed the others. The hazardous concentrations for 5% of species (HC5) were derived to determine the concentration protecting 95% of species. The HC5 values of the six heavy metals were in the descending order Zn > Pb > Cr > Cd > Hg > Cu, indicating toxicities in the opposite order. Moreover, potentially affected fractions were calculated to assess the ecological risks of the heavy metals at selected concentrations. Evaluation of the sensitivities of species at various trophic levels and toxicity analysis of heavy metals are necessary prior to the derivation of water quality criteria and further environmental protection.
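    The HC5 derivation can be sketched by fitting a log-logistic SSD to species toxicity values and inverting it at the 5th percentile (the toxicity values below are invented; real derivations use curated acute data as in the paper):

    ```python
    import math

    def fit_loglogistic(tox):
        """Fit F(x) = 1 / (1 + (x/alpha)^(-beta)) by linear regression of
        logit(plotting position) on ln(x), using Hazen positions (i-0.5)/n."""
        xs = sorted(tox)
        n = len(xs)
        pts = [(math.log(x), math.log(p / (1 - p)))
               for i, x in enumerate(xs) for p in [(i + 0.5) / n]]
        mx = sum(u for u, _ in pts) / n
        my = sum(v for _, v in pts) / n
        beta = (sum((u - mx) * (v - my) for u, v in pts)
                / sum((u - mx) ** 2 for u, _ in pts))
        alpha = math.exp(mx - my / beta)   # from intercept = -beta * ln(alpha)
        return alpha, beta

    def hc(alpha, beta, frac=0.05):
        """Inverse CDF: concentration hazardous to a fraction `frac` of species."""
        return alpha * (frac / (1 - frac)) ** (1 / beta)

    # Hypothetical acute LC50 values (mg/L) for 8 species
    tox = [0.8, 1.5, 2.2, 4.0, 6.5, 12.0, 25.0, 60.0]
    alpha, beta = fit_loglogistic(tox)
    hc5 = hc(alpha, beta)   # lies below the most sensitive tested species
    ```
    
    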

  16. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators

    NASA Astrophysics Data System (ADS)

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-01

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to those of parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest-resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq-1, while the lowest-resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq-1. Further comparisons of the states in terms of image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the higher-sensitivity states, while the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators.

  17. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.

    PubMed

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-21

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to those of parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest-resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq(-1), while the lowest-resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq(-1). Further comparisons of the states in terms of image quality are conducted through Monte Carlo simulation of a hot-spot phantom containing five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the higher-sensitivity states, while the smaller spots prefer the higher-resolution states. In conclusion, the proposed idea can be an effective approach to MRMS design for parallel-hole SPECT collimators.
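    The contrast-to-noise comparison used to rank collimation states can be sketched with one common CNR definition (the region definitions and pixel values below are invented):

    ```python
    def cnr(spot_pixels, background_pixels):
        """Contrast-to-noise ratio: (mean_spot - mean_bg) / std_bg."""
        ms = sum(spot_pixels) / len(spot_pixels)
        mb = sum(background_pixels) / len(background_pixels)
        var_b = (sum((p - mb) ** 2 for p in background_pixels)
                 / (len(background_pixels) - 1))
        return (ms - mb) / var_b ** 0.5

    # The same hypothetical spot imaged in a high-sensitivity (smoother)
    # state and a high-resolution (sharper but noisier) state
    high_sens = cnr([130, 128, 131, 129], [100, 102, 99, 101, 98, 100])
    high_res  = cnr([150, 146, 152, 148], [100, 108, 93, 104, 96, 99])
    ```

    For a large spot, the extra counts of the high-sensitivity state suppress background noise and win on CNR, even though its contrast is lower; small spots reverse the preference.
    
    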

  18. Initial Multidisciplinary Design and Analysis Framework

    NASA Technical Reports Server (NTRS)

    Ozoroski, L. P.; Geiselhart, K. A.; Padula, S. L.; Li, W.; Olson, E. D.; Campbell, R. L.; Shields, E. W.; Berton, J. J.; Gray, J. S.; Jones, S. M.; Naiman, C. G.; Seidel, J. A.; Moore, K. T.; Naylor, B. A.; Townsend, S.

    2010-01-01

    Within the Supersonics (SUP) Project of the Fundamental Aeronautics Program (FAP), an initial multidisciplinary design & analysis framework has been developed. A set of low- and intermediate-fidelity discipline design and analysis codes were integrated within a multidisciplinary design and analysis framework and demonstrated on two challenging test cases. The first test case demonstrates an initial capability to design for low boom and performance. The second test case demonstrates rapid assessment of a well-characterized design. The current system has been shown to greatly increase the design and analysis speed and capability, and many future areas for development were identified. This work has established a state-of-the-art capability for immediate use by supersonic concept designers and systems analysts at NASA, while also providing a strong base to build upon for future releases as more multifidelity capabilities are developed and integrated.

  19. Sensitivity-analysis techniques: self-teaching curriculum

    SciTech Connect

    Iman, R.L.; Conover, W.J.

    1982-06-01

    This self-teaching curriculum on sensitivity-analysis techniques consists of three parts: (1) use of the Latin Hypercube Sampling Program (Iman, Davenport and Ziegler, Latin Hypercube Sampling (Program User's Guide), SAND79-1473, January 1980); (2) use of the Stepwise Regression Program (Iman, et al., Stepwise Regression with PRESS and Rank Regression (Program User's Guide), SAND79-1472, January 1980); and (3) application of the procedures to sensitivity and uncertainty analyses of the groundwater transport model MWFT/DVM (Campbell, Iman and Reeves, Risk Methodology for Geologic Disposal of Radioactive Waste - Transport Model Sensitivity Analysis, SAND80-0644, NUREG/CR-1377, June 1980; Campbell, Longsine, and Reeves, The Distributed Velocity Method of Solving the Convective-Dispersion Equation, SAND80-0717, NUREG/CR-1376, July 1980). This curriculum is one in a series developed by Sandia National Laboratories for transfer of the capability to use the technology developed under the NRC-funded High Level Waste Methodology Development Program.
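    Latin hypercube sampling, the first technique in the curriculum, can be sketched in a few lines (this is the generic algorithm, not the SAND79-1473 program itself):

    ```python
    import random

    def latin_hypercube(n, dims, seed=0):
        """n points in [0,1)^dims: each axis is split into n equal strata and
        each stratum is used exactly once, with a random pairing across axes."""
        rng = random.Random(seed)
        cols = []
        for _ in range(dims):
            col = [(i + rng.random()) / n for i in range(n)]  # one draw per stratum
            rng.shuffle(col)                                  # randomize the pairing
            cols.append(col)
        return [tuple(col[i] for col in cols) for i in range(n)]

    sample = latin_hypercube(10, 2)
    ```

    The stratification guarantees full coverage of each input's range with far fewer model runs than simple random sampling, which is why it pairs naturally with the stepwise-regression step of the curriculum.
    
    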

  20. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
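    The sensitivity-equation idea behind the direct method can be sketched on a single first-order reaction, where s = ∂c/∂k satisfies its own linear ODE driven by the state solution (the one-reaction model is illustrative, and plain RK4 stands in for the stiff LSODE integrator used by LSENS):

    ```python
    import math

    def rk4_step(f, y, t, dt):
        """One classical Runge-Kutta step for y' = f(t, y), y a list."""
        k1 = f(t, y)
        k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
        k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
        k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
        return [yi + dt / 6 * (a + 2 * b + 2 * c + d)
                for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

    k, c0 = 2.0, 1.0

    def rhs(t, y):
        c, s = y            # state c and sensitivity s = dc/dk
        return [-k * c,     # dc/dt = -k c           (first-order decay)
                -k * s - c] # ds/dt = d/dk(dc/dt) = -k s - c

    y, t, dt = [c0, 0.0], 0.0, 1e-3
    while t < 1.0 - 1e-12:
        y = rk4_step(rhs, y, t, dt)
        t += dt
    c_num, s_num = y
    # Analytic check: c = c0 e^{-kt}, so dc/dk = -t c0 e^{-kt}
    ```

    LSENS's decoupled direct method exploits the fact that the sensitivity ODEs share the Jacobian of the state equations, so the same factored matrices from the implicit stiff solver can be reused.
    
    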

  1. An analytic method for sensitivity analysis of complex systems

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping Alexandre; Li, Wei; Cai, Xu

    2017-03-01

    Sensitivity analysis is concerned with understanding how the model output depends on uncertainties (variances) in inputs and with identifying which inputs contribute most to the prediction imprecision. Determining the uncertainty in the output is the most crucial step in sensitivity analysis. In the present paper, an analytic expression that exactly evaluates the uncertainty in output as a function of the output's derivatives and the inputs' central moments is first derived for general multivariate models with a given input-output relationship, in terms of a Taylor series expansion. A γ-order relative uncertainty for output, denoted by Rvγ, is introduced to quantify the contributions of input uncertainty of different orders. On this basis, it is shown that the widely used approximation that retains only the first-order contribution from the variance of the input variables can satisfactorily express the output uncertainty only when the input variance is very small or the input-output function is almost linear. The analytic formula is applied to power grid and economic systems, where the sensitivities of an actual power output model and of the Economic Order Quantity model are analyzed, and the importance of each input variable to the model output is quantified.
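
    A minimal numerical check of the paper's central point, using a deliberately simple nonlinear model (y = exp(x), not one of the paper's applications): the first-order variance approximation matches the exact output variance only when the input variance is small:

```python
import numpy as np

# First-order (delta-method) variance estimate for y = f(x), x ~ N(mu, sigma^2),
# compared against the exact lognormal variance and Monte Carlo sampling.
rng = np.random.default_rng(0)

def taylor_var(f_prime_mu, sigma):
    return f_prime_mu**2 * sigma**2      # Var[y] ~ f'(mu)^2 * Var[x]

mu = 0.0
f = np.exp                                # a deliberately nonlinear model

for sigma in (0.05, 1.0):
    approx = taylor_var(np.exp(mu), sigma)
    exact = np.exp(2*mu + sigma**2) * (np.exp(sigma**2) - 1)  # lognormal variance
    mc = f(rng.normal(mu, sigma, 200_000)).var()
    print(sigma, approx, exact, mc)
```

    For sigma = 0.05 the first-order estimate is within a fraction of a percent of the exact variance; for sigma = 1.0 it understates it by more than a factor of four, which is exactly the regime the paper's higher-order terms address.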

  2. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is taking a large leap forward, promising delivery of much more detailed information. However, this direction of development has drawn even more attention to noise and other deleterious effects in the data, because reducing the fundamental limitations that spectral detail places on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures that have reduced sensitivity to such effects. We discuss some of the fundamental principles that give analysis algorithms such reduced sensitivity. One such analysis procedure is described, together with an example analysis of a data set illustrating this effect.

  3. Sensitivity Analysis of Hardwired Parameters in GALE Codes

    SciTech Connect

    Geelhood, Kenneth J.; Mitchell, Mark R.; Droppo, James G.

    2008-12-01

    The U.S. Nuclear Regulatory Commission asked Pacific Northwest National Laboratory to provide a data-gathering plan for updating the hardwired data tables and parameters of the Gaseous and Liquid Effluents (GALE) codes to reflect current nuclear reactor performance. This would enable the GALE codes to make more accurate predictions about the normal radioactive release source term applicable to currently operating reactors and to the cohort of reactors planned for construction in the next few years. A sensitivity analysis was conducted to define the importance of hardwired parameters in terms of each parameter’s effect on the emission rate of the nuclides that are most important in computing potential exposures. The results of this study were used to compile a list of parameters that should be updated based on the sensitivity of these parameters to outputs of interest.

  4. SENSITIVITY ANALYSIS OF A TPB DEGRADATION RATE MODEL

    SciTech Connect

    Crawford, C.; Edwards, T.; Wilmarth, B.

    2006-08-01

    A tetraphenylborate (TPB) degradation model for use in aggregating Tank 48 material in Tank 50 is developed in this report. The influential factors for this model are listed as the headings in the table below. A sensitivity study of the model's predictions over intervals of values for the influential factors was conducted. These intervals bound the levels of these factors expected during Tank 50 aggregations. The results from the sensitivity analysis were used to identify settings for the influential factors that yielded the largest predicted TPB degradation rate. These factor settings are therefore considered to yield the "worst-case" scenario for the TPB degradation rate during Tank 50 aggregation, and, as such, they would define the test conditions that should be studied in a waste qualification program whose dual purpose would be the investigation of the introduction of Tank 48 material for aggregation in Tank 50 and the bounding of TPB degradation rates for such aggregations.

  5. Design of plasmonic photonic crystal resonant cavities for polarization sensitive infrared photodetectors

    NASA Astrophysics Data System (ADS)

    Rosenberg, Jessie; Shenoi, Rajeev V.; Krishna, Sanjay; Painter, Oskar

    2010-02-01

    We design a polarization-sensitive resonator for use in midinfrared photodetectors, utilizing a photonic crystal cavity and a single or double-metal plasmonic waveguide to achieve enhanced detector efficiency due to superior optical confinement within the active region. As the cavity is highly frequency and polarization-sensitive, this resonator structure could be used in chip-based infrared spectrometers and cameras that can distinguish among different materials and temperatures to a high degree of precision.

  6. Design of guided Bloch surface wave resonance bio-sensors with high sensitivity

    NASA Astrophysics Data System (ADS)

    Kang, Xiu-Bao; Wen, Li-Wei; Wang, Zhi-Guo

    2017-01-01

    The sensing performance of bio-sensors based on guided Bloch surface wave (BSW) resonance (GBR) is studied. GBR is realized by coupling the propagating electromagnetic wave with BSW on one side of a one-dimensional photonic crystal slab via the grating on the other side. The sensitivity of the designed bio-sensors is proportional to the grating constant when the wavelength spectrum is analyzed, and inversely proportional to the normal wave vector of the incident electromagnetic wave when the angular spectrum is resolved. For a GBR bio-sensor designed to operate near 70° angle of incidence from air, the angular sensitivity is very high, reaching 128 deg RIU-1. The sensitivity can be substantially increased by designing bio-sensors for operating at larger angles of incidence.

  7. Biosphere dose conversion Factor Importance and Sensitivity Analysis

    SciTech Connect

    M. Wasiolek

    2004-10-15

    This report presents importance and sensitivity analysis for the environmental radiation model for Yucca Mountain, Nevada (ERMYN). ERMYN is a biosphere model supporting the total system performance assessment (TSPA) for the license application (LA) for the Yucca Mountain repository. This analysis concerns the output of the model, biosphere dose conversion factors (BDCFs) for the groundwater, and the volcanic ash exposure scenarios. It identifies important processes and parameters that influence the BDCF values and distributions, enhances understanding of the relative importance of the physical and environmental processes on the outcome of the biosphere model, includes a detailed pathway analysis for key radionuclides, and evaluates the appropriateness of selected parameter values that are not site-specific or have large uncertainty.

  8. Design, validation, and absolute sensitivity of a novel test for the molecular detection of avian pneumovirus.

    PubMed

    Cecchinato, Mattia; Catelli, Elena; Savage, Carol E; Jones, Richard C; Naylor, Clive J

    2004-11-01

    This study describes attempts to increase and measure sensitivity of molecular tests to detect avian pneumovirus (APV). Polymerase chain reaction (PCR) diagnostic tests were designed for the detection of nucleic acid from an A-type APV genome. The objective was selection of PCR oligonucleotide combinations, which would provide the greatest test sensitivity and thereby enable optimal detection when used for later testing of field materials. Relative and absolute test sensitivities could be determined because of laboratory access to known quantities of purified full-length DNA copies of APV genome derived from the same A-type virus. Four new nested PCR tests were designed in the fusion (F) protein (2 tests), small hydrophobic (SH) protein (1 test), and nucleocapsid (N) protein (1 test) genes and compared with an established test in the attachment (G) protein gene. Known amounts of full-length APV genome were serially diluted 10-fold, and these dilutions were used as templates for the different tests. Sensitivities were found to differ between the tests, the most sensitive being the established G test, which proved able to detect 6,000 copies of the G gene. The G test contained predominantly pyrimidine residues at its 3' termini, and because of this, oligonucleotides for the most sensitive F test were modified to incorporate the same residue types at their 3' termini. This was found to increase sensitivity, so that after full 3' pyrimidine substitutions, the F test became able to detect 600 copies of the F gene.

  9. Blade design and analysis using a modified Euler solver

    NASA Technical Reports Server (NTRS)

    Leonard, O.; Vandenbraembussche, R. A.

    1991-01-01

    An iterative method for blade design based on an Euler solver, described in an earlier paper, is used to design compressor and turbine blades that provide shock-free transonic flows. The method converges rapidly and indicates how sensitive the flow is to small modifications of the blade geometry, a sensitivity that the classical iterative use of analysis methods might not be able to capture. The relationship between the required Mach number distribution and the resulting geometry is discussed. Examples show how geometrical constraints imposed upon the blade shape can be respected by using free geometrical parameters or by relaxing the required Mach number distribution. The same code is used both for the design of the required geometry and for off-design calculations. Examples illustrate the difficulty of designing blade shapes that perform optimally outside the design point as well.

  10. Web Page Design and Network Analysis.

    ERIC Educational Resources Information Center

    Wan, Hakman A.; Chung, Chi-wai

    1998-01-01

    Examines problems in Web-site design from the perspective of network analysis. In view of the similarity between the hypertext structure of Web pages and a generic network, network analysis presents concepts and theories that provide insight for Web-site design. Describes the problem of home-page location and control of number of Web pages and…

  11. Distributed Design and Analysis of Computer Experiments

    SciTech Connect

    Doak, Justin

    2002-11-11

    DDACE is a C++ object-oriented software library for the design and analysis of computer experiments. DDACE can be used to generate samples from a variety of sampling techniques. These samples may be used as input to an application code. DDACE also contains statistical tools such as response surface models and correlation coefficients to analyze input/output relationships between variables in an application code. DDACE can generate input values for uncertain variables within a user's application. For example, a user might like to vary a temperature variable as well as some material variables in a series of simulations. Through the series of simulations the user might be looking for optimal settings of parameters based on some user criteria, or the user may be interested in the sensitivity to input variability shown by an output variable. In either case, the user may provide information about the suspected ranges and distributions of a set of input variables, along with a sampling scheme, and DDACE will generate input points based on these specifications. The input values generated by DDACE and the one or more outputs computed through the user's application code can be analyzed with a variety of statistical methods. This can lead to a wealth of information about the relationships between the variables in the problem. While statistical and mathematical packages may be employed to carry out the analysis on the input/output relationships, DDACE also contains some tools for analyzing the simulation data. DDACE incorporates a software package called MARS (Multivariate Adaptive Regression Splines), developed by Jerome Friedman. MARS is used for generating a spline surface fit of the data. With MARS, a model simplification may be calculated using the input and corresponding output values for the user's application problem. The MARS grid data may be used for generating 3-dimensional response surface plots of the simulation data. DDACE also contains an implementation of an
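
    A sketch of this sampling-then-correlation workflow using SciPy's Latin hypercube sampler in place of DDACE's own C++ API (the input ranges and the stand-in "application code" below are invented for illustration):

```python
import numpy as np
from scipy.stats import qmc

# Generate a Latin hypercube design over a temperature variable and two
# material variables, run a toy application code, and rank input/output
# relationships by correlation coefficient.
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=200)
X = qmc.scale(unit, l_bounds=[300.0, 0.1, 1.0], u_bounds=[400.0, 0.5, 2.0])

def application_code(t, a, b):          # hypothetical simulation response
    return 0.01 * t + 5.0 * a - 0.2 * b

y = application_code(X[:, 0], X[:, 1], X[:, 2])
corrs = [np.corrcoef(X[:, j], y)[0, 1] for j in range(3)]
print(corrs)
```

    The correlation magnitudes recover the relative influence of each input on the output, which is the kind of input/output relationship information DDACE's statistical tools provide.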

  12. Structural Analysis in a Conceptual Design Framework

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Robinson, Jay H.; Eldred, Lloyd B.

    2012-01-01

    Supersonic aircraft designers must shape the outer mold line of the aircraft to improve multiple objectives, such as mission performance, cruise efficiency, and sonic-boom signatures. Conceptual designers have demonstrated an ability to assess these objectives for a large number of candidate designs. Other critical objectives and constraints, such as weight, fuel volume, aeroelastic effects, and structural soundness, are more difficult to address during the conceptual design process. The present research adds both static structural analysis and sizing to an existing conceptual design framework. The ultimate goal is to include structural analysis in the multidisciplinary optimization of a supersonic aircraft. Progress towards that goal is discussed and demonstrated.

  13. Sensitivity Analysis of OECD Benchmark Tests in BISON

    SciTech Connect

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.; Williamson, Richard

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
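
    The correlation-based part of such a study can be sketched as follows; the inputs, response model, and coefficients here are invented stand-ins, not the BISON/Dakota benchmark. Spearman rank correlation, computed from ranks with plain NumPy, captures the monotone-but-nonlinear effect that a Pearson coefficient would understate:

```python
import numpy as np

# Sample-based sensitivity analysis sketch: random inputs standing in for
# core boundary conditions, manufacturing tolerances, and fuel properties.
rng = np.random.default_rng(2)
n = 300
power = rng.uniform(15, 25, n)
gap = rng.uniform(80, 120, n)
cond = rng.uniform(2.5, 3.5, n)

# Toy centerline-temperature response, strongly nonlinear in the gap variable
temp = 600 + 20.0 * power + 0.0005 * gap**3 - 200.0 * cond

def ranks(a):
    return np.argsort(np.argsort(a))    # ordinal ranks (no ties here)

# Spearman correlation = Pearson correlation of the ranks
rho = [np.corrcoef(ranks(x), ranks(temp))[0, 1] for x in (power, gap, cond)]
print(rho)
```

    The rank correlations order the inputs by influence on the response, the same screening role they play in the report's 300-sample study.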

  14. Parametric sensitivity analysis of an agro-economic model of management of irrigation water

    NASA Astrophysics Data System (ADS)

    El Ouadi, Ihssan; Ouazar, Driss; El Menyari, Younesse

    2015-04-01

    The current work aims to build an analysis and decision-support tool for policy options concerning the optimal allocation of water resources, while allowing a better reflection on the valuation of water by the agricultural sector in particular. A model disaggregated by farm type was developed for the rural town of Ait Ben Yacoub, located in eastern Morocco. This model integrates economic, agronomic, and hydraulic data and simulates the agricultural gross margin across the area under changing public policy and climatic conditions, taking into account the competition for collective resources. To identify the model input parameters that most influence the results, a parametric sensitivity analysis was performed with the "One-Factor-At-A-Time" approach within the "Screening Designs" method. Preliminary results of this analysis show that, of the 10 parameters analyzed, 6 significantly affect the objective function of the model; in order of influence these are: (i) coefficient of crop yield response to water, (ii) average daily weight gain of livestock, (iii) livestock reproduction rate, (iv) maximum yield of crops, (v) supply of irrigation water, and (vi) precipitation. These 6 parameters register sensitivity indices ranging between 0.22 and 1.28. These results reveal high uncertainties in these parameters that can dramatically skew the results of the model, and indicate the need to pay particular attention to their estimates. Keywords: water, agriculture, modeling, optimal allocation, parametric sensitivity analysis, Screening Designs, One-Factor-At-A-Time, agricultural policy, climate change.
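
    The One-Factor-At-A-Time idea can be sketched generically (the response function, parameter names, and values below are hypothetical, not the paper's agro-economic model): each parameter is perturbed alone while the others are held at baseline, and the ratio of relative output change to relative input change serves as the sensitivity index:

```python
# One-Factor-At-A-Time screening sketch with a toy gross-margin response.
def gross_margin(p):
    # hypothetical response: crop revenue minus irrigation-water cost
    return p["yield"] * p["price"] - p["water_cost"] * p["supply"]

base = {"yield": 4.0, "price": 200.0, "water_cost": 0.05, "supply": 3000.0}
y0 = gross_margin(base)

indices = {}
for name in base:
    p = dict(base)
    p[name] *= 1.10                     # +10% perturbation, others held fixed
    # sensitivity index: relative output change / relative input change
    indices[name] = abs(gross_margin(p) - y0) / abs(y0) / 0.10
print(indices)
```

    Parameters with indices near or above 1 (here the yield and price terms) are the ones whose estimation errors propagate strongly into the objective function, matching the screening role the index plays in the study.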

  15. Experiment Design and Analysis Guide - Neutronics & Physics

    SciTech Connect

    Misti A Lillo

    2014-06-01

    The purpose of this guide is to provide a consistent, standardized approach to performing neutronics/physics analysis for experiments inserted into the Advanced Test Reactor (ATR). This document provides neutronics/physics analysis guidance to support experiment design and analysis needs for experiments irradiated in the ATR. This guide addresses neutronics/physics analysis in support of experiment design, experiment safety, and experiment program objectives and goals. The intent of this guide is to provide a standardized approach for performing typical neutronics/physics analyses. Deviation from this guide is allowed provided that neutronics/physics analysis details are properly documented in an analysis report.

  16. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    SciTech Connect

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs.

  17. Path-sensitive analysis for reducing rollback overheads

    DOEpatents

    O'Brien, John K.P.; Wang, Kai-Ting Amy; Yamashita, Mark; Zhuang, Xiaotong

    2014-07-22

    A mechanism is provided for path-sensitive analysis for reducing rollback overheads. The mechanism receives, in a compiler, program code to be compiled to form compiled code. The mechanism divides the code into basic blocks. The mechanism then determines a restore register set for each of the one or more basic blocks to form one or more restore register sets. The mechanism then stores the one or more restore register sets such that, responsive to a rollback during execution of the compiled code, a rollback routine identifies a restore register set from the one or more restore register sets and restores the registers identified in the identified restore register set.

  18. Use of inelastic analysis in cask design

    SciTech Connect

    AMMERMAN,DOUGLAS J.; BREIVIK,NICOLE L.

    2000-05-15

    In this paper, the advantages and disadvantages of inelastic analysis are discussed. Example calculations and designs showing the implications and significance of factors affecting inelastic analysis are given. From the results described in this paper it can be seen that inelastic analysis provides an improved method for the design of casks. It can also be seen that additional code and standards work is needed to give designers guidance in the use of inelastic analysis. Development of these codes and standards is an area where there is a definite need for additional work. The authors hope that this paper will help to define the areas where that need is most acute.

  19. Sensitivity and uncertainty analysis of a polyurethane foam decomposition model

    SciTech Connect

    HOBBS,MICHAEL L.; ROBINSON,DAVID G.

    2000-03-14

    Sensitivity/uncertainty analyses are not commonly performed on complex, finite-element engineering models because the analyses are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, an analytical sensitivity/uncertainty analysis is used to determine the standard deviation and the primary factors affecting the burn velocity of polyurethane foam exposed to firelike radiative boundary conditions. The complex, finite element model has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state burn velocity calculated as the derivative of the burn front location versus time. The standard deviation of the burn velocity was determined by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation is essentially determined from a second derivative that is extremely sensitive to numerical noise. To minimize the numerical noise, 50-micron elements and approximately 1-msec time steps were required to obtain stable uncertainty results. The primary effect variable was shown to be the emissivity of the foam.
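
    A toy version of this derivative-based propagation, with a closed-form stand-in for the finite-element model (all parameter names and values are hypothetical): the output standard deviation follows from numerical derivatives and the input standard deviations, and the fractional variance contributions identify the dominant parameter:

```python
import numpy as np

# First-order uncertainty propagation by numerical differentiation:
#   sigma_y^2 ~ sum_i (dy/dp_i)^2 * sigma_i^2   for independent inputs.
def burn_velocity(p):                   # hypothetical closed-form stand-in
    emissivity, k, rho = p
    return 0.8 * emissivity + 0.05 * k / rho

p0 = np.array([0.9, 2.0, 30.0])         # baseline parameter values
sigma = np.array([0.05, 0.2, 1.5])      # input standard deviations

grads = np.empty(3)
for i in range(3):
    h = 1e-6 * p0[i]                    # relative step; too small -> noise
    pp, pm = p0.copy(), p0.copy()
    pp[i] += h
    pm[i] -= h
    grads[i] = (burn_velocity(pp) - burn_velocity(pm)) / (2 * h)

std_y = np.sqrt(np.sum((grads * sigma) ** 2))
contrib = (grads * sigma) ** 2 / std_y**2   # fractional variance contributions
print(std_y, contrib)
```

    The step size h is the sketch's analogue of the paper's numerical-noise problem: because the gradient is itself a numerical derivative, the propagated standard deviation is sensitive to how h and the underlying solver tolerances are chosen.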

  20. On options for interdisciplinary analysis and design optimization

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.; Sobieszczanski-Sobieski, J.; Padula, S. L.

    1992-01-01

    The interdisciplinary optimization of engineering systems is discussed from the standpoint of the computational alternatives available to the designer. The analysis of such systems typically requires the solution of coupled systems of nonlinear algebraic equations. The solution procedure is necessarily iterative in nature. It is shown that the system can be solved by fixed point iteration, by Newton's method, or by a combination of the two. However, the need for sensitivity analysis may affect the choice of analysis solution method. Similarly, the optimization of the system can be formulated in several ways that are discussed in the paper. It is shown that the effect of the topology of the interaction between disciplines is a key factor in the choice of analysis, sensitivity and optimization methods. Several examples are presented to illustrate the discussion.
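
    The two analysis strategies can be contrasted on a small coupled system (the pair of equations is illustrative, not from the paper): fixed-point iteration sweeps the "disciplines" in turn, while Newton's method solves the coupled residual using its Jacobian:

```python
import numpy as np

# Two coupled disciplines (hypothetical): y1 = cos(y2)/2 + 1, y2 = sin(y1)/3.
def fixed_point(tol=1e-12, max_iter=100):
    y1 = y2 = 0.0
    for k in range(max_iter):
        y1_new = np.cos(y2) / 2 + 1     # discipline 1 with y2 frozen
        y2_new = np.sin(y1_new) / 3     # discipline 2 with updated y1
        if abs(y1_new - y1) + abs(y2_new - y2) < tol:
            return np.array([y1_new, y2_new]), k + 1
        y1, y2 = y1_new, y2_new
    return np.array([y1, y2]), max_iter

def newton(tol=1e-12, max_iter=50):
    y = np.zeros(2)
    for k in range(max_iter):
        r = np.array([y[0] - np.cos(y[1]) / 2 - 1, y[1] - np.sin(y[0]) / 3])
        if np.linalg.norm(r) < tol:
            return y, k
        J = np.array([[1.0, np.sin(y[1]) / 2],
                      [-np.cos(y[0]) / 3, 1.0]])   # residual Jacobian
        y = y - np.linalg.solve(J, r)
    return y, max_iter

y_fp, n_fp = fixed_point()
y_nt, n_nt = newton()
print(y_fp, n_fp, y_nt, n_nt)
```

    Both reach the same coupled solution here; in practice the choice depends on the coupling strength between disciplines and, as the paper notes, on whether the Jacobian is needed anyway for sensitivity analysis.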

  1. A global sensitivity analysis of crop virtual water content

    NASA Astrophysics Data System (ADS)

    Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.

    2015-12-01

    The concepts of virtual water and water footprint are becoming widely used in the scientific literature and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not available. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the actual crop yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at 5x5 arc minute resolution and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted to one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of crop planting date. VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for

  2. Spacecraft design optimization using Taguchi analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1991-01-01

    The quality engineering methods of Dr. Genichi Taguchi, employing design of experiments, are important statistical tools for designing high quality systems at reduced cost. The Taguchi method was utilized to study several simultaneous parameter level variations of a lunar aerobrake structure to arrive at the lightest weight configuration. Finite element analysis was used to analyze the unique experimental aerobrake configurations selected by Taguchi method. Important design parameters affecting weight and global buckling were identified and the lowest weight design configuration was selected.
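
    A minimal Taguchi-style main-effects computation on an L4(2^3) orthogonal array (the factors and the weight response are hypothetical stand-ins for the aerobrake parameters and the finite element analysis):

```python
import numpy as np

# L4 orthogonal array: 4 runs cover 3 two-level factors, each column balanced
# with two runs per level.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

def weight(run):                         # toy response in place of the FEM
    thick, rib, material = run           # factor levels 0/1
    return 100 + 8 * thick - 5 * rib + 2 * material

y = np.array([weight(r) for r in L4])

# Main effect of each factor: mean response at level 1 minus at level 0
effects = [y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean() for j in range(3)]
best = [0 if e > 0 else 1 for e in effects]   # pick level minimizing weight
print(effects, best)
```

    Because the array is orthogonal, four runs suffice to recover each factor's main effect exactly for this additive response, which is the economy that makes the method attractive when each "run" is an expensive finite element analysis.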

  3. Theoretical Noise Analysis on a Position-sensitive Metallic Magnetic Calorimeter

    NASA Technical Reports Server (NTRS)

    Smith, Stephen J.

    2007-01-01

    We report on the theoretical noise analysis for a position-sensitive Metallic Magnetic Calorimeter (MMC), consisting of MMC read-out at both ends of a large X-ray absorber. Such devices are under consideration as alternatives to other cryogenic technologies for future X-ray astronomy missions. We use a finite-element model (FEM) to numerically calculate the signal and noise response at the detector outputs and investigate the correlations between the noise measured at each MMC coupled by the absorber. We then calculate, using the optimal filter concept, the theoretical energy and position resolution across the detector and discuss the trade-offs involved in optimizing the detector design for energy resolution, position resolution, and count rate. The results show that, theoretically, the position-sensitive MMC concept offers impressive spectral and spatial resolving capabilities compared to pixel arrays and similar position-sensitive cryogenic technologies using Transition Edge Sensor (TES) read-out.

  4. Design Through Analysis (DTA) roadmap vision.

    SciTech Connect

    Blacker, Teddy Dean; Adams, Charles R.; Hoffman, Edward L.; White, David Roger; Sjaardema, Gregory D.

    2004-10-01

    The Design through Analysis Realization Team (DART) will provide analysts with a complete toolset that reduces the time to create, generate, analyze, and manage the data generated in a computational analysis. The toolset will be both easy to learn and easy to use. The DART Roadmap Vision provides for progressive improvements that will reduce the Design through Analysis (DTA) cycle time by 90 percent over a three-year period while improving both the quality and accountability of the analyses.

  5. Panel Flutter Constraints: Analytic Sensitivities and Approximations Including Planform Shape Design Variables

    NASA Technical Reports Server (NTRS)

    Livne, Eli; Mineau, David

    1997-01-01

    Analytical sensitivities of panel flutter constraints with respect to panel shape as well as thickness and material properties are derived and numerically tested. Cases of fixed in-plane loads and cases in which in-plane loads are variable (depending on panel and overall wing shape as well as material and sizing design variables) are considered. Accuracy of approximations and range of move limits required are studied in preparation for integration with nonlinear programming/approximation concept aeroelastic design synthesis methodology.

  6. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E.

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition-regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  7. Impact of multiple matched controls on design sensitivity in observational studies.

    PubMed

    Rosenbaum, Paul R

    2013-03-01

    In an observational study, one treated subject may be matched for observed covariates to either one or several untreated controls. The common motivation for using several controls rather than one is to increase the power of a test of no effect under the doubtful assumption that matching for observed covariates suffices to remove bias from nonrandom treatment assignment. Does the choice between one or several matched controls affect the sensitivity of conclusions to violations of this doubtful assumption? With continuous responses, it is known that reducing the heterogeneity of matched pair differences reduces sensitivity to unmeasured biases, but increasing the sample size has a highly circumscribed effect on sensitivity to bias. Is the use of several controls rather than one analogous to a reduction in heterogeneity or to an increase in sample size? The issue is examined for Huber's m-statistics, including the t-test, the examination having three components: an example, asymptotic calculations using design sensitivity, and a simulation. Use of multiple controls with continuous responses yields a nontrivial reduction in sensitivity to unmeasured biases. An example looks at lead and cadmium in the blood of smokers from the 2008 National Health and Nutrition Examination Survey. A by-product of the discussion is a new result giving the design sensitivity for the permutation distribution of m-statistics.

  8. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, covering everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables, or pairs of variables, which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis and the allocation of engineering resources, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors interact dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
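
    A stripped-down version of what such a critical-factors computation might look like (the dispersed inputs, miss-distance model, and requirement threshold are all hypothetical, not the Orion tool): each dispersed input is ranked by the magnitude of its correlation with a per-run pass/fail indicator:

```python
import numpy as np

# Disperse inputs across Monte Carlo runs, flag requirement satisfaction,
# and rank inputs by correlation with the pass/fail indicator.
rng = np.random.default_rng(7)
n = 5000
mass = rng.normal(8000, 200, n)         # hypothetical dispersed inputs
thrust = rng.normal(30e3, 1.5e3, n)
wind = rng.normal(0, 5, n)

# Toy touchdown miss distance (km); requirement: land within 2 km of target
miss = 1.5 + 0.3 * wind + 2e-4 * (mass - 8000) - 1e-5 * (thrust - 30e3)
success = miss < 2.0

scores = {name: abs(np.corrcoef(x, success.astype(float))[0, 1])
          for name, x in (("mass", mass), ("thrust", thrust), ("wind", wind))}
ranked = sorted(scores, key=scores.get, reverse=True)
print(scores, ranked)
```

    Inputs at the top of the ranking are the candidate "critical factors" worth deeper analysis; the real tool goes further, estimating success probabilities and testing whether pairs of factors interact.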

  9. Parametric sensitivity analysis of avian pancreatic polypeptide (APP).

    PubMed

    Zhang, H; Wong, C F; Thacher, T; Rabitz, H

    1995-10-01

    Computer simulations utilizing a classical force field have been widely used to study biomolecular properties. It is important to identify the key force field parameters or structural groups controlling the molecular properties. In the present paper the sensitivity analysis method is applied to study how various partial charges and solvation parameters affect the equilibrium structure and free energy of avian pancreatic polypeptide (APP). The general shape of APP is characterized by its three principal moments of inertia. A molecular dynamics simulation of APP was carried out with the OPLS/Amber force field and a continuum model of solvation energy. The analysis pinpoints the parameters which have the largest (or smallest) impact on the protein equilibrium structure (i.e., the moments of inertia) or free energy. A display of the protein with its atoms colored according to their sensitivities illustrates the patterns of the interactions responsible for the protein stability. The results suggest that the electrostatic interactions play a more dominant role in protein stability than the part of the solvation effect modeled by the atomic solvation parameters.

  10. Design and Synthesis of an MOF Thermometer with High Sensitivity in the Physiological Temperature Range.

    PubMed

    Zhao, Dian; Rao, Xingtang; Yu, Jiancan; Cui, Yuanjing; Yang, Yu; Qian, Guodong

    2015-12-07

    An important result of research on mixed-lanthanide metal-organic frameworks (M'LnMOFs) is the realization of highly sensitive ratiometric luminescent thermometers. Here, we report the design and synthesis of the new M'LnMOF Tb0.80Eu0.20BPDA with high relative sensitivity in the physiological temperature regime (298-318 K). The emission intensity and luminescence lifetime were investigated and compared to those of existing materials. It was found that the temperature-dependent luminescence properties of Tb0.80Eu0.20BPDA are strongly associated with the distribution of the energy levels of the ligand. Such a property can be useful in the design of highly sensitive M'LnMOF thermometers.

  11. Capability Test Design and Analysis

    DTIC Science & Technology

    2009-01-13

    Performing organization: Joint Test and Evaluation Methodology (JTEM), Washington, DC 20301. Truncated abstract fragments: refinement process for testing in a joint environment (TIJE); review the methods and processes for an evaluation strategy refinement process. Referenced documents: Joint Environment System Design Document (SDD), JTEM Capability Test Methodology (CTM) v2.0, Event Management Plan, Test Plan, Joint Capability Evaluation (JCE

  12. Design of a portable fluoroquinolone analyzer based on terbium-sensitized luminescence

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A portable fluoroquinolone (FQ) analyzer is developed in this laboratory based on terbium-sensitized luminescence (TSL). The optical, hardware and software design aspects are described in detail. A 327-nm light emitting diode (LED) is used in pulsed mode as the excitation source; and a photomultip...

  13. Design of a turbojet engine controller via eigenvalue/eigenvector assignment - A new sensitivity formulation

    NASA Technical Reports Server (NTRS)

    Liberty, S. R.; Maynard, R. A.; Mielke, R. R.

    1979-01-01

    This brief paper summarizes the approach the authors will take in designing a feedback controller for the F-100 turbofan engine. The technique to be utilized simultaneously realizes dominant closed-loop eigenvalues, approximates specified modal behavior, and achieves low eigensystem sensitivity with respect to certain plant parameter variations.

  14. Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter

    NASA Astrophysics Data System (ADS)

    Guo, Xiang-Gui; Yang, Guang-Hong

    2012-04-01

    This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via a delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.

  15. Radiometer Design Analysis Based Upon Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Racette, Paul E.; Lang, Roger H.

    2004-01-01

    This paper introduces a method for predicting the performance of a radiometer design based on calculating the measurement uncertainty. The variety in radiometer designs and the demand for improved radiometric measurements justify the need for a more general and comprehensive method to assess system performance. Radiometric resolution, or sensitivity, is a figure of merit that has been commonly used to characterize the performance of a radiometer. However, when evaluating the performance of a calibration design for a radiometer, the use of radiometric resolution has limited application. These limitations are overcome by considering instead the measurement uncertainty. A method for calculating measurement uncertainty for a generic radiometer design, including its calibration algorithm, is presented. The result is a generalized technique by which system calibration architectures and design parameters can be studied to optimize instrument performance for given requirements and constraints. Example applications demonstrate the utility of using measurement uncertainty as a figure of merit.
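
The paper's general method is not reproduced here, but the figure of merit it generalizes is easy to state: an ideal total-power radiometer resolves ΔT = Tsys/√(Bτ). A rough sketch, with illustrative numbers, of how calibration-reference uncertainties might be folded in by simple quadrature (an assumption of uncorrelated errors and unity sensitivity coefficients; the paper's approach instead tracks the full calibration algorithm):

```python
from math import sqrt

def radiometric_resolution(t_sys, bandwidth_hz, tau_s):
    """Ideal total-power radiometer sensitivity: dT = Tsys / sqrt(B * tau)."""
    return t_sys / sqrt(bandwidth_hz * tau_s)

def calibrated_uncertainty(t_sys, bandwidth_hz, tau_s, u_hot, u_cold):
    """Crude sketch: combine the radiometric resolution with hot/cold
    calibration-reference uncertainties (in kelvin) in quadrature."""
    d_t = radiometric_resolution(t_sys, bandwidth_hz, tau_s)
    return sqrt(d_t**2 + u_hot**2 + u_cold**2)

# Tsys = 500 K, B = 100 MHz, tau = 10 ms -> radiometric resolution of 0.5 K
print(radiometric_resolution(500.0, 100e6, 0.01))
print(calibrated_uncertainty(500.0, 100e6, 0.01, u_hot=0.3, u_cold=0.2))
```

Even this toy version shows the paper's point: once calibration errors dominate, improving radiometric resolution alone no longer improves the measurement.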

  16. Robust Sensitivity Analysis for Multi-Attribute Deterministic Hierarchical Value Models

    DTIC Science & Technology

    2002-03-01

    Method (Satisfying Method) Disjunctive Method Standard Level Elimination by Aspects Lexicographic Semiorder Lexicographic Method Ordinal Weighted Sum...framework for sensitivity analysis of hierarchical additive value models and standardizes the sensitivity analysis notation and terminology. Finally

  17. Least Squares Shadowing Sensitivity Analysis of Chaotic and Turbulent Fluid Flows

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Wang, Qiqi; Gomez, Steven

    2013-11-01

    Computational methods for sensitivity analysis are invaluable tools for fluid dynamics research and engineering design. These methods are used in many applications, including aerodynamic shape optimization and adaptive grid refinement. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in chaotic fluid flow fields, such as those obtained using high-fidelity turbulence simulations. This breakdown is due to the ``Butterfly Effect'': the high sensitivity of chaotic dynamical systems to the initial condition. A new sensitivity analysis method developed by the authors, Least Squares Shadowing (LSS), can compute useful and accurate gradients for quantities of interest in chaotic and turbulent fluid flows. LSS computes gradients using the ``shadow trajectory,'' a phase space trajectory (or solution) for which perturbations to the flow field do not grow exponentially in time. This talk will outline Least Squares Shadowing and demonstrate it on several chaotic and turbulent fluid flows, including homogeneous isotropic turbulence, Rayleigh-Bénard convection and turbulent channel flow. We would like to acknowledge AFOSR Award F11B-T06-0007 under Dr. Fariba Fahroo, NASA Award NNH11ZEA001N under Dr. Harold Atkins, as well as financial support from ConocoPhillips, the NDSEG fellowship and the ANSYS Fellowship.

  18. Aeroelastic optimization of a helicopter rotor using an efficient sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1990-01-01

    To reduce oscillatory hub loads in forward flight, a structural optimization analysis of a hingeless helicopter rotor has been developed and applied. The aeroelastic analysis of the rotor is based on a finite element method in space and time, and is linked with automated optimization algorithms. For the optimization analysis two types of structural representation are used: a generic stiffness distribution and a single-cell thin-walled beam. For the first type, the design variables are nonstructural mass and its placement, chordwise center of gravity offset from the elastic axis, and stiffness. For the second type, width, height, and thickness of the spar are used as design variables. The behavior constraints include frequency placement, autorotational inertia, and aeroelastic stability of the blade. The required sensitivity derivatives are obtained using a direct analytical approach. At the optimum, the oscillatory hub load shows a 25-77 percent reduction for the generic blade and a 30-50 percent reduction for the box beam.

  19. SAFE(R): A Matlab/Octave Toolbox (and R Package) for Global Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Pianosi, Francesca; Sarrazin, Fanny; Gollini, Isabella; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis (GSA) is increasingly used in the development and assessment of hydrological models, as well as for dominant control analysis and for scenario discovery to support water resource management under deep uncertainty. Here we present a toolbox for the application of GSA, called SAFE (Sensitivity Analysis For Everybody), that implements several established GSA methods, including the method of Morris, Regional Sensitivity Analysis, variance-based sensitivity analysis (Sobol'), and FAST. It also includes new approaches and visualization tools to complement these established methods. The toolbox is released in two versions, one running under Matlab/Octave (called SAFE) and one running in R (called SAFER). Thanks to its modular structure, SAFE(R) can be easily integrated with other toolboxes and packages, and with models running in a different computing environment. Another interesting feature of SAFE(R) is that all the implemented methods include specific functions for assessing the robustness and convergence of the sensitivity estimates. Furthermore, SAFE(R) includes numerous visualisation tools for the effective investigation and communication of GSA results. The toolbox is designed to make GSA accessible to non-specialist users, and to provide fully commented code for more experienced users to complement their own tools. The documentation includes a set of workflow scripts with practical guidelines on how to apply GSA and how to use the toolbox. SAFE(R) is open source and freely available from the following website: http://bristol.ac.uk/cabot/resources/safe-toolbox/ Ultimately, SAFE(R) aims at improving the diffusion and quality of GSA practice in the hydrological modelling community.
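
As an illustration of the first method listed, the core of Morris screening fits in a few lines: perturb one input at a time along random trajectories and average the absolute "elementary effects". A minimal sketch on the unit hypercube (plain Python, not SAFE's implementation):

```python
import random

def morris_mu_star(model, n_params, n_traj=50, delta=0.1, seed=1):
    """Minimal Morris screening: one-at-a-time trajectories on the unit
    hypercube; returns mu*, the mean absolute elementary effect per
    parameter (large mu* = influential input)."""
    rng = random.Random(seed)
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        x = [rng.uniform(0.0, 1.0 - delta) for _ in range(n_params)]
        y = model(x)
        for i in rng.sample(range(n_params), n_params):  # random step order
            x_step = list(x)
            x_step[i] += delta
            y_step = model(x_step)
            effects[i].append(abs(y_step - y) / delta)
            x, y = x_step, y_step
    return [sum(e) / len(e) for e in effects]

# Toy model: x0 is 10x as influential as x1, and x2 is inert.
print(morris_mu_star(lambda x: 10.0 * x[0] + x[1], n_params=3))
```

Each trajectory reuses the previous point, so ranking all inputs costs only n_traj * (n_params + 1) model runs, which is why Morris is favoured for expensive hydrological models.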

  20. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long-distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. 
An indicator based on emission peak time matching was elaborated in order to complement

  1. GPU-based Integration with Application in Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Atanassov, Emanouil; Ivanovska, Sofiya; Karaivanova, Aneta; Slavov, Dimitar

    2010-05-01

    The presented work is an important part of the grid application MCSAES (Monte Carlo Sensitivity Analysis for Environmental Studies), whose aim is to develop an efficient Grid implementation of a Monte Carlo based approach for sensitivity studies in the domains of environmental modelling and environmental security. The goal is to study the damaging effects that can be caused by high pollution levels (especially effects on human health), when the main modeling tool is the Danish Eulerian Model (DEM). Generally speaking, sensitivity analysis (SA) is the study of how the variation in the output of a mathematical model can be apportioned, qualitatively or quantitatively, to different sources of variation in the input of the model. One of the important classes of methods for sensitivity analysis is the Monte Carlo based one, first proposed by Sobol and then developed by Saltelli and his group. In MCSAES the general Saltelli procedure has been adapted for SA of the Danish Eulerian Model. In our case we consider as factors the constants determining the speeds of the chemical reactions in the DEM, and as output a certain aggregated measure of the pollution. Sensitivity simulations lead to huge computational tasks (systems with up to 4 × 10^9 equations at every time step, and the number of time steps can be more than a million), which motivates the grid implementation. The MCSAES grid implementation scheme includes two main tasks: (i) grid implementation of the DEM, and (ii) grid implementation of the Monte Carlo integration. In this work we present our new developments in the integration part of the application. We have developed an algorithm for GPU-based generation of scrambled quasirandom sequences which can be combined with the CPU-based computations related to the SA. Owen first proposed scrambling the Sobol sequence through permutation in a manner that improves the convergence rate. Scrambling is necessary not only for error analysis but also for parallel implementations. 
Good scrambling is
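
The Saltelli-style estimation at the core of such sensitivity studies can be sketched without the quasirandom scrambling and grid machinery described above: a pick-freeze Monte Carlo estimator of first-order Sobol' indices, in plain Python with pseudorandom (not Sobol) sampling:

```python
import random

def sobol_first_order(model, n_params, n=20000, seed=2):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices:
    S_i = E[y_B * (y_ABi - y_A)] / Var(y), where ABi equals sample
    matrix A with column i taken from matrix B (Saltelli-style)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    B = [[rng.random() for _ in range(n_params)] for _ in range(n)]
    yA = [model(row) for row in A]
    yB = [model(row) for row in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(n_params):
        yABi = [model(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        s_i = sum(yb * (yab - ya) for yb, yab, ya in zip(yB, yABi, yA)) / n / var
        indices.append(s_i)
    return indices

# Additive toy model y = x0 + 2*x1 on U[0,1]^2: exact indices are (0.2, 0.8).
print([round(s, 2) for s in sobol_first_order(lambda x: x[0] + 2.0 * x[1], 2)])
```

Replacing the plain pseudorandom draws with a scrambled Sobol sequence, as the record describes, improves the convergence rate of exactly these integrals.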

  2. Design of a gaze-sensitive virtual social interactive system for children with autism.

    PubMed

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2011-08-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions, and thus monitoring eye-gaze can be valuable for designing intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child's dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented to examine the acceptability and usefulness of VIGART. The results, in terms of improvements in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART, indicate the potential of this novel technology.

  3. Design of a Gaze-Sensitive Virtual Social Interactive System for Children With Autism

    PubMed Central

    Lahiri, Uttama; Warren, Zachary; Sarkar, Nilanjan

    2013-01-01

    Impairments in social communication skills are thought to be core deficits in children with autism spectrum disorder (ASD). In recent years, several assistive technologies, particularly Virtual Reality (VR), have been investigated to promote social interactions in this population. It is well known that children with ASD demonstrate atypical viewing patterns during social interactions, and thus monitoring eye-gaze can be valuable for designing intervention strategies. While several studies have used eye-tracking technology to monitor eye-gaze for offline analysis, there exists no real-time system that can monitor eye-gaze dynamically and provide individualized feedback. Given the promise of VR-based social interaction and the usefulness of monitoring eye-gaze in real-time, a novel VR-based dynamic eye-tracking system is developed in this work. This system, called Virtual Interactive system with Gaze-sensitive Adaptive Response Technology (VIGART), is capable of delivering individualized feedback based on a child’s dynamic gaze patterns during VR-based interaction. Results from a usability study with six adolescents with ASD are presented to examine the acceptability and usefulness of VIGART. The results, in terms of improvements in behavioral viewing and changes in relevant eye physiological indexes of participants while interacting with VIGART, indicate the potential of this novel technology. PMID:21609889

  4. Integrated reflector antenna design and analysis

    NASA Technical Reports Server (NTRS)

    Zimmerman, M. L.; Lee, S. W.; Ni, S.; Christensen, M.; Wang, Y. M.

    1993-01-01

    Reflector antenna design is a mature field and most of its aspects have been studied. However, most previous work is narrow in scope, analyzing only a particular problem under certain conditions. Methods of analysis of this type are not useful for working on real-life problems since they cannot handle the many and various types of perturbations of a basic antenna design. The idea of an integrated design and analysis is therefore proposed: by broadening the scope of the analysis, it becomes possible to deal with the intricacies attendant on modern reflector antenna design problems. The concept of integrated reflector antenna design is put forward, and a number of electromagnetic problems related to reflector antenna design are investigated. Some of these show how tools for reflector antenna design are created. In particular, a method for estimating spillover loss for open-ended waveguide feeds is examined. The problem of calculating and optimizing beam efficiency (an important figure of merit in radiometry applications) is also solved. Other chapters deal with applications of this general analysis. The wide-angle scanning abilities of reflector antennas are examined, and a design is proposed for the ATDRSS triband reflector antenna. The development of a general phased-array pattern computation program is discussed, and it is shown how the concept of integrated design can be extended to other types of antennas. The conclusions are contained in the final chapter.

  5. Sensitivity of the optimal preliminary design of a transport to operational constraints and performance index

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1980-01-01

    Constrained parameter optimization was used to perform the optimal preliminary design of a medium range transport configuration. The impact of choosing a performance index was studied and the required fare for a 15 percent return-on-investment was proposed as a figure-of-merit. A number of design constants and constraint functions were systematically varied to document the sensitivities of the optimal design to a variety of economic and technological assumptions. Additionally, a comparison is made for each of the parameter variations between the baseline configuration and the optimally redesigned configuration.

  6. Sensitivity Analysis of a Wireless Power Transfer (WPT) System for Electric Vehicle Application

    SciTech Connect

    Chinthavali, Madhu Sudhan; Wang, Zhiqiang

    2016-01-01

    This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key quantities for the sensitivity analysis of a series-parallel (SP) WPT system are first derived using an analytical modeling approach, including the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of the primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing only frequency or voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.
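
The kind of derivation described, the equivalent input impedance and its dependence on the coupling coefficient, can be sketched with complex arithmetic. Component values below are illustrative and coil losses are ignored; the paper's full SP model additionally covers leakage inductance and the DC voltage gain:

```python
from math import pi, sqrt

def sp_input_impedance(f, L1, L2, k, C1, C2, R_load):
    """Reflected-impedance view of a series-parallel (SP) compensated WPT
    link: series C1 on the primary, C2 in parallel with the load on the
    secondary. Lossless coils assumed; values are illustrative."""
    w = 2 * pi * f
    M = k * sqrt(L1 * L2)                   # mutual inductance
    Z_par = 1 / (1j * w * C2 + 1 / R_load)  # C2 in parallel with R_load
    Z_sec = 1j * w * L2 + Z_par             # total secondary-loop impedance
    Z_refl = (w * M) ** 2 / Z_sec           # impedance reflected into primary
    return 1j * w * L1 + 1 / (1j * w * C1) + Z_refl

# 100 uH coils each tuned near 318 kHz; stronger coupling reflects more
# resistance into the primary, shifting both the gain and power curves.
L = 100e-6
C = 2.5e-9
for k in (0.1, 0.3):
    z = sp_input_impedance(318e3, L, L, k, C, C, R_load=10.0)
    print(k, round(z.real, 3), round(z.imag, 3))
```

Sweeping f or k in this expression reproduces the qualitative behavior the abstract describes: whether frequency alone can reach the desired power depends on how much resistance the coupling reflects into the primary.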

  7. Decomposition method of complex optimization model based on global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Qiu, Qingying; Li, Bing; Feng, Peien; Gao, Yu

    2014-07-01

    Current research on decomposition methods for complex optimization models is mostly based on the principle of disciplines, problems or components. However, numerous coupling variables appear among the decomposed sub-models, which makes the decomposed optimization inefficient and its results poor. Although some collaborative optimization methods have been proposed to handle the coupling variables, there has been no upfront strategy for reducing the coupling degree among the decomposed sub-models at the moment a complex optimization model is first decomposed. Therefore, this paper proposes a decomposition method based on global sensitivity information. In this method, the complex optimization model is decomposed on the principle of minimizing the sum of sensitivities between design functions and design variables belonging to different sub-models. Design functions and design variables that are sensitive to each other are assigned to the same sub-model as far as possible, to reduce the impact on other sub-models caused by changes of coupling variables in one sub-model. Two different collaborative optimization models of a gear reducer were built separately in the multidisciplinary design optimization software iSIGHT; the optimized results show that the decomposition method proposed in this paper requires fewer analyses and increases computational efficiency by 29.6%. The new decomposition method was also successfully applied to the complex optimization problem of hydraulic excavator working devices, which shows that the proposed approach can reduce the mutual coupling degree between sub-models. By making the linkages among sub-models after decomposition as weak as possible, this research provides a reference for decomposing complex optimization models and has practical engineering significance.

  8. Global sensitivity analysis of the radiative transfer model

    NASA Astrophysics Data System (ADS)

    Neelam, Maheshwari; Mohanty, Binayak P.

    2015-04-01

    With the recently launched Soil Moisture Active Passive (SMAP) mission, it is very important to have a complete understanding of the radiative transfer model for better soil moisture retrievals and to direct future research and field campaigns toward areas of necessity. Because natural systems show great variability and complexity with respect to soil, land cover, topography, and precipitation, there exist large uncertainties and heterogeneities in model input factors. In this paper, we explore the possibility of using a global sensitivity analysis (GSA) technique to study the influence of heterogeneity and uncertainty in model inputs on the zero-order radiative transfer (ZRT) model and to quantify interactions between parameters. The GSA technique is based on the decomposition of variance and can handle nonlinear and nonmonotonic functions. We direct our analyses toward growing agricultural fields of corn and soybean in two different regions: Iowa, USA (SMEX02) and Winnipeg, Canada (SMAPVEX12). We noticed that there exists a spatio-temporal variation in parameter interactions under different soil moisture and vegetation conditions. The radiative transfer model (RTM) behaves more nonlinearly in SMEX02 and more linearly in SMAPVEX12, with average parameter interactions of 14% in SMEX02 and 5% in SMAPVEX12. Parameter interactions also increased with vegetation water content (VWC) and roughness conditions. Interestingly, soil moisture shows an exponentially decreasing sensitivity function, whereas parameters such as root mean square height (RMS height) and vegetation water content show increasing sensitivity with a 0.05 v/v increase in the soil moisture range. Overall, considering the SMAPVEX12 fields to be a water-rich environment (due to higher observed soil moisture) and the SMEX02 fields to be an energy-rich environment (due to lower soil moisture and wide ranges of TSURF), our results indicate that first-order effects as well as interactions between the parameters change with water- and energy-rich environments.

  9. Sensitivity analysis of channel-bend hydraulics influenced by vegetation

    NASA Astrophysics Data System (ADS)

    Bywater-Reyes, S.; Manners, R.; McDonald, R.; Wilcox, A. C.

    2015-12-01

    Alternating bars influence hydraulics by changing the force balance of channels as part of a morphodynamic feedback loop that dictates channel geometry. Pioneer woody riparian trees recruit on river bars and may steer flow, alter cross-stream and downstream force balances, and ultimately change channel morphology. Quantifying the influence of vegetation on stream hydraulics is difficult, and researchers increasingly rely on two-dimensional hydraulic models. In many cases, channel characteristics (channel drag and lateral eddy viscosity) and vegetation characteristics (density, frontal area, and drag coefficient) are uncertain. This study uses a beta version of FaSTMECH that models vegetation explicitly as a drag force to test the sensitivity of channel-bend hydraulics to riparian vegetation. We use a simplified, scale model of a meandering river with bars and conduct a global sensitivity analysis that ranks the influence of specified channel characteristics (channel drag and lateral eddy viscosity) against vegetation characteristics (density, frontal area, and drag coefficient) on cross-stream hydraulics. The primary influence on cross-stream velocity and shear stress is channel drag (i.e., bed roughness), followed by the near-equal influence of all vegetation parameters and lateral eddy viscosity. To test the implication of the sensitivity indices on bend hydraulics, we hold calibrated channel characteristics constant for a wandering gravel-bed river with bars (Bitterroot River, MT), and vary vegetation parameters on a bar. For a dense vegetation scenario, we find flow to be steered away from the bar, and velocity and shear stress to be reduced within the thalweg. This provides insight into how the morphodynamic evolution of vegetated bars differs from unvegetated bars.
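
The drag-force representation of vegetation that such models use is commonly parameterized as F = ½ρC_d·a·U² per unit volume, where a is the frontal area per unit volume (stem density times frontal area per stem). A generic sketch of this parameterization (not necessarily the exact formulation in the FaSTMECH beta described above):

```python
def vegetation_drag_per_volume(rho, c_d, stem_density, frontal_area, velocity):
    """Drag force per unit volume exerted by vegetation on the flow:
    F = 0.5 * rho * c_d * a * U**2, with a (m^2/m^3) taken here as
    stem density times frontal area per stem (an illustrative choice)."""
    a = stem_density * frontal_area
    return 0.5 * rho * c_d * a * velocity**2  # N per m^3

# Drag scales with the square of velocity and linearly with stem density,
# which is why the vegetation parameters trade off against each other
# in the sensitivity analysis: only their product enters the force.
f_base = vegetation_drag_per_volume(1000.0, 1.2, 5.0, 0.02, 0.5)
print(f_base)
print(vegetation_drag_per_volume(1000.0, 1.2, 5.0, 0.02, 1.0))  # ~4x f_base
```

Because density, frontal area, and drag coefficient enter only as a product, their near-equal sensitivity indices in the study are exactly what this functional form predicts.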

  10. Effect of electrode material and design on sensitivity and selectivity for high temperature impedancemetric NOx sensors

    SciTech Connect

    Woo, L Y; Glass, R S; Novak, R F; Visser, J H

    2009-09-23

    Solid-state electrochemical sensors using two different sensing electrode compositions, gold and strontium-doped lanthanum manganite (LSM), were evaluated for gas phase sensing of NO{sub x} (NO and NO{sub 2}) using an impedance-metric technique. An asymmetric cell design utilizing porous YSZ electrolyte exposed both electrodes to the test gas (i.e., no reference gas). Sensitivity to less than 5 ppm NO and response/recovery times (10-90%) of less than 10 s were demonstrated. Using an LSM sensing electrode, virtually identical sensitivity towards NO and NO{sub 2} was obtained, indicating that the equilibrium gas concentration was measured by the sensing electrode. In contrast, for cells employing a gold sensing electrode the NO{sub x} sensitivity varied depending on the cell design: increasing the amount of porous YSZ electrolyte on the sensor surface produced higher NO{sub 2} sensitivity compared to NO. In order to achieve comparable sensitivity for both NO and NO{sub 2}, the cell with the LSM sensing electrode required operation at a lower temperature (575 C) than the cell with the gold sensing electrode (650 C). The role of surface reactions is proposed to explain the differences in NO and NO{sub 2} selectivity using the two different electrode materials.

  11. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
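
The NRMSE metric used here is straightforward to compute; normalization conventions vary across studies (mean of observations, observed range, or plant capacity), so the mean-of-observations choice below is an assumption:

```python
from math import sqrt

def nrmse(forecast, observed):
    """Root mean squared error normalized by the mean of the observations
    (one of several common normalization conventions)."""
    n = len(observed)
    rmse = sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    return rmse / (sum(observed) / n)

obs = [100.0, 200.0, 300.0]
print(nrmse([110.0, 190.0, 310.0], obs))  # RMSE 10 over mean 200 -> 0.05
print(nrmse(obs, obs))                    # perfect forecast -> 0.0
```

Comparisons like the study's 28.10% reduction are only meaningful when the same normalization is applied to every model.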

  12. Dynamic global sensitivity analysis in bioreactor networks for bioethanol production.

    PubMed

    Ochoa, M P; Estrada, V; Di Maggio, J; Hoch, P M

    2016-01-01

    Dynamic global sensitivity analysis (GSA) was performed for three dynamic bioreactor models of increasing complexity: a fermenter for bioethanol production; a bioreactor network comprising two types of bioreactors (aerobic for biomass production and anaerobic for bioethanol production); and a co-fermenter bioreactor. The aim was to identify the parameters that contribute most to uncertainty in the model outputs. Sobol's method was used to calculate time profiles for the sensitivity indices. Numerical results showed the time-variant influence of uncertain parameters on model variables, and the most influential model parameters were determined. For the bioethanol fermenter model, μmax (maximum growth rate) and Ks (half-saturation constant) are the parameters with the largest contribution to uncertainty in the model variables; in the bioreactor network, the most influential parameter is μmax,1 (maximum growth rate in bioreactor 1); and λ (glucose-to-total-sugars concentration ratio in the feed) is the most influential parameter over all model variables in the co-fermentation bioreactor.
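
    Sobol's method at a single time point can be sketched with a pick-freeze (Saltelli) estimator of the first-order indices. The Monod-type rate below is a generic stand-in for one bioreactor output, and the parameter ranges are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def growth_rate(mu_max, Ks, S=2.0):
    # Monod kinetics: a generic stand-in for one model output
    return mu_max * S / (Ks + S)

N = 100_000
A_mu, A_Ks = rng.uniform(0.3, 0.7, N), rng.uniform(0.1, 1.0, N)
B_mu, B_Ks = rng.uniform(0.3, 0.7, N), rng.uniform(0.1, 1.0, N)

yA, yB = growth_rate(A_mu, A_Ks), growth_rate(B_mu, B_Ks)
var_y = np.var(np.concatenate([yA, yB]))

# Pick-freeze estimator: swap in B's column for one factor, keep the rest from A
S_mu = np.mean(yB * (growth_rate(B_mu, A_Ks) - yA)) / var_y
S_Ks = np.mean(yB * (growth_rate(A_mu, B_Ks) - yA)) / var_y
print(round(S_mu, 2), round(S_Ks, 2))
```

With these ranges the μmax index dominates (analytically about 0.83 versus about 0.17 for Ks), mirroring the paper's finding that the maximum growth rate is the most influential parameter.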

  13. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis

    SciTech Connect

    Cheung, WanYin; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Lu, Siyuan; Hamann, Hendrik F.; Sun, Qian; Lehman, Brad

    2015-10-02

    Uncertainties associated with solar forecasts present challenges to maintaining grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.

  14. Flat heat pipe design, construction, and analysis

    SciTech Connect

    Voegler, G.; Boughey, B.; Cerza, M.; Lindler, K.W.

    1999-08-02

    This paper details the design, construction and partial analysis of a low temperature flat heat pipe in order to determine the feasibility of implementing flat heat pipes into thermophotovoltaic (TPV) energy conversion systems.

  15. Design And Analysis Of Linear Control Systems

    NASA Technical Reports Server (NTRS)

    Jamison, John W.

    1991-01-01

    Package of five computer programs developed to assist in design and analysis of linear control systems by use of root-locus and frequency-response methods. Package written in FORTRAN (BODE, TPEAK) and BASIC (LOCUS, KTUNE, and POLYROOT).

  16. NDARC NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne R.

    2009-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool intended to support both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility; a hierarchy of models; and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. Specific rotorcraft configurations considered are single main-rotor and

  17. NDARC - NASA Design and Analysis of Rotorcraft

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2015-01-01

    The NASA Design and Analysis of Rotorcraft (NDARC) software is an aircraft system analysis tool that supports both conceptual design efforts and technology impact assessments. The principal tasks are to design (or size) a rotorcraft to meet specified requirements, including vertical takeoff and landing (VTOL) operation, and then analyze the performance of the aircraft for a set of conditions. For broad and lasting utility, it is important that the code have the capability to model general rotorcraft configurations, and estimate the performance and weights of advanced rotor concepts. The architecture of the NDARC code accommodates configuration flexibility, a hierarchy of models, and ultimately multidisciplinary design, analysis, and optimization. Initially the software is implemented with low-fidelity models, typically appropriate for the conceptual design environment. An NDARC job consists of one or more cases, each case optionally performing design and analysis tasks. The design task involves sizing the rotorcraft to satisfy specified design conditions and missions. The analysis tasks can include off-design mission performance calculation, flight performance calculation for point operating conditions, and generation of subsystem or component performance maps. For analysis tasks, the aircraft description can come from the sizing task, from a previous case or a previous NDARC job, or be independently generated (typically the description of an existing aircraft). The aircraft consists of a set of components, including fuselage, rotors, wings, tails, and propulsion. For each component, attributes such as performance, drag, and weight can be calculated; and the aircraft attributes are obtained from the sum of the component attributes. Description and analysis of conventional rotorcraft configurations is facilitated, while retaining the capability to model novel and advanced concepts. Specific rotorcraft configurations considered are single-main-rotor and tail

  18. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
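
    The two techniques compared in this thesis can be illustrated on a toy problem. The sketch below assumes a hypothetical 2-DOF spring chain rather than any model from the work itself; it contrasts an overall finite difference (repeating the full analysis at a perturbed design) with a semianalytical derivative that finite-differences only the coefficient matrix:

```python
import numpy as np

# Hypothetical 2-DOF spring chain: K(k) u = f, with stiffness design variable k
def K(k):
    return np.array([[2.0 * k, -k], [-k, k]])

f = np.array([0.0, 1.0])

def u(k):
    return np.linalg.solve(K(k), f)

k0, dk = 10.0, 1e-6

# Overall finite difference: repeat the entire analysis at the perturbed design
du_fd = (u(k0 + dk) - u(k0)) / dk

# Semianalytical: finite-difference only the coefficient (stiffness) matrix,
# then solve the differentiated equilibrium equation K du = -(dK/dk) u
dK = (K(k0 + dk) - K(k0)) / dk
du_sa = np.linalg.solve(K(k0), -dK @ u(k0))

print(np.round(du_fd, 4), np.round(du_sa, 4))
```

For this stiffness matrix, which is linear in k, both approaches recover the exact displacement sensitivity du/dk = (-0.01, -0.02) at k = 10.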

  19. Airbreathing hypersonic vehicle design and analysis methods

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.

    1996-01-01

    The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.

  20. Space Shuttle Orbiter entry guidance and control system sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Stone, H. W.; Powell, R. W.

    1976-01-01

    An approach has been developed to determine the guidance and control system sensitivity to off-nominal aerodynamics for the Space Shuttle Orbiter during entry. This approach, which uses a nonlinear six-degree-of-freedom interactive, digital simulation, has been applied to both the longitudinal and lateral-directional axes for a portion of the orbiter entry. Boundary values for each of the aerodynamic parameters have been identified, the key parameters have been determined, and system modifications that will increase system tolerance to off-nominal aerodynamics have been recommended. The simulations were judged by specified criteria and the performance was evaluated by use of key dependent variables. The analysis is now being expanded to include the latest shuttle guidance and control systems throughout the entry speed range.

  1. Neutron activation analysis; A sensitive test for trace elements

    SciTech Connect

    Hossain, T.Z. (Ward Lab.)

    1992-01-01

    This paper discusses neutron activation analysis (NAA), an extremely sensitive technique for determining the elemental constituents of an unknown specimen. Currently, there are some twenty-five moderate-power TRIGA reactors scattered across the United States (fourteen of them at universities), and one of their principal uses is for NAA. NAA is procedurally simple. A small amount of the material to be tested (typically between one and one hundred milligrams) is irradiated for a period that varies from a few minutes to several hours in a neutron flux of around 10{sup 12} neutrons per square centimeter per second. A tiny fraction of the nuclei present (about 10{sup {minus}8}) is transmuted by nuclear reactions into radioactive forms. Subsequently, the nuclei decay, and the energy and intensity of the gamma rays that they emit can be measured in a gamma-ray spectrometer.

  2. Sensitivity analysis and optimization of thin-film thermoelectric coolers

    NASA Astrophysics Data System (ADS)

    Harsha Choday, Sri; Roy, Kaushik

    2013-06-01

    The cooling performance of a thermoelectric (TE) material depends on the figure of merit ZT = S²σT/κ, where S is the Seebeck coefficient and σ and κ are the electrical and thermal conductivities, respectively. The standard definition of ZT assigns equal importance to the power factor (S²σ) and the thermal conductivity. In this paper, we analyze the relative importance of each thermoelectric parameter for the cooling performance using the mathematical framework of sensitivity analysis. In addition, the impact of the electrical/thermal contact parasitics on bulk and superlattice Bi2Te3 is also investigated. In the presence of significant contact parasitics, we find that the carrier concentration that results in the best cooling is lower than the one that yields the highest ZT. We also establish the level of contact parasitics at which their impact on TE cooling becomes negligible.
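
    The figure of merit defined above is a one-line computation. The parameter values below are typical textbook orders of magnitude for a Bi2Te3-class material near room temperature, not values from the paper:

```python
def zt(S, sigma, kappa, T):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    S in V/K, sigma in S/m, kappa in W/(m K), T in K.
    """
    return S**2 * sigma * T / kappa

# Illustrative values only: S = 200 uV/K, sigma = 1e5 S/m, kappa = 1.5 W/(m K)
bulk = zt(200e-6, 1.0e5, 1.5, 300)
print(round(bulk, 3))   # → 0.8
```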

  3. Sensitivity and uncertainty analysis of the recharge boundary condition

    NASA Astrophysics Data System (ADS)

    Jyrkama, M. I.; Sykes, J. F.

    2006-01-01

    The reliability analysis method is integrated with MODFLOW to study the impact of recharge on the groundwater flow system at a study area in New Jersey. The performance function is formulated in terms of head or flow rate at a pumping well, while the recharge sensitivity vector is computed efficiently by implementing the adjoint method in MODFLOW. The developed methodology not only quantifies the reliability of head at the well in terms of uncertainties in the recharge boundary condition, but it also delineates areas of recharge that have the highest impact on the head and flow rate at the well. The results clearly identify the most important land use areas that should be protected in order to maintain the head and hence production at the pumping well. These areas extend far beyond the steady state well capture zone used for land use planning and management within traditional wellhead protection programs.

  4. Sensitivity analysis for causal inference using inverse probability weighting.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling; Were, Martin C

    2011-09-01

    Evaluation of impact of potential uncontrolled confounding is an important component for causal inference based on observational studies. In this article, we introduce a general framework of sensitivity analysis that is based on inverse probability weighting. We propose a general methodology that allows both non-parametric and parametric analyses, which are driven by two parameters that govern the magnitude of the variation of the multiplicative errors of the propensity score and their correlations with the potential outcomes. We also introduce a specific parametric model that offers a mechanistic view on how the uncontrolled confounding may bias the inference through these parameters. Our method can be readily applied to both binary and continuous outcomes and depends on the covariates only through the propensity score that can be estimated by any parametric or non-parametric method. We illustrate our method with two medical data sets.
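
    As background for the weighting at the core of this framework, the sketch below shows a plain inverse-probability-weighted estimate of an average treatment effect on synthetic data with a known propensity score. The data-generating model and the effect size of 2 are invented for illustration and are not from the article:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Hypothetical data: confounder x drives both treatment assignment and outcome
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-x))               # true propensity score
t = rng.binomial(1, p)                      # treatment indicator
y = 2.0 * t + x + rng.normal(size=n)        # true treatment effect = 2

# Inverse probability weighting with the (here, known) propensity score
ate = np.mean(t * y / p) - np.mean((1 - t) * y / (1 - p))
print(round(ate, 2))
```

In practice the propensity score would be estimated (parametrically or non-parametrically, as the article notes) rather than known, and the sensitivity analysis perturbs it multiplicatively.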

  5. Design and Analysis Tools for Supersonic Inlets

    NASA Technical Reports Server (NTRS)

    Slater, John W.; Folk, Thomas C.

    2009-01-01

    Computational tools are being developed for the design and analysis of supersonic inlets. The objective is to update existing tools and provide design and low-order aerodynamic analysis capability for advanced inlet concepts. The Inlet Tools effort includes aspects of creating an electronic database of inlet design information, a document describing inlet design and analysis methods, a geometry model for describing the shape of inlets, and computer tools that implement the geometry model and methods. The geometry model has a set of basic inlet shapes that include pitot, two-dimensional, axisymmetric, and stream-traced inlet shapes. The inlet model divides the inlet flow field into parts that facilitate the design and analysis methods. The inlet geometry model constructs the inlet surfaces through the generation and transformation of planar entities based on key inlet design factors. Future efforts will focus on developing the inlet geometry model, the inlet design and analysis methods, and a Fortran 95 code to implement the model and methods. Other computational platforms, such as Java, will also be explored.

  6. Electric motor designs for attenuating torque disturbance in sensitive space mechanisms

    NASA Astrophysics Data System (ADS)

    Marks, David B.; Fink, Richard A.

    2003-09-01

    When a motion control system introduces unwanted torque jitter and motion anomalies into sensitive space flight optical or positioning mechanisms, the pointing accuracy, positioning capability, or scanning resolution of the mission suffers. Special motion control technology must be employed to provide attenuation of the harmful torque disturbances. Brushless DC (BLDC) Motors with low torque disturbance characteristics have been successfully used on such notable missions as the Hubble Space Telescope when conventional approaches to motor design would not work. Motor designs for low disturbance mechanisms can include two and three phase sinusoidal BLDC motors, BLDC motors without iron teeth, and sometimes skewed or non-integral slot designs for motors commutated with Hall effect devices. The principal components of motor torque disturbance, successful BLDC motor designs for attenuating disturbances, and design trade-offs for optimum performance are examined.

  7. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB

  8. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    SciTech Connect

    Arampatzis, Georgios; Katsoulakis, Markos A.

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined on a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by minimizing the corresponding variance functional. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation based on the philosophy of the Bortz-Kalos-Lebowitz algorithm, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary

  9. Strain response of stretchable micro-electrodes: Controlling sensitivity with serpentine designs and encapsulation

    SciTech Connect

    Gutruf, Philipp; Walia, Sumeet; Nur Ali, Md; Sriram, Sharath; Bhaskaran, Madhu E-mail: madhu.bhaskaran@gmail.com

    2014-01-13

    The functionality of flexible electronics relies on stable performance of thin film micro-electrodes. This letter investigates the behavior of gold thin films on polyimide, a prevalent combination in flexible devices. The dynamic behavior of gold micro-electrodes has been studied by subjecting them to stress while monitoring their resistance in situ. The shape of the electrodes was systematically varied to examine resistive strain sensitivity, while an additional encapsulation was applied to characterize multilayer behavior. The realized designs show remarkable tolerance to repetitive strain, demonstrating that curvature and encapsulation are excellent approaches for minimizing resistive strain sensitivity to enable durable flexible electronics.

  10. Design and Comparative Evaluation of In-vitro Drug Release, Pharmacokinetics and Gamma Scintigraphic Analysis of Controlled Release Tablets Using Novel pH Sensitive Starch and Modified Starch- acrylate Graft Copolymer Matrices

    PubMed Central

    Kumar, Pankaj; Ganure, Ashok Laxmanrao; Subudhi, Bharat Bhushan; Shukla, Shubhanjali

    2015-01-01

    The present investigation deals with the development of controlled release tablets of salbutamol sulphate using graft copolymers (St-g-PMMA and Ast-g-PMMA) of starch and acetylated starch. Drug-excipient compatibility was analyzed spectroscopically via FT-IR, which confirmed no interaction between the drug and other excipients. Formulations were evaluated for physical characteristics such as hardness, friability, weight variation, drug release, and drug content, satisfying all the pharmacopoeial requirements for the tablet dosage form. The release rate of the model drug from the formulated matrix tablets was studied spectrophotometrically at two different pH values, namely 1.2 and 6.8. Drug release from the tablets of the graft copolymer matrices is profoundly pH-dependent and showed a reduced release rate under acidic conditions compared to alkaline conditions. Study of the release mechanism with Korsmeyer's model, with n values between 0.61 and 0.67, showed that release was governed by both diffusion and erosion. In comparison to the starch and acetylated starch matrix formulations, the pharmacokinetic parameters of the graft copolymer matrix formulations showed a significant decrease in Cmax with an increase in tmax, indicating that the effect of the dosage form would last for a longer duration. The gastrointestinal transit behavior of the formulation was determined by gamma scintigraphy in healthy rabbits, using 99mTc as a marker. The amount of radioactive tracer released from the labelled tablets was minimal while the tablets were in the stomach, whereas it increased as the tablets reached the intestine. Thus, in-vitro and in-vivo drug release studies of the starch-acrylate graft copolymers proved their controlled release behavior, with preferential delivery into an alkaline pH environment. PMID:26330856
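
    The release exponent n behind the values reported above comes from fitting the Korsmeyer-Peppas power law Mt/M∞ = k·tⁿ, usually as a log-log regression. The sampling times and release fractions below are synthetic, generated with a known exponent so the fit can be checked:

```python
import numpy as np

# Korsmeyer-Peppas: Mt/Minf = k * t^n  →  log(Mt/Minf) = log k + n log t
t = np.array([1.0, 2.0, 4.0, 6.0, 8.0])   # hypothetical sampling times (h)
frac = 0.12 * t ** 0.65                    # synthetic release fractions, true n = 0.65

n, logk = np.polyfit(np.log(t), np.log(frac), 1)
print(round(n, 2))   # → 0.65
```

For cylindrical matrices, exponents between roughly 0.45 and 0.89 are conventionally read as anomalous transport, i.e., a combination of diffusion and erosion, consistent with the abstract's interpretation of n = 0.61-0.67.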

  11. Sensitivity analysis of ecosystem service valuation in a Mediterranean watershed.

    PubMed

    Sánchez-Canales, María; López Benito, Alfredo; Passuello, Ana; Terrado, Marta; Ziv, Guy; Acuña, Vicenç; Schuhmacher, Marta; Elorza, F Javier

    2012-12-01

    The services of natural ecosystems are clearly very important to our societies. In recent years, efforts to conserve and value ecosystem services have been promoted. By way of illustration, the Natural Capital Project integrates ecosystem services into everyday decision making around the world. This project has developed InVEST (a system for Integrated Valuation of Ecosystem Services and Tradeoffs). The InVEST model is a spatially integrated modelling tool that allows us to predict changes in ecosystem services, biodiversity conservation and commodity production levels. Here, the InVEST model is applied to a stakeholder-defined scenario of land-use/land-cover change in a Mediterranean basin (the Llobregat basin, Catalonia, Spain). Of all the InVEST modules and sub-modules, only the behaviour of the water-provisioning one is investigated in this article. The main novel aspect of this work is the sensitivity analysis (SA) carried out on the InVEST model in order to determine the variability of the model response when the values of three of its main coefficients change: Z (seasonal precipitation distribution), prec (annual precipitation) and eto (annual evapotranspiration). The SA technique used here is a One-At-a-Time (OAT) screening method known as the Morris method, applied over each of the one hundred and fifty-four sub-watersheds into which the Llobregat River basin is divided. As a result, this method provides three sensitivity indices for each of the sub-watersheds under consideration, which are mapped to study how they are spatially distributed. The analysis shows that, in the case under consideration and between the limits considered for each factor, the effect of the Z coefficient on the model response is negligible, while the other two coefficients need to be accurately determined in order to obtain precise output variables. The results of this study will be applicable to the other watersheds assessed in the Consolider Scarce Project.
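
    The Morris One-At-a-Time screening used here can be sketched generically. The toy response below is an invented stand-in for a sub-watershed's water yield (it is not the InVEST model), and the factor ranges are illustrative; mu* is the mean absolute elementary effect, the usual Morris importance index:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for one sub-watershed's water-yield response
def model(z, prec, eto):
    return prec - 0.8 * eto + 0.001 * z * prec

lows  = np.array([1.0, 400.0, 300.0])    # z, prec (mm), eto (mm): illustrative ranges
highs = np.array([10.0, 900.0, 700.0])
delta = 0.5                               # OAT step in normalized [0, 1] factor space

def elementary_effects(r=50):
    ee = np.zeros((r, 3))
    for k in range(r):
        u = rng.uniform(0, 0.5, 3)        # base point, leaving room for +delta
        base = model(*(lows + u * (highs - lows)))
        for i in range(3):
            v = u.copy()
            v[i] += delta                  # perturb one factor at a time
            ee[k, i] = (model(*(lows + v * (highs - lows))) - base) / delta
    return np.abs(ee).mean(axis=0)        # Morris mu* index per factor

mu_star = elementary_effects()
print(mu_star.argmax())   # index of the most influential factor
```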

  12. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    SciTech Connect

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each comprising a single module using either crystalline silicon or CdTe cells, and located at either Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models and obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct, and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current, and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice among these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy, which translates directly into a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to the uncertainty arising from each model. We found the residuals arising from the POA irradiance and effective irradiance models to be the dominant contributors to the residuals for daily energy, for either technology and location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
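
    The residual-resampling idea described above can be sketched with a toy two-step model chain. The residual magnitudes and the linear PV performance model below are invented for illustration and do not reproduce the report's models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical empirical residuals (model minus measurement) for two model steps
resid_poa  = rng.normal(0.0, 20.0, 500)   # plane-of-array irradiance step (W/m^2)
resid_temp = rng.normal(0.0, 1.5, 500)    # cell-temperature step (deg C)

def dc_power(poa, t_cell):
    # Toy PV performance model: linear in irradiance, small temperature derate
    return 0.2 * poa * (1 - 0.004 * (t_cell - 25.0))

# Monte Carlo: resample the residual distributions and push the perturbed
# inputs through the model chain to get an output distribution
n = 10_000
poa    = 800.0 + rng.choice(resid_poa, n)
t_cell = 45.0 + rng.choice(resid_temp, n)
power  = dc_power(poa, t_cell)
print(round(power.std() / power.mean(), 3))   # relative uncertainty in output
```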

  13. A Multivariate Analysis of Extratropical Cyclone Environmental Sensitivity

    NASA Astrophysics Data System (ADS)

    Tierney, G.; Posselt, D. J.; Booth, J. F.

    2015-12-01

    The implications of a changing climate system include more than a simple temperature increase. A changing climate also modifies the atmospheric conditions responsible for shaping the genesis and evolution of atmospheric circulations. In the mid-latitudes, the effects of climate change on extratropical cyclones (ETCs) can be expressed through changes in bulk temperature, horizontal and vertical temperature gradients (leading to changes in mean state winds), as well as atmospheric moisture content. Understanding how these changes impact ETC evolution and dynamics will help to inform climate mitigation and adaptation strategies, and allow for better-informed weather emergency planning. However, our understanding is complicated by the complex interplay between a variety of environmental influences, and their potentially opposing effects on extratropical cyclone strength. Attempting to untangle competing influences from a theoretical or observational standpoint is complicated by nonlinear responses to environmental perturbations and a lack of data. As such, numerical models can serve as a useful tool for examining this complex issue. We present results from an analysis framework that combines the computational power of idealized modeling with the statistical robustness of multivariate sensitivity analysis. We first establish control variables, such as baroclinicity, bulk temperature, and moisture content, and specify a range of values that simulate possible changes in a future climate. The Weather Research and Forecasting (WRF) model serves as the link between changes in climate state and ETC-relevant outcomes. A diverse set of output metrics (e.g., sea level pressure, average precipitation rates, eddy kinetic energy, and latent heat release) facilitates examination of storm dynamics, thermodynamic properties, and hydrologic cycles. Exploration of the multivariate sensitivity of ETCs to changes in control parameter space is performed via an ensemble of WRF runs coupled with

  14. Design of retrodiffraction gratings for polarization-insensitive and polarization-sensitive characteristics by using the Taguchi method.

    PubMed

    Lee, ChaBum; Hane, Kazuhiro; Kim, WanSoo; Lee, Sun-Kyu

    2008-06-20

We present the design of retrodiffraction gratings that utilize total internal reflection (TIR) in a lamellar configuration to achieve high performance for both TE and TM polarized light, as well as polarization-sensitive performance for gratings acting as polarizer filters; the design is based on rigorous coupled-wave analysis (RCWA) and the Taguchi method. The components can thus be fabricated from a single dielectric material and do not have to be coated with a metallic or dielectric film layer to enhance the reflectance. The effects of the structural and optical parameters of lamellar gratings were investigated, and the TIR gratings in a lamellar configuration were structurally and optically optimized in terms of the signal-to-noise ratio (S/N) and a statistical analysis of variance (ANOVA), with the refractive index, grating period, filling factor, and grating depth as control factors and the efficiency estimated by RCWA as a noise factor. For more accurate robustness, a two-step optimization process was used for each purpose. For TIR gratings designed to perform similarly for TE and TM incident polarization, the -1st-order efficiencies were estimated to be up to 92.0% and 88.5% for TE and TM polarization, respectively. For TIR gratings designed to achieve polarization-sensitive performance as polarizer filters, the -1st-order diffraction efficiencies for TE and TM polarization were estimated to be up to 95.5% and 2.7%, respectively. These results confirm the feasibility of the Taguchi method as an optimization approach for designing optical devices.
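As a sketch of the Taguchi machinery the paper applies, the snippet below computes the larger-is-better S/N ratio and factor main effects over an orthogonal array. The L4 array, factor assignment, and efficiency values are illustrative, not taken from the paper.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-is-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Illustrative L4(2^3) orthogonal array: rows = trials, columns = factor levels
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# Hypothetical diffraction efficiencies (%) from repeated evaluations per trial
efficiency = [[90.1, 89.7], [85.2, 86.0], [92.3, 91.8], [80.5, 81.1]]
sn = np.array([sn_larger_is_better(y) for y in efficiency])

# Main effect of each factor: mean S/N at level 1 minus mean S/N at level 0
effects = [sn[L4[:, j] == 1].mean() - sn[L4[:, j] == 0].mean() for j in range(3)]
print(np.round(effects, 2))
```

The factor level with the higher mean S/N would be retained in each optimization step; ANOVA on the same S/N values apportions the variance among factors.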

  15. Global sensitivity analysis of the dispersion maximum position of the PCFs with circular holes

    NASA Astrophysics Data System (ADS)

    Guryev, Igor; Sukhoivanov, Igor; Andrade Lucio, Jose A.; Vargas Rodrigues, Everardo; Shulika, Oleksiy; Mata Chavez, Ruth I.; Baca Montero, Eric R.

    2015-08-01

Microstructured fibers have recently become popular due to their numerous applications for fiber lasers,1 supercontinuum generation2 and pulse reshaping.3 One of the most important properties of such fibers is their dispersion. Fine tuning of the dispersion (i.e., dispersion management) is one of the crucial capabilities of photonic crystal fibers (PCFs),4 which are a particular case of microstructured fibers. In recent years, various PCF designs possessing specially designed dispersion shapes have been presented.5-7 However, no universal technique exists that would allow tuning the PCF dispersion without resorting to optimization methods. In this work, we investigate the sensitivity of the PCF dispersion with respect to variations of its basic parameters. This knowledge allows fine-tuning the position of a local maximum of the PCF dispersion while keeping other properties unchanged. The work is organized as follows. The first section discusses a dispersion computation method suitable for global sensitivity analysis. The second section presents the global sensitivity analysis for this specific case; we also discuss there the possible selection of the variable parameters.

  16. Influence analysis on crossover design experiment in bioequivalence studies.

    PubMed

    Huang, Yufen; Ke, Bo-Shiang

    2014-01-01

Crossover designs are commonly used in bioequivalence studies. However, the results can be affected by outlying observations, which may lead to a wrong decision on bioequivalence. Therefore, it is essential to investigate the influence of aberrant observations. Chow and Tse (1990) discussed this issue by considering methods based on the likelihood distance and the estimates distance. Perturbation theory provides a useful tool for sensitivity analysis on statistical models. Hence, in this paper, we develop influence functions via the perturbation scheme proposed by Hampel as an alternative approach to influence analysis for a crossover design experiment. Moreover, comparisons between the proposed approach and the method of Chow and Tse are investigated. Two real data examples are provided to illustrate the results of these approaches. Our proposed influence functions show excellent performance in identifying outlying/influential observations and are suitable for the small-sample crossover designs commonly used in bioequivalence studies.

  17. Neural networks in structural analysis and design - An overview

    NASA Technical Reports Server (NTRS)

    Hajela, P.; Berke, L.

    1992-01-01

    The present paper provides an overview of the state-of-the-art in the application of neural networks in problems of structural analysis and design, including a survey of published applications in structural engineering. Such applications have included, among others, the use of neural networks in modeling nonlinear analysis of structures, as a rapid reanalysis capability in optimal design, and in developing problem parameter sensitivity of optimal solutions for use in multilevel decomposition based design. While most of the applications reported in the literature have been restricted to the use of the multilayer perceptron architecture and minor variations thereof, other network architectures have also been successfully explored, including the ART network, the counterpropagation network and the Hopfield-Tank model.

  18. GALACSI system design and analysis

    NASA Astrophysics Data System (ADS)

    Ströbele, S.; La Penna, P.; Arsenault, R.; Conzelmann, R. D.; Delabre, B.; Duchateau, M.; Dorn, R.; Fedrigo, E.; Hubin, N.; Quentin, J.; Jolley, P.; Kiekebusch, M.; Kirchbauer, J. P.; Klein, B.; Kolb, J.; Kuntschner, H.; Le Louarn, M.; Lizon, J. L.; Madec, P.-Y.; Pettazzi, L.; Soenke, C.; Tordo, S.; Vernet, J.; Muradore, R.

    2012-07-01

GALACSI is one of the Adaptive Optics (AO) systems of the ESO Adaptive Optics Facility (AOF). It will use the VLT 4-Laser Guide Star system, high-speed and low-noise wavefront sensor cameras (<1 e-, 1000 Hz), the Deformable Secondary Mirror (DSM) and the SPARTA real-time computer to sharpen images and enhance the faint-object detectability of the MUSE instrument. MUSE is an integral field spectrograph working at wavelengths from 465 nm to 930 nm. GALACSI implements two different AO modes. In Wide Field Mode (WFM) it will perform ground-layer AO correction and enhance the collected energy in a 0.2" by 0.2" pixel by a factor of 2 at 750 nm over a field of view (FoV) of 1' by 1'. The four LGSs and one tip-tilt reference star (R-mag <17.5) are located outside the MUSE FoV. Key requirements are to provide this performance together with very good image stability over a 1-hour integration time. In Narrow Field Mode (NFM), laser tomography AO will be used to reconstruct and correct the turbulence for the center field using the four LGSs at 15" off axis and the near-infrared (NIR) light of one on-axis reference star for tip-tilt and focus sensing. In NFM, GALACSI will provide a moderate Strehl ratio of 5% (goal 10%) at 650 nm. The NFM poses several challenges, and many subsystems will be pushed to their limits. The opto-mechanical design and error budgets of GALACSI are described here.

  19. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number

  20. Sensitivity analysis and model reduction of nonlinear differential-algebraic systems. Final progress report

    SciTech Connect

    Petzold, L.R.; Rosen, J.B.

    1997-12-30

    Differential-algebraic equations arise in a wide variety of engineering and scientific problems. Relatively little work has been done regarding sensitivity analysis and model reduction for this class of problems. Efficient methods for sensitivity analysis are required in model development and as an intermediate step in design optimization of engineering processes. Reduced order models are needed for modelling complex physical phenomena like turbulent reacting flows, where it is not feasible to use a fully-detailed model. The objective of this work has been to develop numerical methods and software for sensitivity analysis and model reduction of nonlinear differential-algebraic systems, including large-scale systems. In collaboration with Peter Brown and Alan Hindmarsh of LLNL, the authors developed an algorithm for finding consistent initial conditions for several widely occurring classes of differential-algebraic equations (DAEs). The new algorithm is much more robust than the previous algorithm. It is also very easy to use, having been designed to require almost no information about the differential equation, Jacobian matrix, etc. in addition to what is already needed to take the subsequent time steps. The new algorithm has been implemented in a version of the software for solution of large-scale DAEs, DASPK, which has been made available on the internet. The new methods and software have been used to solve a Tokamak edge plasma problem at LLNL which could not be solved with the previous methods and software because of difficulties in finding consistent initial conditions. The capability of finding consistent initial values is also needed for the sensitivity and optimization efforts described in this paper.
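Consistent initialization for a semi-explicit index-1 DAE, y' = f(y, z), 0 = g(y, z), amounts to solving the algebraic constraint for z(0) given y(0) before the first time step. The toy sketch below does this with a hand-rolled Newton iteration on an invented constraint; it illustrates the idea only and is not the DASPK algorithm.

```python
def newton_scalar(g, dg, z, tol=1e-12, max_iter=50):
    """Newton iteration for a scalar algebraic equation g(z) = 0."""
    for _ in range(max_iter):
        step = g(z) / dg(z)
        z -= step
        if abs(step) < tol:
            break
    return z

# Toy semi-explicit index-1 DAE:   y' = -y + z,   0 = z + z**3 - y
# A consistent initial condition must satisfy the algebraic constraint at t = 0,
# so given y(0) we solve the constraint for z(0) before taking any time step.
y0 = 2.0
z0 = newton_scalar(lambda z: z + z**3 - y0,
                   lambda z: 1.0 + 3.0 * z**2,
                   z=0.0)
print(z0)  # the consistent value satisfies z0 + z0**3 == y0
```

In production DAE solvers the same idea is applied to the full nonlinear system, with the Jacobian reused from the time-stepping machinery.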

  1. Neutral density filters with Risley prisms: analysis and design.

    PubMed

    Duma, Virgil-Florin; Nicolov, Mirela

    2009-05-10

We present the analysis and design of optical attenuators with double-prism neutral density filters. A comparative study is performed on three possible device configurations; only two have been presented in the literature, and without their design calculus. The characteristic parameters of this optical attenuator with Risley translating prisms are defined for each of the three setups, and their analytical expressions are derived: the adjustment scale (attenuation range) and interval, the minimum transmission coefficient, and the sensitivity. The setups are compared to select the optimal device, and from this study the best solution for double-prism neutral density filters, from both a mechanical and an optical point of view, is determined to be two identical, symmetrically movable, non-contacting prisms. The design calculus of this optimal device is developed in its essential steps. The parameters of the prisms, particularly their angles, are studied to improve the design, and we demonstrate the maximum attenuation range that this type of attenuator can provide.

  2. Worst case estimation of homology design by convex analysis

    NASA Technical Reports Server (NTRS)

    Yoshikawa, N.; Elishakoff, Isaac; Nakagiri, S.

    1998-01-01

The methodology of homology design is investigated for the optimum design of advanced structures for which the achievement of delicate tasks with the aid of an active control system is demanded. The proposed formulation of homology design, based on finite element sensitivity analysis, necessarily requires the specification of external loadings. A formulation to evaluate the worst case for homology design caused by uncertain fluctuation of the loadings is presented by means of the convex model of uncertainty, in which uncertainty variables are assigned to discretized nodal forces and are confined within a conceivable convex hull given as a hyperellipse. The worst case of the distortion from the objective homologous deformation is estimated by the Lagrange multiplier method, searching for the point that maximizes the error index on the boundary of the convex hull. The validity of the proposed method is demonstrated in a numerical example using an eleven-bar truss structure.

  3. Ecological sensitivity analysis in Fengshun County based on GIS

    NASA Astrophysics Data System (ADS)

    Zhou, Xia; Zhang, Hong-ou

    2008-10-01

Ecological sensitivity in Fengshun County was analyzed using GIS technology. Several factors were considered, including sensitivity to acid rain, soil erosion, flood and geological disaster; a nature reserve factor and an economic indicator were also taken into account. After the single-factor sensitivity assessments, the overall ecological sensitivity was computed with GIS software. Ranging from low to extreme, the ecological sensitivity was divided into five levels: not sensitive, low sensitive, moderately sensitive, highly sensitive and extremely sensitive. The results showed high sensitivity in south-east Fengshun. Based on the sensitivity levels and environmental characteristics, ecological function zones were also delineated, comprising three major ecological function zones and ten sub-zones: the hill eco-environmental function zone, the platform and plain ecological construction zone, and the ecological restoration and control zone. Based on these results, strategies for environmental protection in each zone were put forward, providing a basis for urban planning and environmental protection planning in Fengshun.
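A minimal sketch of the GIS overlay step, assuming a simple weighted-sum aggregation of single-factor sensitivity rasters; the weights, reclassification thresholds, and random raster data are invented for illustration (the paper does not give them).

```python
import numpy as np

# Hypothetical 4-factor overlay on a small raster: each layer holds per-cell
# sensitivity scores (1 = not sensitive ... 5 = extremely sensitive).
rng = np.random.default_rng(1)
acid_rain, soil_erosion, flood, geo_hazard = rng.integers(1, 6, size=(4, 50, 50))

# Illustrative weights (sum to 1); the study's actual weights are not given.
weights = {"acid_rain": 0.2, "soil_erosion": 0.3, "flood": 0.25, "geo_hazard": 0.25}
overall = (weights["acid_rain"] * acid_rain
           + weights["soil_erosion"] * soil_erosion
           + weights["flood"] * flood
           + weights["geo_hazard"] * geo_hazard)

# Reclassify the continuous score back into the five sensitivity levels
levels = np.digitize(overall, bins=[1.8, 2.6, 3.4, 4.2]) + 1  # 1..5
print(levels.shape, levels.min(), levels.max())
```

In a real GIS workflow the same arithmetic runs on georeferenced rasters; the array form above is the underlying computation.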

  4. Spatial risk assessment for critical network infrastructure using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Möderl, Michael; Rauch, Wolfgang

    2011-12-01

The presented spatial risk assessment method allows for managing critical network infrastructure in urban areas under abnormal and future conditions caused, e.g., by terrorist attacks, infrastructure deterioration or climate change. For the spatial risk assessment, vulnerability maps for critical network infrastructure are merged with hazard maps for an interfering process. Vulnerability maps are generated using a spatial sensitivity analysis of network transport models to evaluate the performance decrease under the investigated threat scenarios. Thereby, parameters are varied according to the specific impact of a particular threat scenario. Hazard maps are generated with a geographical information system using raster data of the same threat scenario, derived from structured interviews and cluster analysis of past events. The application of the spatial risk assessment is exemplified by means of a case study for a water supply system, but the principal concept is likewise applicable to other critical network infrastructure. The aim of the approach is to help decision makers in choosing zones for preventive measures.

  5. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper evaluates the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries, such as the USA, UK and Australia; to the best of our knowledge, however, this is its first application in the Indian context. Applying DEA models, we calculate technical, pure technical and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance and teaching performance are assessed separately using sensitivity analysis.
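In the single-input, single-output special case, the constant-returns-to-scale (CCR) efficiency that DEA computes reduces to each unit's output/input ratio normalized by the best ratio; the sketch below uses that special case with invented department data (the general multi-input/multi-output case requires solving one linear program per unit).

```python
# Hypothetical single-input (faculty count), single-output (publications) data
# for six departments. With one input and one output, the CCR efficiency is
# each unit's output/input ratio divided by the best observed ratio.
departments = ["A", "B", "C", "D", "E", "F"]
faculty = [20, 35, 15, 40, 25, 30]
papers  = [60, 70, 60, 80, 90, 45]

ratios = [y / x for x, y in zip(faculty, papers)]
best = max(ratios)
efficiency = {d: r / best for d, r in zip(departments, ratios)}
for d, e in efficiency.items():
    print(f"{d}: {e:.3f}")
```

Units with efficiency 1.0 lie on the frontier; the others' frontier projections (scaling inputs by the efficiency score) correspond to the input projections mentioned in the abstract.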

  6. Robust and sensitive video motion detection for sleep analysis.

    PubMed

    Heinrich, Adrienne; Geng, Di; Znamenskiy, Dmitry; Vink, Jelte Peter; de Haan, Gerard

    2014-05-01

In this paper, we propose a camera-based system combining video motion detection, motion estimation, and texture analysis with machine learning for sleep analysis. The system is robust to time-varying illumination conditions while using standard camera and infrared illumination hardware. We tested the system for periodic limb movement (PLM) detection during sleep, using EMG signals as a reference. We evaluated the motion detection performance both per frame and with respect to the movement event classification relevant for PLM detection. The Matthews correlation coefficient improved by a factor of 2 compared to a state-of-the-art motion detection method, while sensitivity and specificity increased by 45% and 15%, respectively. Movement event classification improved by factors of 6 and 3 in constant and highly varying lighting conditions, respectively. On 11 PLM patient test sequences, the proposed system achieved a 100% accurate PLM index (PLMI) score, with a slight temporal misalignment of the starting time (<1 s) for one movement. We conclude that camera-based PLM detection during sleep is feasible and can give an indication of the PLMI score.

  7. Sensitivity and uncertainty analysis of a regulatory risk model

    SciTech Connect

    Kumar, A.; Manocha, A.; Shenoy, T.

    1999-07-01

Health risk assessments (HRAs) are increasingly being used in the environmental decision-making process, from problem identification to the final clean-up activities. A key issue concerning the results of these risk assessments is the uncertainty associated with them, which past studies have attributed to highly conservative estimates of risk assessment parameters. The primary purpose of this study was to investigate error propagation through a risk model. A hypothetical glass plant situated in the state of California was studied. Air emissions from this plant were modeled using the ISCST2 model, and the risk was calculated using the ACE2588 model; downwash was also considered during the concentration calculations. A sensitivity analysis of the risk computations identified five parameters--mixing depth for human consumption, deposition velocity, weathering constant, interception factor for vine crops, and average leaf-vegetable consumption--which had the greatest impact on the calculated risk. A Monte Carlo analysis using these five parameters resulted in a distribution with a smaller percentage deviation than the percentage standard deviation of the input parameters.
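A minimal sketch of how Monte Carlo error propagation through a risk model can be set up; the multiplicative model form, lognormal distributions, parameters, and baseline risk below are invented for illustration and do not reproduce the ISCST2/ACE2588 formulations.

```python
import numpy as np

# Hypothetical multiplicative risk model: the five influential parameters act
# as lognormal factors on a baseline risk (the real models are far richer).
rng = np.random.default_rng(42)
n = 100_000
mixing_depth = rng.lognormal(mean=0.0, sigma=0.2, size=n)
deposition_v = rng.lognormal(mean=0.0, sigma=0.3, size=n)
weathering   = rng.lognormal(mean=0.0, sigma=0.1, size=n)
interception = rng.lognormal(mean=0.0, sigma=0.25, size=n)
consumption  = rng.lognormal(mean=0.0, sigma=0.15, size=n)

baseline_risk = 1e-6
risk = baseline_risk * deposition_v * weathering * interception * consumption / mixing_depth

# Relative spread of the output versus a representative input
print(f"input CV:  {deposition_v.std() / deposition_v.mean():.3f}")
print(f"output CV: {risk.std() / risk.mean():.3f}")
```

Comparing input and output coefficients of variation is the simplest way to see whether uncertainty amplifies or attenuates through the model, which is the comparison the abstract reports.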

  8. Structural analysis at aircraft conceptual design stage

    NASA Astrophysics Data System (ADS)

    Mansouri, Reza

In the past 50 years, computers have augmented human efforts at a tremendous pace, and the aircraft industry is no exception. It is more dependent on computing than ever because of the high level of complexity involved and the increasing need for excellence to survive a highly competitive marketplace. Designers choose computers to perform almost every analysis task. But while doing so, effective, accurate and easy-to-use classical analytical methods are often forgotten, even though they can be very useful, especially in the early phases of aircraft design, where concept generation and evaluation demand physical visibility of design parameters to support decisions [39, 2004]. Structural analysis methods have been used since the earliest civilizations. Centuries before computers were invented, the pyramids were designed and constructed by the Egyptians around 2000 B.C., the Parthenon was built by the Greeks around 440 B.C., and Dujiangyan was built by the Chinese. Persepolis, the Hagia Sophia, the Taj Mahal and the Eiffel Tower are only a few more examples of historical buildings, bridges and monuments constructed before any advances were made in computer-aided engineering. The aircraft industry is no exception either. In the first half of the 20th century, engineers used classical methods to design civil transport aircraft such as the Ford Tri-Motor (1926), Lockheed Vega (1927), Lockheed 9 Orion (1931), Douglas DC-3 (1935), Douglas DC-4/C-54 Skymaster (1938), Boeing 307 (1938) and Boeing 314 Clipper (1939), which became airborne without difficulty. Thus, while advanced numerical methods such as finite element analysis are among the most effective structural analysis methods, classical structural analysis methods can be just as useful, especially during the early phase of fixed-wing aircraft design, where major decisions are made and concept generation and evaluation demand physical visibility of design parameters to make decisions

  9. The design of charge-sensitive preamplifier with differential JFET input

    NASA Astrophysics Data System (ADS)

    Xiao, Hai-jun; Zhang, Liu-qiang; Xiao, Sha-li; Li, Xian-cang; Huang, Zhen-hua

    2013-09-01

In the highly sensitive detection field, charge-sensitive amplifiers are widely used as detector preamplifiers; however, the high voltage applied to these detectors (such as CZT nuclear detectors) often introduces serious noise, which may degrade the sensitivity of the detector. Although a traditional passive filter circuit can suppress power-supply noise, if the power-supply accuracy is not high enough the suppression is incomplete, and the residual noise may still affect the performance of the final system. To meet the needs of nuclear detection and photoelectric detection, a differential-JFET charge-sensitive preamplifier is proposed in this paper, which eliminates the power-supply noise and the Johnson noise of the bias resistance. First, a theoretical analysis of the traditional JFET circuit is carried out and the circuit is simulated with the ORCAD software, showing that power-supply noise affects the preamplifier. Next, the proposed circuit is simulated with ORCAD. Finally, the fabricated circuit board is tested with an avalanche photodiode (APD). The results show that the charge-sensitive preamplifier with differential JFET input significantly suppresses the power-supply noise and the Johnson noise of the resistance (at both low and high frequencies) and achieves high sensitivity.

  10. Grid and design variables sensitivity analyses for NACA four-digit wing-sections

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, Ideen; Smith, Robert E.; Tiwari, Surendra N.

    1993-01-01

Two distinct parameterization procedures are developed for investigating the grid sensitivity with respect to design parameters of a wing-section example. The first procedure is based on traditional (physical) relations defining NACA four-digit wing-sections. The second advocates a novel (geometrical) parameterization using spline functions such as NURBS (Non-Uniform Rational B-Splines) for defining the wing-section geometry. An interactive algebraic grid generation technique, known as Hermite cubic interpolation, is employed to generate C-type grids around wing-sections. The grid sensitivity of the domain with respect to design and grid parameters has been obtained by direct differentiation of the grid equations. A hybrid approach is proposed for more geometrically complex configurations. A comparison of the sensitivity coefficients with those obtained using a finite-difference approach has been made to verify the feasibility of the approach. The aerodynamic sensitivity coefficients are obtained using the compressible two-dimensional thin-layer Navier-Stokes equations.
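The direct-differentiation-versus-finite-difference check the abstract describes can be illustrated on the NACA four-digit thickness distribution, whose dependence on the thickness variable t is linear; this is a toy surrogate for the full grid equations, not the paper's implementation.

```python
import numpy as np

# NACA four-digit half-thickness distribution: y(x) depends linearly on the
# thickness design variable t (coefficients are the standard polynomial).
def surface(x, t):
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                      + 0.2843 * x**3 - 0.1015 * x**4)

def dsurface_dt(x, t):
    # Direct differentiation: the mapping is linear in t
    return 5.0 * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x**2
                  + 0.2843 * x**3 - 0.1015 * x**4)

x = np.linspace(0.0, 1.0, 11)   # chordwise grid points
t, h = 0.12, 1e-6
fd = (surface(x, t + h) - surface(x, t - h)) / (2.0 * h)  # central difference
print(np.max(np.abs(fd - dsurface_dt(x, t))))  # agreement verifies the derivative
```

The same comparison, applied to every grid-point coordinate, is the feasibility check described in the abstract; for nonlinear parameterizations the finite-difference result also carries a truncation error of order h squared.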

  11. A fully multiple-criteria implementation of the Sobol' method for parameter sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Rosolem, Rafael; Gupta, Hoshin V.; Shuttleworth, W. James; Zeng, Xubin; de Gonçalves, Luis Gustavo Gonçalves

    2012-04-01

    We present a novel rank-based fully multiple-criteria implementation of the Sobol' variance-based sensitivity analysis approach that implements an objective strategy to evaluate parameter sensitivity when model evaluation involves several metrics of performance. The method is superior to single-criterion approaches while avoiding the subjectivity observed in "pseudo" multiple-criteria methods. Further, it contributes to our understanding of structural characteristics of a model and simplifies parameter estimation by identifying insensitive parameters that can be fixed to default values during model calibration studies. We illustrate the approach by applying it to the problem of identifying the most influential parameters in the Simple Biosphere 3 (SiB3) model using a network of flux towers in Brazil. We find 27-31 (out of 42) parameters to be influential, most (˜78%) of which are primarily associated with physiology, soil, and carbon properties, and that uncertainties in the physiological properties of the model contribute most to total model uncertainty in regard to energy and carbon fluxes. We also find that the second most important model component contributing to the total output uncertainty varies according to the flux analyzed; whereas morphological properties play an important role in sensible heat flux, soil properties are important for latent heat flux, and carbon properties (mainly associated with the soil respiration submodel) are important for carbon flux (as expected). These distinct sensitivities emphasize the need to account for the multioutput nature of land surface models during sensitivity analysis and parameter estimation. Applied to other similar models, our approach can help to establish which soil-plant-atmosphere processes matter most in land surface models of Amazonia and thereby aid in the design of field campaigns to characterize and measure the associated parameters. The approach can also be used with other sensitivity analysis
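The Sobol' variance-based machinery the paper builds on can be sketched with the standard pick-freeze estimator of first-order indices. Here the Ishigami benchmark function stands in for a land surface model, and the uniform sampling on (-pi, pi) is specific to that benchmark; in practice each parameter has its own range and the analysis is repeated per output metric.

```python
import numpy as np

def sobol_first_order(f, n_dims, n_samples, rng=None):
    """First-order Sobol' indices via the Saltelli pick-freeze estimator."""
    rng = np.random.default_rng(rng)
    A = rng.uniform(-np.pi, np.pi, size=(n_samples, n_dims))
    B = rng.uniform(-np.pi, np.pi, size=(n_samples, n_dims))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_dims)
    for i in range(n_dims):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze all columns except i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Ishigami function: a standard benchmark with known first-order indices
def ishigami(X):
    return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1])**2
            + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))

S = sobol_first_order(ishigami, n_dims=3, n_samples=1 << 15, rng=7)
print(np.round(S, 3))  # analytic values are approx [0.314, 0.442, 0.0]
```

Running the estimator once per performance metric, then ranking parameters across metrics, is the multiple-criteria extension the paper formalizes.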

  12. On the sensitivity analysis of separated-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, A.; Auken, E.; Fiandaca, G.

    2013-12-01

In this study we investigate the sensitivity analysis of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of separated-loop with conventional coincident-loop MRS data. MRS has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content. The method is based on the physical principle of NMR, whereby a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop (typically 25-100 m in side length/diameter) deployed on the surface, which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called free induction decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; the product of current amplitude and transmitting pulse length), during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e., a non-zero interval between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record the surface NMR signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one current research activity is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the sensitivity kernels of different separated-loop MRS soundings are studied and compared with

  13. Sensitivity analysis for hydrology and pesticide supply towards the river in SWAT

    NASA Astrophysics Data System (ADS)

    Holvoet, K.; van Griensven, A.; Seuntjens, P.; Vanrolleghem, P. A.

The dynamic behaviour of pesticides in river systems strongly depends on varying climatological conditions and agricultural management practices. To describe this behaviour at the river-basin scale, integrated hydrological and water quality models are needed. A crucial step in understanding the various processes determining pesticide fate is to perform a sensitivity analysis. A sensitivity analysis for hydrology and pesticide supply in SWAT (Soil and Water Assessment Tool) provides useful support for the development of a reliable hydrological model and gives insight into which parameters are most sensitive with respect to pesticide supply towards rivers. The study was performed on the Nil catchment in Belgium, using an LH-OAT sensitivity analysis. The LH-OAT method combines the One-factor-At-a-Time (OAT) design and Latin Hypercube (LH) sampling by taking the Latin Hypercube samples as initial points for an OAT design. By means of the LH-OAT sensitivity analysis, the dominant hydrological parameters were determined and the number of model parameters was reduced. The dominant hydrological parameters were the curve number (CN2), the surface runoff lag (surlag), the recharge to the deep aquifer (rchrg_dp) and the threshold depth of water in the shallow aquifer (GWQMN). Next, the selected parameters were estimated by manual calibration, improving the Nash-Sutcliffe coefficient of efficiency from an initial value of -22.4 to +0.53. In the second part, sensitivity analyses were performed to provide insight into which parameters or model inputs contribute most to the variance in pesticide output. The results of this study show that for the Nil catchment, hydrologic parameters are dominant in controlling pesticide predictions. The other parameter that affects pesticide concentrations in surface water is 'apfp_pest', whose meaning was changed to a parameter that controls direct losses to the river system (e.g., through the clean up of spray
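The LH-OAT procedure described above can be sketched as follows: Latin hypercube base points, then a one-factor-at-a-time perturbation around each point, averaging the relative effects. The surrogate model, parameter ranges, and perturbation fraction are hypothetical, and the relative-effect formula is a simplified variant of the published one.

```python
import numpy as np

def lh_oat(f, bounds, n_points=20, frac=0.05, rng=None):
    """LH-OAT: OAT perturbations around Latin-hypercube base points.
    Returns the mean absolute relative effect per parameter."""
    rng = np.random.default_rng(rng)
    n_dims = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    # Latin hypercube base points: one stratum per point in each dimension
    u = (rng.permuted(np.tile(np.arange(n_points), (n_dims, 1)), axis=1).T
         + rng.random((n_points, n_dims))) / n_points
    X = lo + u * (hi - lo)
    effects = np.zeros(n_dims)
    for x in X:
        y0 = f(x)
        for i in range(n_dims):
            xp = x.copy()
            xp[i] *= 1.0 + frac                  # one-factor-at-a-time step
            effects[i] += abs((f(xp) - y0) / (frac * y0))
    return effects / n_points

# Hypothetical surrogate for a model response: parameter 1 dominates by design
def model(p):
    return 1.0 + 5.0 * p[0] + 0.5 * p[1] + 0.01 * p[2]

bounds = [(0.1, 1.0)] * 3
sens = lh_oat(model, bounds, rng=3)
print(np.round(sens, 3))
```

Ranking the entries of `sens` is how LH-OAT screens out insensitive parameters before calibration, as done for the SWAT parameter set in the study.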

  14. Time course analysis of baroreflex sensitivity during postural stress.

    PubMed

    Westerhof, Berend E; Gisolf, Janneke; Karemaker, John M; Wesseling, Karel H; Secher, Niels H; van Lieshout, Johannes J

    2006-12-01

    Postural stress requires immediate autonomic nervous action to maintain blood pressure. We determined time-domain cardiac baroreflex sensitivity (BRS) and the time delay (tau) between systolic blood pressure and interbeat interval variations during stepwise changes in the angle of the vertical body axis (alpha). The assumption was that with increasing postural stress, BRS becomes attenuated, accompanied by a shift in tau toward higher values. In 10 healthy young volunteers, alpha included 20 degrees head-down tilt (-20 degrees), supine (0 degree), 30 and 70 degrees head-up tilt (30 degrees, 70 degrees), and free standing (90 degrees). Noninvasive blood pressures were analyzed over 6-min periods before and after each change in alpha. The BRS was determined by frequency-domain analysis and with xBRS, a cross-correlation time-domain method. On average, between 28 (-20 degrees) and 45 (90 degrees) xBRS estimates per minute became available. Following a change in alpha, xBRS reached a different mean level in the first minute in 78% of the cases and in 93% after 6 min. With increasing alpha, BRS decreased: BRS = -10.1·sin(alpha) + 18.7 (r(2) = 0.99), with tight correlation between xBRS and cross-spectral gain (r(2) approximately 0.97). Delay tau shifted toward higher values. In conclusion, in healthy subjects the sensitivity of the cardiac baroreflex obtained in the time domain decreases linearly with sin(alpha), and baroreflex adaptation to a physiological perturbation such as postural stress starts rapidly. The decrease in BRS and the reduction in short tau values may result from reduced vagal activity with increasing alpha.
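
    As a rough illustration of a cross-correlation time-domain BRS estimate, the sketch below scans candidate beat delays within a sliding window and keeps the delay with the highest SBP-IBI correlation, reporting the regression slope (ms/mmHg). The window length, delay range, and synthetic beat series are assumptions for illustration, not the exact parameters of the published xBRS method.

```python
import numpy as np

def xbrs_sketch(sbp, ibi, win=10, taus=(0, 1, 2, 3, 4, 5)):
    """For each window, shift IBI by tau beats, keep the delay with the
    highest SBP-IBI correlation, and report the regression slope as BRS."""
    out = []
    for start in range(0, len(sbp) - win - max(taus)):
        s = np.asarray(sbp[start:start + win], float)
        best = None
        for tau in taus:
            seg = np.asarray(ibi[start + tau:start + tau + win], float)
            r = np.corrcoef(s, seg)[0, 1]
            if best is None or r > best[0]:
                slope = np.polyfit(s, seg, 1)[0]   # ms per mmHg
                best = (r, tau, slope)
        out.append(best)
    return out  # list of (correlation, delay tau, BRS slope)

# synthetic beats: IBI follows SBP with a 2-beat delay and 12 ms/mmHg gain
rng = np.random.default_rng(0)
sbp = 120 + 5 * np.sin(np.arange(60) / 3) + rng.standard_normal(60)
ibi = np.empty(60)
ibi[:2] = 800
ibi[2:] = 800 + 12 * (sbp[:-2] - 120)
res = xbrs_sketch(sbp, ibi)
```

    On this synthetic series the estimator recovers the built-in 2-beat delay and the 12 ms/mmHg gain in every window, which is the behavior the tau and BRS statistics in the study quantify.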

  15. Radar sensitivity and resolution in presence of range sidelobe reducing networks designed using linear programming

    NASA Astrophysics Data System (ADS)

    Bicocchi, R.; Melacci, P. T.; Bucciarelli, T.

    1984-06-01

    The design of a sidelobe-reduction network for coherent high-resolution radars using Barker codes, and the results of an analytical investigation of its performance, are presented and illustrated graphically. Compression is achieved by a matched filter followed by a weighting network designed using linear programming, keeping the implementation minimal so that it can adapt to different operating modes. The network is found to give significant increases in sensitivity and resolution while limiting mismatching losses to about 0.2 dB. A typical digital implementation requires only 66 devices for a 10-bit input and a sampling period of 150 nsec.

  16. Sensitivity analysis of simulated SOA loadings using a variance-based statistical approach: SENSITIVITY ANALYSIS OF SOA

    SciTech Connect

    Shrivastava, Manish; Zhao, Chun; Easter, Richard C.; Qian, Yun; Zelenyuk, Alla; Fast, Jerome D.; Liu, Ying; Zhang, Qi; Guenther, Alex

    2016-04-08

    We investigate the sensitivity of secondary organic aerosol (SOA) loadings simulated by a regional chemical transport model to 7 selected tunable model parameters: 4 involving emissions of anthropogenic and biogenic volatile organic compounds, anthropogenic semi-volatile and intermediate volatility organics (SIVOCs), and NOx; 2 involving dry deposition of SOA precursor gases; and 1 involving particle-phase transformation of SOA to low volatility. We adopt a quasi-Monte Carlo sampling approach to effectively sample the high-dimensional parameter space, and perform a 250-member ensemble of simulations using a regional model, accounting for some of the latest advances in SOA treatments based on our recent work. We then conduct a variance-based sensitivity analysis using the generalized linear model method to study the responses of simulated SOA loadings to the tunable parameters. Analysis of SOA variance from all 250 simulations shows that the volatility transformation parameter, which controls whether particle-phase transformation of SOA from semi-volatile to non-volatile is on or off, is the dominant contributor to the variance of simulated surface-level daytime SOA (65% domain-average contribution). We also split the simulations into 2 subsets of 125 each, depending on whether the volatility transformation is turned on or off. For each subset, the SOA variances are dominated by the parameters involving biogenic VOC and anthropogenic SIVOC emissions. Furthermore, biogenic VOC emissions have a larger contribution to SOA variance when the SOA transformation to non-volatile is on, while anthropogenic SIVOC emissions have a larger contribution when the transformation is off. NOx contributes less than 4.3% to SOA variance, and this low contribution is mainly attributed to the dominance of intermediate to high NOx conditions throughout the simulated domain. The two parameters related to dry deposition of SOA precursor gases also have very low contributions to SOA variance.
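
    The variance-based decomposition used in such studies can be illustrated with a simple given-data estimator of first-order sensitivity indices, S_j = Var[E(y|x_j)] / Var(y), computed by binning an ensemble on one parameter at a time. The synthetic ensemble, response function, and bin count below are illustrative assumptions, not the chemical transport model or the paper's generalized-linear-model estimator.

```python
import numpy as np

def first_order_index(xj, y, n_bins=20):
    """Estimate S_j = Var[E(y|x_j)] / Var(y) by binning the ensemble on
    parameter x_j: the variance of the bin-conditional means, weighted by
    bin occupancy, approximates the first-order variance contribution."""
    y = np.asarray(y, float)
    edges = np.quantile(xj, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, xj, side="right") - 1, 0, n_bins - 1)
    var_cond = 0.0
    for b in range(n_bins):
        m = idx == b
        if m.any():
            var_cond += m.mean() * (y[m].mean() - y.mean()) ** 2
    return var_cond / y.var()

# synthetic ensemble: 2000 members, 3 tunable parameters on [0, 1]
rng = np.random.default_rng(1)
x = rng.random((2000, 3))
y = 4 * x[:, 0] + 2 * x[:, 1] + 0.1 * rng.standard_normal(2000)
s = [first_order_index(x[:, j], y) for j in range(3)]
```

    Here the first parameter dominates the output variance and the third contributes essentially nothing, mirroring how the volatility transformation parameter dominated the simulated SOA variance.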

  17. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters, and present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single-engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant-specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold-down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single-engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).

  18. Microstructure design of nanoporous TiO2 photoelectrodes for dye-sensitized solar cell modules.

    PubMed

    Hu, Linhua; Dai, Songyuan; Weng, Jian; Xiao, Shangfeng; Sui, Yifeng; Huang, Yang; Chen, Shuanghong; Kong, Fantai; Pan, Xu; Liang, Linyun; Wang, Kongjia

    2007-01-18

    The optimization of dye-sensitized solar cells, especially the design of the nanoporous TiO2 film microstructure, is a pressing problem for high efficiency and future commercial applications. However, up to now, little attention has been focused on the design of the nanoporous TiO2 microstructure for high-efficiency dye-sensitized solar cell modules. The optimization and design of the TiO2 photoelectrode microstructure are discussed in this paper. TiO2 photoelectrodes with three different layers, comprising small-pore-size films, larger-pore-size films, and light-scattering particles on the conducting glass at the desired thickness, were designed and investigated. Moreover, the photovoltaic properties showed that the porosity, pore size distribution, and BET surface area of each layer have a dramatic influence on the short-circuit current, open-circuit voltage, and fill factor of the modules. The optimization and design of the TiO2 photoelectrode microstructure contribute to the high efficiency of DSC modules. A photoelectric conversion efficiency of around 6% was achieved by our group with 15 x 20 cm2 modules under illumination of simulated AM1.5 sunlight (100 mW/cm2), and 40 x 60 cm2 panels with the same performance were tested outdoors.

  19. The Design and Optimization of a Highly Sensitive and Overload-Resistant Piezoresistive Pressure Sensor.

    PubMed

    Meng, Xiawei; Zhao, Yulong

    2016-03-09

    A piezoresistive pressure sensor with a beam-membrane-dual-island structure is developed for micro-pressure monitoring in the field of aviation, which requires great sensitivity and overload resistance capacity. The design, fabrication, and test of the sensor are presented in this paper. By analyzing the stress distribution of sensitive elements using the finite element method, a novel structure incorporating sensitive beams with a traditional bossed diaphragm is built up. The proposed structure proved to be advantageous in terms of high sensitivity and high overload resistance compared with the conventional bossed diaphragm and flat diaphragm structures. Curve fittings of surface stress and deflection based on ANSYS simulation results are performed to establish the sensor equations. Fabricated on an n-type single crystal silicon wafer, the sensor chips are wire-bonded to a printed circuit board (PCB) and packaged for experiments. The static and dynamic characteristics are tested and discussed. Experimental results show that the sensor has a sensitivity as high as 17.339 μV/V/Pa in the range of 500 Pa at room temperature, and a high overload resistance of 200 times overpressure. Due to the excellent performance, the sensor can be applied in measuring micro-pressure lower than 500 Pa.

  20. Design and simulation of adaptive optics controller based on mixed sensitivity H∞ control

    NASA Astrophysics Data System (ADS)

    Song, Dingan; Li, Xinyang; Peng, Zhenming

    2016-10-01

    Optical systems such as telescopes are very complex, and their models usually involve uncertainty. To deal with the uncertainty of an adaptive optics system and improve its robust stability, mixed sensitivity H-infinity control was introduced to design the system controller. To verify its validity, the wavefront aberration correction capability, as well as the robust stability, was compared between the mixed sensitivity H-infinity controller and the classic integral controller. Computer simulation results demonstrate that the system with the mixed sensitivity H-infinity controller, while unable to guarantee better correction performance, has greater robust stability than the one with the classic integral controller. That is to say, greater robust stability is achieved at the expense of correction capability in the system with the H-infinity controller. Moreover, the greater the uncertainty, the greater the benefit the mixed sensitivity H-infinity controller provides. This demonstrates the efficiency of the mixed sensitivity H-infinity controller in dealing with the uncertainty of an adaptive optics system.

  1. Design Rules for High-Efficiency Quantum-Dot-Sensitized Solar Cells: A Multilayer Approach.

    PubMed

    Shalom, Menny; Buhbut, Sophia; Tirosh, Shay; Zaban, Arie

    2012-09-06

    The effect of multilayer sensitization in quantum-dot (QD)-sensitized solar cells is reported. A series of electrodes, consisting of multilayer CdSe QDs were assembled on a compact TiO2 layer. Photocurrent measurements along with internal quantum efficiency calculation reveal similar electron collection efficiency up to a 100 nm thickness of the QD layers. Moreover, the optical density and the internal quantum efficiency measurements reveal that the desired surface area of the TiO2 electrode should be increased only by a factor of 17 compared with a compact electrode. We show that the sensitization of low-surface-area TiO2 electrode with QD layers increases the performance of the solar cell, resulting in 3.86% efficiency. These results demonstrate a conceptual difference between the QD-sensitized solar cell and the dye-based system in which dye multilayer decreases the cell performance. The utilization of multilayer QDs opens new opportunities for a significant improvement of quantum-dot-sensitized solar cells via innovative cell design.

  2. Sensitivity analysis of an accident prediction model by the fractional factorial method.

    PubMed

    Akgüngör, Ali P; Yildiz, Osman

    2007-01-01

    Sensitivity analysis of a model can help us determine the relative effects of model parameters on model results. In this study, the sensitivity of the accident prediction model proposed by Zegeer et al. [Zegeer, C.V., Reinfurt, D., Hummer, J., Herf, L., Hunter, W., 1987. Safety Effect of Cross-section Design for Two-lane Roads, vols. 1-2. Report FHWA-RD-87/008 and 009, Federal Highway Administration, Department of Transportation, USA] to its parameters was investigated by the fractional factorial analysis method. The reason for selecting this particular model is that it incorporates both traffic and road geometry parameters, besides terrain characteristics. The evaluation of the sensitivity analysis indicated that average daily traffic (ADT), lane width (W), width of paved shoulder (PA), median (H) and their interactions (i.e., ADT-W, ADT-PA and ADT-H) have significant effects on the number of accidents. Based on the absolute value of parameter effects at the three- and two-standard-deviation thresholds, ADT was found to be of primary importance, while the remaining identified parameters were of secondary importance. This agrees with the fact that ADT is among the most effective parameters in determining road geometry and is therefore directly related to the number of accidents. Overall, the fractional factorial method was found to be an efficient tool for examining the relative importance of the selected accident prediction model parameters.
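
    The fractional factorial idea can be sketched with a half-fraction 2^(4-1) design: three factors take a full two-level design and the fourth is aliased through the defining relation I = ABCD, halving the number of runs needed for main-effect estimates. The factors and toy response below are illustrative, not the Zegeer et al. accident model.

```python
import numpy as np
from itertools import product

def half_fraction_effects(model):
    """Main-effect estimates from a 2^(4-1) fractional factorial design.
    Factors A, B, C run a full 2^3 design; D is aliased via D = A*B*C."""
    runs, ys = [], []
    for a, b, c in product((-1, 1), repeat=3):
        d = a * b * c                        # defining relation I = ABCD
        x = np.array([a, b, c, d])
        runs.append(x)
        ys.append(model(x))
    X = np.array(runs, float)
    y = np.array(ys, float)
    # effect of factor j = mean(y at level +1) - mean(y at level -1)
    return np.array([y[X[:, j] > 0].mean() - y[X[:, j] < 0].mean()
                     for j in range(4)])

# toy response: strong A effect, moderate B, weak C, negligible D
f = lambda x: 10 * x[0] + 3 * x[1] + 0.5 * x[2] + 0.01 * x[3]
eff = half_fraction_effects(f)
```

    Ranking the absolute effects against a significance threshold is exactly the screening step that singled out ADT as the primary parameter in the study.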

  3. CFD-based surrogate modeling of liquid rocket engine components via design space refinement and sensitivity assessment

    NASA Astrophysics Data System (ADS)

    Mack, Yolanda

    Computational fluid dynamics (CFD) can be used to improve the design and optimization of rocket engine components that traditionally rely on empirical calculations and limited experimentation. CFD-based design optimization can be made computationally affordable through the use of surrogate modeling, which can then facilitate additional parameter sensitivity assessments. The present study investigates surrogate-based adaptive design space refinement (DSR) using estimates of surrogate uncertainty to probe the CFD analyses and to perform sensitivity assessments for complex fluid physics associated with liquid rocket engine components. Three studies were conducted. First, a surrogate-based preliminary design optimization was conducted to improve the efficiency of a compact radial turbine for an expander cycle rocket engine while maintaining low weight. Design space refinement was used to identify function constraints and to obtain a high-accuracy surrogate model in the region of interest. A merit function formulation for multi-objective design point selection reduced the number of design points by an order of magnitude while maintaining good surrogate accuracy among the best trade-off points. Second, bluff body-induced flow was investigated to identify the physics and surrogate modeling issues related to the flow's mixing dynamics. Multiple surrogates and DSR were instrumental in identifying designs for which the CFD model was deficient and in helping to pinpoint the nature of the deficiency. Next, a three-dimensional computational model was developed to explore the wall heat transfer of a GO2/GH2 shear coaxial single-element injector. The interactions between turbulent recirculating flow structures, chemical kinetics, and heat transfer are highlighted. Finally, a simplified computational model of multi-element injector flows was constructed to explore the sensitivity of wall heating and combustion efficiency to injector element spacing. Design space refinement

  4. Design and realization of a side-polished single-mode fiber optic high-sensitive temperature sensor

    NASA Astrophysics Data System (ADS)

    Nagaraju, B.; Varshney, R. K.; Pal, B. P.; Singh, A.; Monnom, G.; Dussardier, B.

    2008-11-01

    A highly sensitive temperature sensor based on evanescent field coupling between a side-polished fiber half-coupler (SPFHC) and a thermo-optic multimode overlay waveguide (MMOW) is designed and demonstrated. Such a structure essentially functions as an asymmetric directional coupler with a band-stop characteristic attributable to the wavelength-dependent resonant coupling between the mode of the SPFHC and one or more modes of the MMOW. A slight change in temperature leads to a significant shift in the phase resonance-coupling wavelength (λr) between the MMOW and SPFHC, which is easily measurable. The wavelength sensitivity of the device is measured to be ~5.3 nm/°C within the measurement range of 26-70°C; this sensitivity is more than 5 times higher than that of earlier reported temperature sensors of this kind. The SPFHC was fabricated by selective polishing of the cladding from one side of a bent standard telecommunication single-mode fiber, and the MMOW was formed on top of the SPFHC through spin coating. A semi-numerical rigorous normal mode analysis was employed at the design stage, including the curvature effect of the fiber laid in the half-coupler block and the resultant z-dependent evanescent coupling mechanism. An excellent agreement between theoretical and experimental results is found.

  5. Sensitive quantitative analysis of murine LINE1 DNA methylation using high resolution melt analysis.

    PubMed

    Newman, Michelle; Blyth, Benjamin J; Hussey, Damian J; Jardine, Daniel; Sykes, Pamela J; Ormsby, Rebecca J

    2012-01-01

    We present here the first high resolution melt (HRM) assay to quantitatively analyze differences in murine DNA methylation levels utilizing CpG methylation of Long Interspersed Elements-1 (LINE1 or L1). By calculating the integral difference in melt temperature between samples and a methylated control, and biasing PCR primers for unmethylated CpGs, the assay demonstrates enhanced sensitivity to detect changes in methylation in a cell line treated with low doses of 5-aza-2'-deoxycytidine (5-aza). The L1 assay was confirmed to be a good marker of changes in DNA methylation of L1 elements at multiple regions across the genome when compared with total 5-methyl-cytosine content, measured by Liquid Chromatography-Mass Spectrometry (LC-MS). The assay design was also used to detect changes in methylation at other murine repeat elements (B1 and Intracisternal-A-particle Long-terminal Repeat elements). Pyrosequencing analysis revealed that L1 methylation changes were non-uniform across the CpGs within the L1-HRM target region, demonstrating that the L1 assay can detect small changes in CpG methylation among a large pool of heterogeneously methylated DNA templates. Application of the assay to various tissues from Balb/c and CBA mice, including previously unreported peripheral blood (PB), revealed a tissue hierarchy (from hypermethylated to hypomethylated) of PB > kidney > liver > prostate > spleen. CBA mice demonstrated overall greater methylation than Balb/c mice, and male mice demonstrated higher tissue methylation compared with female mice in both strains. Changes in DNA methylation have been reported to be an early and fundamental event in the pathogenesis of many human diseases, including cancer. Mouse studies designed to identify modulators of DNA methylation, the critical doses, relevant time points and the tissues affected are limited by the low throughput nature and exorbitant cost of many DNA methylation assays. The L1 assay provides a high throughput, inexpensive

  6. DARHT: integration of shielding design and analysis with facility design

    SciTech Connect

    Boudrie, R. L.; Brown, T. H.; Gilmore, W. E.; Downing, J. N. , Jr.; Hack, Alan; McClure, D. A.; Nelson, C. A.; Wadlinger, E. Alan; Zumbro, M. V.

    2002-01-01

    The design of the interior portions of the Dual Axis Radiographic Hydrodynamic Test (DARHT) Facility incorporated shielding and controls from the beginning of the installation of the Accelerators. The purpose of the design and analysis was to demonstrate the adequacy of shielding or to determine the need for additional shielding or controls. Two classes of events were considered: (1) routine operation defined as the annual production of 10,000 2000-ns pulses of electrons at a nominal energy of 20 MeV, some of which are converted to the x-ray imaging beam consisting of four nominal 60-ns pulses over the 2000-ns time frame, and (2) accident case defined as up to 100 2000-ns pulses of electrons accidentally impinging on some metallic surface, thereby producing x rays. Several locations for both classes of events were considered inside and outside of the accelerator hall buildings. The analysis method consisted of the definition of a source term for each case studied and the definition of a model of the shielding and equipment present between the source and the dose areas. A minimal model of the fixed existing or proposed shielding and equipment structures was used for a first approximation. If the resulting dose from the first approximation was below the design goal (1 rem/yr for routine operations, 5 rem for accident cases), then no further investigations were performed. If the result of the first approximation was above our design goals, the model was refined to include existing or proposed shielding and equipment. In some cases existing shielding and equipment were adequate to meet our goals and in some cases additional shielding was added or administrative controls were imposed to protect the workers. It is expected that the radiation shielding design, exclusion area designations, and access control features, will result in low doses to personnel at the DARHT Facility.

  7. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  8. DESIGN PACKAGE 1E SYSTEM SAFETY ANALYSIS

    SciTech Connect

    M. Salem

    1995-06-23

    The purpose of this analysis is to systematically identify and evaluate hazards related to the Yucca Mountain Project Exploratory Studies Facility (ESF) Design Package 1E, Surface Facilities (for a list of design items included in the package 1E system safety analysis, see section 3). This process is an integral part of the systems engineering process, whereby safety is considered during planning, design, testing, and construction. A largely qualitative approach was used since a radiological System Safety Analysis is not required. The risk assessment in this analysis characterizes the accident scenarios associated with the Design Package 1E structures/systems/components (S/S/Cs) in terms of relative risk and includes recommendations for mitigating all identified risks. The priority for recommending and implementing mitigation control features is: (1) incorporate measures to reduce risks and hazards into the structure/system/component design, (2) add safety devices and capabilities to the designs that reduce risk, (3) provide devices that detect and warn personnel of hazardous conditions, and (4) develop procedures and conduct training to increase worker awareness of potential hazards, of methods to reduce exposure to hazards, and of the actions required to avoid accidents or correct hazardous conditions.

  9. Multidisciplinary design optimization using response surface analysis

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1992-01-01

    Aerospace conceptual vehicle design is a complex process which involves multidisciplinary studies of configuration and technology options considering many parameters at many values. NASA Langley's Vehicle Analysis Branch (VAB) has detailed computerized analysis capabilities in most of the key disciplines required by advanced vehicle design. Given a configuration, the capability exists to quickly determine its performance and lifecycle cost. The next step in vehicle design is to determine the best settings of design parameters that optimize the performance characteristics. The typical approach to design optimization is experience-based, trial-and-error variation of many parameters one at a time, where possible combinations usually number in the thousands. However, this approach can either lead to a very long and expensive design process or to a premature termination of the design process due to budget and/or schedule pressures. Furthermore, a one-variable-at-a-time approach cannot account for the interactions that occur among parts of systems and among disciplines. As a result, vehicle design may be far from optimal. Advanced multidisciplinary design optimization (MDO) methods are needed to direct the search in an efficient and intelligent manner in order to drastically reduce the number of candidate designs to be evaluated. The payoffs in terms of enhanced performance and reduced cost are significant. A literature review yields two such advanced MDO methods used in aerospace design optimization: Taguchi methods and response surface methods. Taguchi methods provide a systematic and efficient approach to design optimization for performance and cost. However, the response surface method (RSM) leads to a better, more accurate exploration of the parameter space and to estimated optimum conditions with a small expenditure on experimental data. These two methods are described.
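
    The response surface method mentioned above can be sketched as a least-squares fit of a second-order model over a small composite design, followed by solving for the stationary point. The two-factor design points and toy response below are illustrative assumptions, not VAB's analysis codes.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a second-order response surface y ~ b0 + b.x + x'Bx for two
    factors by least squares, then return the coefficients and the
    stationary point x* solving grad = 0, i.e. x* = -(2B)^{-1} b."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    b = c[1:3]                                   # linear terms
    B = np.array([[c[4], c[3] / 2],              # quadratic/interaction terms
                  [c[3] / 2, c[5]]])
    x_star = np.linalg.solve(-2 * B, b)
    return c, x_star

# central-composite-style design on two coded factors (illustrative)
pts = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0],
                [1.4, 0], [-1.4, 0], [0, 1.4], [0, -1.4]], float)
true = lambda x: 10 - (x[0] - 0.3) ** 2 - 2 * (x[1] + 0.2) ** 2
y = np.array([true(p) for p in pts])
c, x_star = fit_quadratic_rs(pts, y)
```

    With only nine runs the fitted surface locates the optimum settings, which is the "small expenditure on experimental data" advantage the abstract attributes to RSM.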

  10. High-Sensitivity Low-Noise Miniature Fluxgate Magnetometers Using a Flip Chip Conceptual Design

    PubMed Central

    Lu, Chih-Cheng; Huang, Jeff; Chiu, Po-Kai; Chiu, Shih-Liang; Jeng, Jen-Tzong

    2014-01-01

    This paper presents a novel class of miniature fluxgate magnetometers fabricated on a printed circuit board (PCB) substrate and electrically connected to each other similar to the current “flip chip” concept in semiconductor packaging. The sensor is soldered together by reversely flipping a 5 cm × 3 cm PCB substrate onto an identical one, which includes dual magnetic cores, planar pick-up coils, and 3-D excitation coils constructed from planar Cu interconnections patterned on the PCB substrates. Principles and analysis of the fluxgate sensor are introduced first, followed by FEA electromagnetic modeling and simulation of the proposed sensor. Comprehensive characterization experiments on the miniature fluxgate device exhibit favorable results in terms of sensitivity (or “responsivity” for magnetometers) and field noise spectrum. The sensor is driven and characterized by employing the improved second-harmonic detection technique that enables linear V-B correlation and responsivity verification. In addition, the doubled magnitude of responsivity measured under very low frequency (1 Hz) magnetic fields is experimentally demonstrated. As a result, the maximum responsivity of 593 V/T occurs at 50 kHz of excitation frequency with the second harmonic wave of excitation; however, the minimum magnetic field noise is found to be 0.05 nT/Hz(1/2) at 1 Hz under the same excitation. In comparison with other miniature planar fluxgates published to date, the fluxgate magnetic sensor with flip chip configuration offers advances in both device functionality and fabrication simplicity. More importantly, the novel design can be further extended to a silicon-based micro-fluxgate chip manufactured by emerging CMOS-MEMS technologies, thus enriching its potential range of applications in modern engineering and the consumer electronics market. PMID:25196107

  12. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  13. Design and characterization of alkoxy-wrapped push-pull porphyrins for dye-sensitized solar cells.

    PubMed

    Ripolles-Sanchis, Teresa; Guo, Bo-Cheng; Wu, Hui-Ping; Pan, Tsung-Yu; Lee, Hsuan-Wei; Raga, Sonia R; Fabregat-Santiago, Francisco; Bisquert, Juan; Yeh, Chen-Yu; Diau, Eric Wei-Guang

    2012-05-07

    Three alkoxy-wrapped push-pull porphyrins were designed and synthesized for dye-sensitized solar cell (DSSC) applications. The spectral, electrochemical, photovoltaic, and electrochemical impedance spectroscopy properties of these porphyrin sensitizers were investigated in detail to provide evidence supporting the molecular design.

  14. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient

  15. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    SciTech Connect

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; Tong, Charles; Sun, Yunwei; Chu, Wei; Ye, Aizhong; Miao, Chiyuan; Di, Zhenhua

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum samples needed are 1050 to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient
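The Morris One-At-a-Time (MOAT) screening evaluated in the two records above can be sketched in a few lines. This is an illustrative pure-Python implementation with a toy model, not the PSUADE code; MOAT's cost is r·(d+1) model runs per r trajectories over d parameters, the same accounting behind the 280-sample figure quoted in the abstract.

```python
import random

def morris_screening(model, n_params, n_traj=70, delta=0.5, levels=4, seed=1):
    """Morris One-At-a-Time (MOAT) screening, a minimal sketch.

    Each trajectory starts from a random point on a coarse grid in
    [0, 1]^d and perturbs one parameter at a time by `delta`, recording
    the elementary effect. mu* (mean absolute elementary effect) ranks
    parameters by importance."""
    rng = random.Random(seed)
    grid = [i / (levels - 1) for i in range(levels)]
    effects = [[] for _ in range(n_params)]
    for _ in range(n_traj):
        # start low on the grid so x_i + delta stays inside [0, 1]
        x = [rng.choice(grid[: levels // 2]) for _ in range(n_params)]
        y0 = model(x)
        for i in rng.sample(range(n_params), n_params):  # random OAT order
            x[i] += delta
            y1 = model(x)
            effects[i].append((y1 - y0) / delta)
            y0 = y1
    return [sum(abs(e) for e in es) / len(es) for es in effects]  # mu*

# Toy model (not SAC-SMA): x0 dominates, x1 is moderate, x2 is inert.
mu_star = morris_screening(lambda x: 10 * x[0] + 2 * x[1] ** 2 + 0 * x[2], 3)
print([round(m, 2) for m in mu_star])
```

With 70 trajectories over 3 parameters the screening costs 70 × (3 + 1) = 280 model evaluations.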

  16. Pressurized thermal shock probabilistic fracture mechanics sensitivity analysis for Yankee Rowe reactor pressure vessel

    SciTech Connect

    Dickson, T.L.; Cheverton, R.D.; Bryson, J.W.; Bass, B.R.; Shum, D.K.M.; Keeney, J.A.

    1993-08-01

    The Nuclear Regulatory Commission (NRC) requested Oak Ridge National Laboratory (ORNL) to perform a pressurized-thermal-shock (PTS) probabilistic fracture mechanics (PFM) sensitivity analysis for the Yankee Rowe reactor pressure vessel, for the fluences corresponding to the end of operating cycle 22, using a specific small-break loss-of-coolant transient as the loading condition. Regions of the vessel with distinguishing features were to be treated individually -- upper axial weld, lower axial weld, circumferential weld, upper plate spot welds, upper plate regions between the spot welds, lower plate spot welds, and the lower plate regions between the spot welds. The fracture analysis methods used in the analysis of through-clad surface flaws were those contained in the established OCA-P computer code, which was developed during the Integrated Pressurized Thermal Shock (IPTS) Program. The NRC request specified that the OCA-P code be enhanced for this study to also calculate the conditional probabilities of failure for subclad flaws and embedded flaws. The results of this sensitivity analysis provide the NRC with (1) data that could be used to assess the relative influence of a number of key input parameters in the Yankee Rowe PTS analysis and (2) data that can be used for readily determining the probability of vessel failure once a more accurate indication of vessel embrittlement becomes available. This report is designated as HSST report No. 117.

  17. Sensitivity analysis of a two-dimensional probabilistic risk assessment model using analysis of variance.

    PubMed

    Mokhtari, Amirhossein; Frey, H Christopher

    2005-12-01

    This article demonstrates application of sensitivity analysis to risk assessment models with two-dimensional probabilistic frameworks that distinguish between variability and uncertainty. A microbial food safety process risk (MFSPR) model is used as a test bed. The process of identifying key controllable inputs and key sources of uncertainty using sensitivity analysis is challenged by typical characteristics of MFSPR models such as nonlinearity, thresholds, interactions, and categorical inputs. Among many available sensitivity analysis methods, analysis of variance (ANOVA) is evaluated in comparison to commonly used methods based on correlation coefficients. In a two-dimensional risk model, the identification of key controllable inputs that can be prioritized with respect to risk management is confounded by uncertainty. However, as shown here, ANOVA provided robust insights regarding controllable inputs most likely to lead to effective risk reduction despite uncertainty. ANOVA appropriately selected the top six important inputs, while correlation-based methods provided misleading insights. Bootstrap simulation is used to quantify uncertainty in ranks of inputs due to sampling error. For the selected sample size, differences in F values of 60% or more were associated with clear differences in rank order between inputs. Sensitivity analysis results identified inputs related to the storage of ground beef servings at home as the most important. Risk management recommendations are suggested in the form of a consumer advisory for better handling and storage practices.
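The ANOVA-based ranking evaluated above reduces, in its simplest form, to computing a one-way F statistic of the model output against each binned input. A minimal sketch on synthetic data; the toy model, bin count, and sample size are illustrative, not those of the MFSPR study.

```python
import random

def one_way_F(x, y, bins=5):
    """One-way ANOVA F statistic of output y against input x, with x
    discretized into equal-width bins. A larger F means varying x explains
    more of the variance in y, so inputs can be ranked by F."""
    lo, hi = min(x), max(x)
    groups = [[] for _ in range(bins)]
    for xi, yi in zip(x, y):
        idx = min(int((xi - lo) / (hi - lo) * bins), bins - 1)
        groups[idx].append(yi)
    groups = [g for g in groups if g]
    n, k = len(y), len(groups)
    grand = sum(y) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((yi - m) ** 2 for yi in g)
                    for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy model: x0 acts through a threshold (the kind of nonlinearity that
# defeats correlation-based rankings), x1 is moderately linear, x2 is inert.
rng = random.Random(0)
xs = [[rng.random() for _ in range(3)] for _ in range(2000)]
ys = [5 * (x[0] > 0.5) + 2 * x[1] + 0.1 * rng.random() for x in xs]
f_vals = [one_way_F([x[i] for x in xs], ys) for i in range(3)]
```

Ranking by `f_vals` recovers the intended order x0 > x1 > x2 even though the x0 effect is a threshold rather than a linear trend.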

  18. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    SciTech Connect

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  19. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization, reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
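The "reference direct perturbation sensitivity coefficients" both records validate against are conceptually simple: rerun the model with each parameter nudged up and down and form a central difference. A sketch with a toy response; the ratio model and step size are illustrative, not from SCALE.

```python
def direct_perturbation_sensitivity(response, params, rel_step=0.01):
    """Relative sensitivity coefficients S_i = (p_i / R) * dR/dp_i,
    estimated by rerunning the model with each parameter perturbed up
    and down by a relative step (central difference). Brute-force
    coefficients like these serve as the reference against which
    adjoint/generalized-perturbation-theory results are validated."""
    base = response(params)
    coeffs = []
    for i, p in enumerate(params):
        up = list(params)
        up[i] = p * (1 + rel_step)
        down = list(params)
        down[i] = p * (1 - rel_step)
        dR_dp = (response(up) - response(down)) / (2 * p * rel_step)
        coeffs.append(p / base * dR_dp)
    return coeffs

# Toy response: R = a / (a + b), loosely a reaction-rate-ratio shape.
# Analytically S_a = +0.75 and S_b = -0.75 at (a, b) = (1, 3).
coeffs = direct_perturbation_sensitivity(lambda p: p[0] / (p[0] + p[1]),
                                         [1.0, 3.0])
```

The cost is two model runs per parameter, which is exactly why adjoint methods such as CLUTCH/GEAR-MC are preferred when many sensitivities are needed.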

  20. A comprehensive sensitivity analysis of central-loop MRS data

    NASA Astrophysics Data System (ADS)

    Behroozmand, Ahmad; Auken, Esben; Dalgaard, Esben; Rejkjaer, Simon

    2014-05-01

    In this study we investigate the sensitivity of separated-loop magnetic resonance sounding (MRS) data and, in light of deploying an MRS receiver system separate from the transmitter system, compare the parameter determination of central-loop with conventional coincident-loop MRS data. MRS, also called surface NMR, has emerged as a promising surface-based geophysical technique for groundwater investigations, as it provides a direct estimate of the water content and, through empirical relations, is linked to hydraulic properties of the subsurface such as hydraulic conductivity. The method works on the physical principle of NMR, during which a large volume of protons of the water molecules in the subsurface is excited at the specific Larmor frequency. The measurement consists of a large wire loop deployed on the surface which typically acts as both transmitter and receiver, the so-called coincident-loop configuration. An alternating current is passed through the loop, and the superposition of signals from all precessing protons within the investigated volume is measured in a receiver loop as a decaying NMR signal called Free Induction Decay (FID). To provide depth information, the FID signal is measured for a series of pulse moments (Q; product of current amplitude and transmitting pulse length) during which different earth volumes are excited. One of the main and inevitable limitations of MRS measurements is a relatively long measurement dead time, i.e. a non-zero time between the end of the energizing pulse and the beginning of the measurement, which makes it difficult, and in some places impossible, to record MRS signal from fine-grained geologic units and limits the application of advanced pulse sequences. Therefore, one of the current research activities is the idea of building separate receiver units, which will diminish the dead time. In light of that, the aims of this study are twofold: 1) Using a forward modeling approach, the

  1. NASA Multidisciplinary Design and Analysis Fellowship Program

    NASA Technical Reports Server (NTRS)

    Schrage, D. P.; Craig, J. I.; Mavris, D. N.; Hale, M. A.; DeLaurentis, D.

    1999-01-01

    This report summarizes the results of a multi-year training grant for the development and implementation of a Multidisciplinary Design and Analysis (MDA) Fellowship Program at Georgia Tech. The Program funded the creation of graduate MS and PhD degree programs in aerospace systems design, analysis and integration. It also provided prestigious Fellowships with associated Industry Internships for outstanding engineering students. The graduate program has become the foundation for a vigorous and productive research effort and has produced: 20 MS degrees, 7 Ph.D. degrees, and has contributed to 9 ongoing Ph.D. students. The results of the research are documented in 32 publications (23 of which are included on a companion CDROM) and 4 annual student design reports (included on a companion CDROM). The legacy of this critical funding is the Center for Aerospace Systems Analysis at Georgia Tech which is continuing the graduate program, the research, and the industry internships established by this grant.

  2. Simultaneous analysis and design. [in structural engineering

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1985-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.

  3. Sensitivity analysis on parameters and processes affecting vapor intrusion risk.

    PubMed

    Picone, Sara; Valstar, Johan; van Gaans, Pauline; Grotenhuis, Tim; Rijnaarts, Huub

    2012-05-01

    A one-dimensional numerical model was developed and used to identify the key processes controlling vapor intrusion risks by means of a sensitivity analysis. The model simulates the fate of a dissolved volatile organic compound present below the ventilated crawl space of a house. In contrast to the vast majority of previous studies, this model accounts for vertical variation of soil water saturation and includes aerobic biodegradation. The attenuation factor (ratio between concentration in the crawl space and source concentration) and the characteristic time to approach maximum concentrations were calculated and compared for a variety of scenarios. These concepts allow an understanding of controlling mechanisms and aid in the identification of critical parameters to be collected for field situations. The relative distance of the source to the nearest gas-filled pores of the unsaturated zone is the most critical parameter because diffusive contaminant transport is significantly slower in water-filled pores than in gas-filled pores. Therefore, attenuation factors decrease and characteristic times increase with increasing relative distance of the contaminant dissolved source to the nearest gas diffusion front. Aerobic biodegradation may decrease the attenuation factor by up to three orders of magnitude. Moreover, the occurrence of water table oscillations is of importance. Dynamic processes leading to a retreating water table increase the attenuation factor by two orders of magnitude because of the enhanced gas phase diffusion.
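A back-of-the-envelope check of the key finding above: the characteristic diffusion time t ≈ L²/D across the same distance differs by roughly four orders of magnitude between gas- and water-filled pores, which is why the distance from the source to the nearest gas-filled pores dominates. The diffusion coefficients below are order-of-magnitude free-phase values, not parameters from the model above.

```python
def diffusion_time_days(length_m, diff_coeff_m2_s):
    """Characteristic diffusion time t ~ L^2 / D, converted to days."""
    return length_m ** 2 / diff_coeff_m2_s / 86_400  # 86 400 s per day

# Illustrative free-phase diffusion coefficients for a volatile compound:
# ~1e-5 m^2/s in gas-filled pores vs ~1e-9 m^2/s in water-filled pores.
D_GAS, D_WATER = 1e-5, 1e-9
t_gas = diffusion_time_days(0.5, D_GAS)      # crossing 0.5 m of gas-filled pores
t_water = diffusion_time_days(0.5, D_WATER)  # crossing 0.5 m of water-filled pores
```

With these numbers the gas-phase crossing takes a fraction of a day while the water-phase crossing takes on the order of a decade, mirroring the paper's conclusion about the position of the gas diffusion front.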

  4. Sensitivity analysis of near-infrared functional lymphatic imaging

    PubMed Central

    Weiler, Michael; Kassis, Timothy

    2012-01-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time. PMID:22734775

  5. Nonparametric Bounds and Sensitivity Analysis of Treatment Effects

    PubMed Central

    Richardson, Amy; Hudgens, Michael G.; Gilbert, Peter B.; Fine, Jason P.

    2015-01-01

    This paper considers conducting inference about the effect of a treatment (or exposure) on an outcome of interest. In the ideal setting where treatment is assigned randomly, under certain assumptions the treatment effect is identifiable from the observable data and inference is straightforward. However, in other settings such as observational studies or randomized trials with noncompliance, the treatment effect is no longer identifiable without relying on untestable assumptions. Nonetheless, the observable data often do provide some information about the effect of treatment, that is, the parameter of interest is partially identifiable. Two approaches are often employed in this setting: (i) bounds are derived for the treatment effect under minimal assumptions, or (ii) additional untestable assumptions are invoked that render the treatment effect identifiable and then sensitivity analysis is conducted to assess how inference about the treatment effect changes as the untestable assumptions are varied. Approaches (i) and (ii) are considered in various settings, including assessing principal strata effects, direct and indirect effects and effects of time-varying exposures. Methods for drawing formal inference about partially identified parameters are also discussed. PMID:25663743
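Approach (i) above, bounds under minimal assumptions, can be made concrete with Manski-style worst-case bounds for a bounded outcome. This is a sketch, not the paper's estimators; the binary outcomes and treatment/control split below are illustrative.

```python
def manski_bounds(y_treated, y_control, y_lo=0.0, y_hi=1.0):
    """Worst-case nonparametric bounds on the average treatment effect.

    Assuming only that the outcome lies in [y_lo, y_hi], each unobserved
    counterfactual mean is replaced by its worst/best possible value,
    giving sharp bounds without any identifying assumptions."""
    n1, n0 = len(y_treated), len(y_control)
    p = n1 / (n1 + n0)           # share of units observed under treatment
    m1 = sum(y_treated) / n1     # E[Y | T = 1]
    m0 = sum(y_control) / n0     # E[Y | T = 0]
    lower = (p * m1 + (1 - p) * y_lo) - ((1 - p) * m0 + p * y_hi)
    upper = (p * m1 + (1 - p) * y_hi) - ((1 - p) * m0 + p * y_lo)
    return lower, upper

# Illustrative binary outcomes for 4 treated and 4 control units.
lo, up = manski_bounds([1, 1, 0, 1], [0, 1, 0, 0])
```

A well-known feature visible here: the bound width always equals the outcome range (here 1), so these bounds never exclude a zero effect on their own; tightening them is precisely what the additional untestable assumptions of approach (ii) buy.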

  6. Experimental sensitivity analysis of oxygen transfer in the capillary fringe.

    PubMed

    Haberer, Christina M; Cirpka, Olaf A; Rolle, Massimo; Grathwohl, Peter

    2014-01-01

    Oxygen transfer in the capillary fringe (CF) is of primary importance for a wide variety of biogeochemical processes occurring in shallow groundwater systems. In case of a fluctuating groundwater table two distinct mechanisms of oxygen transfer within the capillary zone can be identified: vertical predominantly diffusive mass flux of oxygen, and mass transfer between entrapped gas and groundwater. In this study, we perform a systematic experimental sensitivity analysis in order to assess the influence of different parameters on oxygen transfer from entrapped air within the CF to underlying anoxic groundwater. We carry out quasi two-dimensional flow-through experiments focusing on the transient phase following imbibition to investigate the influence of the horizontal flow velocity, the average grain diameter of the porous medium, as well as the magnitude and the speed of the water table rise. We present a numerical flow and transport model that quantitatively represents the main mechanisms governing oxygen transfer. Assuming local equilibrium between the aqueous and the gaseous phase, the partitioning process from entrapped air can be satisfactorily simulated. The different experiments are monitored by measuring vertical oxygen concentration profiles at high spatial resolution with a noninvasive optode technique as well as by determining oxygen fluxes at the outlet of the flow-through chamber. The results show that all parameters investigated have a significant effect and determine different amounts of oxygen transferred to the oxygen-depleted groundwater. Particularly relevant are the magnitude of the water table rise and the grain size of the porous medium.

  7. Sensitivity analysis of near-infrared functional lymphatic imaging

    NASA Astrophysics Data System (ADS)

    Weiler, Michael; Kassis, Timothy; Dixon, J. Brandon

    2012-06-01

    Near-infrared imaging of lymphatic drainage of injected indocyanine green (ICG) has emerged as a new technology for clinical imaging of lymphatic architecture and quantification of vessel function, yet the imaging capabilities of this approach have yet to be quantitatively characterized. We seek to quantify its capabilities as a diagnostic tool for lymphatic disease. Imaging is performed in a tissue phantom for sensitivity analysis and in hairless rats for in vivo testing. To demonstrate the efficacy of this imaging approach to quantifying immediate functional changes in lymphatics, we investigate the effects of a topically applied nitric oxide (NO) donor glyceryl trinitrate ointment. Premixing ICG with albumin induces greater fluorescence intensity, with the ideal concentration being 150 μg/mL ICG and 60 g/L albumin. ICG fluorescence can be detected at a concentration of 150 μg/mL as deep as 6 mm with our system, but spatial resolution deteriorates below 3 mm, skewing measurements of vessel geometry. NO treatment slows lymphatic transport, which is reflected in increased transport time, reduced packet frequency, reduced packet velocity, and reduced effective contraction length. NIR imaging may be an alternative to invasive procedures measuring lymphatic function in vivo in real time.

  8. Sensitivity analysis and optimization of the nuclear fuel cycle

    SciTech Connect

    Passerini, S.; Kazimi, M. S.; Shwageraus, E.

    2012-07-01

    A sensitivity study has been conducted to assess the robustness of the conclusions presented in the MIT Fuel Cycle Study. The Once Through Cycle (OTC) is considered as the base-line case, while advanced technologies with fuel recycling characterize the alternative fuel cycles. The options include limited recycling in LWRs and full recycling in fast reactors and in high conversion LWRs. Fast reactor technologies studied include both oxide and metal fueled reactors. The analysis allowed optimization of the fast reactor conversion ratio with respect to desired fuel cycle performance characteristics. The following parameters were found to significantly affect the performance of recycling technologies and their penetration over time: capacity factors of the fuel cycle facilities, spent fuel cooling time, thermal reprocessing introduction date, and in-core and out-of-core TRU inventory requirements for recycling technologies. An optimization scheme for the nuclear fuel cycle is proposed. Optimization criteria and metrics of interest for different stakeholders in the fuel cycle (economics, waste management, environmental impact, etc.) are utilized for two different optimization techniques (linear and stochastic). Preliminary results covering single- and multi-variable and single- and multi-objective optimization demonstrate the viability of the optimization scheme. (authors)

  9. Comparison of the sensitivity of mass spectrometry atmospheric pressure ionization techniques in the analysis of porphyrinoids.

    PubMed

    Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold

    2013-10-01

    The chemistry of porphyrinoids depends greatly on data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques, electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization, was tested for several porphyrinoids and their metallocomplexes. The electrospray ionization method was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly for corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply substantial information on the methodology of porphyrinoid analysis by mass spectrometry, which can be useful in designing future MS or liquid chromatography-MS experiments.

  10. Microgravity isolation system design: A modern control analysis framework

    NASA Technical Reports Server (NTRS)