Sample records for present sensitivity analysis

  1. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings focus primarily on the sensitivity analysis of structural response. The first session, entitled "General and Multidisciplinary Sensitivity," covered areas such as physics, chemistry, controls, and aerodynamics, while the remaining four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  2. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    DOT National Transportation Integrated Search

    2012-01-01

    OVERVIEW OF PRESENTATION: Evaluation Parameters; EPA's Sensitivity Analysis; Comparison to Baseline Case; MOVES Sensitivity Run Specification; MOVES Sensitivity Input Parameters; Results; Uses of Study

  3. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature on sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying the relative influence of parameters, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is also presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
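    To make the idea of parameter sensitivity concrete, here is a minimal sketch (not from the report; the logistic model, parameter values, and helper names are illustrative) of normalized parameter sensitivities estimated by central finite differences on a simple population model:

```python
def population(r, K, x0=10.0, t=5.0, dt=0.01):
    """Integrate the logistic model dx/dt = r*x*(1 - x/K) with forward Euler."""
    x = x0
    for _ in range(int(t / dt)):
        x += dt * r * x * (1.0 - x / K)
    return x

def normalized_sensitivity(f, params, i, h=1e-4):
    """S_i = (p_i / f) * df/dp_i, estimated by central differences."""
    up, dn = list(params), list(params)
    up[i] += h
    dn[i] -= h
    dfdp = (f(*up) - f(*dn)) / (2 * h)
    return params[i] * dfdp / f(*params)

S_r = normalized_sensitivity(population, (0.8, 100.0), 0)  # growth rate
S_K = normalized_sensitivity(population, (0.8, 100.0), 1)  # carrying capacity
print(S_r, S_K)
```

    Normalizing by p_i/f makes sensitivities of differently scaled parameters directly comparable, which is the basis of the parameter-ranking use described above.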

  4. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  5. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis, using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that the calculations can be carried out outside existing finite element codes, using postprocessing data only; that is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through the analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the numerical uncertainty associated with selecting a finite difference perturbation.
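    As a generic illustration of the adjoint variable approach named in the title (an assumed toy setup, not the IFAD implementation; the matrix `K_of` and design variable `b` are hypothetical), a single adjoint solve yields the design derivative of a performance functional, cross-checked here against a finite difference:

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)

def K_of(b):
    """Hypothetical symmetric stiffness matrix depending linearly on design b."""
    K0 = (np.diag(np.full(n, 4.0))
          + np.diag(np.full(n - 1, -1.0), 1)
          + np.diag(np.full(n - 1, -1.0), -1))
    return K0 + b * np.eye(n)

f = rng.normal(size=n)     # load vector (design-independent here)
c = rng.normal(size=n)     # defines the performance functional psi = c . u
b = 0.3

u = np.linalg.solve(K_of(b), f)       # one structural analysis
lam = np.linalg.solve(K_of(b).T, c)   # one adjoint analysis
dK_db = np.eye(n)                     # derivative of K with respect to b
dpsi_db_adjoint = lam @ (-dK_db @ u)  # df/db = 0 for this load

# cross-check against a central finite difference on psi(b)
h = 1e-6
psi = lambda bb: c @ np.linalg.solve(K_of(bb), f)
dpsi_db_fd = (psi(b + h) - psi(b - h)) / (2 * h)
print(dpsi_db_adjoint, dpsi_db_fd)
```

    The adjoint route needs only the solution vectors `u` and `lam`, which is why such sensitivities can be computed from postprocessing data outside the analysis code.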

  6. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis be performed. A reduced sensitivity analysis formulation is then presented that allows the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and the forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  7. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
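    The first-order result described above is essentially the envelope theorem: at an optimum, the total derivative of the optimum objective with respect to a problem parameter reduces to the partial derivative of the objective, so no second derivatives are needed. A toy numerical check (the quadratic objective is invented for illustration):

```python
def f(x, p):
    """Invented objective: minimize over x for a given parameter p."""
    return (x - 1.0) ** 2 + p * x ** 2

def x_star(p):
    """Analytic minimizer of f(., p): solves 2(x - 1) + 2 p x = 0."""
    return 1.0 / (1.0 + p)

p, h = 0.7, 1e-6
# total derivative of the optimum value F*(p) = f(x*(p), p)
dF_total = (f(x_star(p + h), p + h) - f(x_star(p - h), p - h)) / (2 * h)
# partial derivative of f w.r.t. p with x frozen at the optimum
dF_partial = (f(x_star(p), p + h) - f(x_star(p), p - h)) / (2 * h)
print(dF_total, dF_partial)   # the two agree at the optimum
```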

  8. Analysis of sensitivity and uncertainty in an individual-based model of a threatened wildlife species

    Treesearch

    Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker

    2015-01-01

    Sensitivity analyses—determinations of how predictor variables affect response variables—of individual-based models (IBMs) are few but important to the interpretation of model output. We present a sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...

  9. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions is presented. Discussions of previous work, and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
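    The generalized sensitivity scheme sketched above can be illustrated with a toy Monte Carlo example (the model, behaviour criterion, and parameter names are all hypothetical): parameters are sampled, runs are classified by a problem-defining behaviour, and parameters are ranked by the separation between the behavioural and non-behavioural distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000
a = rng.uniform(0.0, 1.0, N)    # hypothetical model parameters
b = rng.uniform(0.0, 1.0, N)
y = 2.0 * a + 0.1 * b           # hypothetical model response
behavioural = y > 1.0           # problem-defining behaviour criterion

def ks_distance(s1, s2):
    """Maximum vertical distance between two empirical CDFs."""
    grid = np.sort(np.concatenate([s1, s2]))
    cdf1 = np.searchsorted(np.sort(s1), grid, side="right") / len(s1)
    cdf2 = np.searchsorted(np.sort(s2), grid, side="right") / len(s2)
    return float(np.max(np.abs(cdf1 - cdf2)))

d_a = ks_distance(a[behavioural], a[~behavioural])
d_b = ks_distance(b[behavioural], b[~behavioural])
print(d_a, d_b)   # the larger distance marks the more influential parameter
```

    Because the classification needs only a yes/no behaviour judgement, the scheme suits exactly the poorly-defined systems discussed above.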

  10. Analysis of Sensitivity and Uncertainty in an Individual-Based Model of a Threatened Wildlife Species

    EPA Science Inventory

    We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...

  11. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  12. Sensitivity Analysis for Some Water Pollution Problems

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs a response function and the variables with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases observations have errors, and it is important to estimate their impact; sensitivity analysis therefore has to be carried out on the OS, and in that sense it is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such a sensitivity analysis, demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration, discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider identification of unknown parameters, and identification of sources of pollution and sensitivity with respect to those sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem, and the presentation includes a comparison of the results from these two methods.

  13. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
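    The step-size issue listed first is easy to demonstrate: truncation error shrinks with the step while round-off error grows, so an intermediate step size is best. A small sketch (the test function and step values are chosen purely for illustration):

```python
import numpy as np

x0 = 1.0
exact = np.exp(x0)               # d/dx exp(x) at x0
errs = {}
for h in (1e-1, 1e-4, 1e-7, 1e-10, 1e-13):
    fwd = (np.exp(x0 + h) - np.exp(x0)) / h            # first-order forward
    ctr = (np.exp(x0 + h) - np.exp(x0 - h)) / (2 * h)  # second-order central
    errs[h] = (abs(fwd - exact), abs(ctr - exact))
    print(f"h={h:.0e}  forward error={errs[h][0]:.2e}  central error={errs[h][1]:.2e}")
```

    For structural responses obtained from expensive or iterative solves, the "noise floor" is set by solver tolerance rather than machine precision, which is why step-size selection deserves the special consideration noted above.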

  14. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
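    The complex-variable approach mentioned above is, in its simplest form, the complex-step derivative: f'(x) ≈ Im f(x + ih)/h, which involves no subtractive cancellation and therefore tolerates an extremely small step. A minimal sketch (the test function is the classic textbook illustration, not a DYMORE quantity):

```python
import numpy as np

def f(x):
    """Classic complex-step test function standing in for a real response."""
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x = 1.5
d_complex = np.imag(f(x + 1e-30j)) / 1e-30   # complex step: no cancellation
d_central = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
print(d_complex, d_central)
```

    A step of 1e-30 would be hopeless for finite differences but is harmless here, which is what makes the method attractive for verifying adjoint implementations as described above.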

  15. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  16. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations yields an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  17. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
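    For context, the variance-based (Sobol) indices that VARS subsumes can be estimated with a simple pick-freeze Monte Carlo scheme. A sketch on an invented linear test function with known first-order indices (S1 = 0.2, S2 = 0.8):

```python
import numpy as np

def model(X):
    """Invented additive test function with known indices S1 = 0.2, S2 = 0.8."""
    return X[:, 0] + 2.0 * X[:, 1]

rng = np.random.default_rng(42)
N = 200_000
A = rng.uniform(size=(N, 2))
B = rng.uniform(size=(N, 2))

fA, fB = model(A), model(B)
variance = fA.var()
S = []
for i in range(2):
    Bi = B.copy()
    Bi[:, i] = A[:, i]                  # "freeze" factor i at A's values
    S.append(float(np.mean(fA * (model(Bi) - fB)) / variance))
print(S)
```

    The N(d + 2)-run cost of such estimators for d factors is precisely the computational burden that motivates the more efficient STAR-VARS strategy described above.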

  18. Eigenvalue and eigenvector sensitivity and approximate analysis for repeated eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Hou, Gene J. W.; Kenny, Sean P.

    1991-01-01

    A set of computationally efficient equations for eigenvalue and eigenvector sensitivity analysis is derived, and a method for approximate eigenvalue and eigenvector analysis in the presence of repeated eigenvalues is presented. The method developed for approximate analysis involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations of changes in both the eigenvalues and eigenvectors associated with the repeated eigenvalue problem. Examples are given to demonstrate the application of such equations for sensitivity and approximate analysis.
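    For the distinct-eigenvalue case that the repeated-eigenvalue method above extends, the classical first-order sensitivity is dλ/dp = φᵀ(∂K/∂p)φ for a mass-normalized mode φ (taking M = I here). A small sketch, using an invented parameterized stiffness matrix and a finite-difference check:

```python
import numpy as np

def K_of(p):
    """Invented symmetric stiffness matrix depending on a design parameter p."""
    return np.array([[4.0 + p, -1.0,  0.0],
                     [-1.0,    3.0, -1.0],
                     [ 0.0,   -1.0,  2.0 + 2.0 * p]])

p = 0.5
w, V = np.linalg.eigh(K_of(p))
lam, phi = w[0], V[:, 0]           # lowest (distinct) eigenvalue and its mode

dK_dp = np.array([[1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 2.0]])
dlam_analytic = phi @ dK_dp @ phi  # phi is unit-norm, mass matrix M = I

h = 1e-6
dlam_fd = (np.linalg.eigh(K_of(p + h))[0][0]
           - np.linalg.eigh(K_of(p - h))[0][0]) / (2 * h)
print(dlam_analytic, dlam_fd)
```

    This formula breaks down when eigenvalues coalesce, since the individual modes are then not unique, which is exactly the difficulty the reparameterization above addresses.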

  19. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  20. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper focuses on the proposition of a portfolio selection problem that considers an investor's subjectivity, and on sensitivity analysis for changes of that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well-defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures of portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.

  1. A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method

    NASA Astrophysics Data System (ADS)

    Chen, Leilei; Zheng, Changjun; Chen, Haibo

    2013-09-01

    This paper presents a wideband fast multipole boundary element method (FMBEM) for two-dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM), formed by combining the original FMM and the diagonal-form FMM, is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem that arises when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis is obtained by observing the performance of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.

  2. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First, we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges that fix insensitive parameters while minimally affecting uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity to spatial variability in the models. A large oil/gas reservoir case is used as an illustration of these ideas.

  3. Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał

    2015-03-01

    Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are performed in a non-contact way that is absolutely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as dermatology, oncology and many more. Despite great progress in OCT measurements, a vast number of issues, such as tissue recognition and imaging contrast enhancement, have not been solved yet. Here we present the polarization sensitive spectroscopic OCT system (PS-SOCT), which combines polarization sensitive analysis with time-frequency analysis. Unlike standard polarization sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g. changes in the tested object's birefringence over the light spectrum. This solution overcomes the limits of the polarization sensitive analysis applied in standard PS-OCT. Based on spectral data obtained from PS-SOCT, the exact value of birefringence can be calculated even for objects that exhibit higher orders of retardation. In this contribution the benefits of combining time-frequency and polarization sensitive analysis are described, and the PS-SOCT system features, as well as OCT measurement examples, are presented.

  4. Revisiting inconsistency in large pharmacogenomic studies

    PubMed Central

    Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Rene, Quevedo; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin

    2017-01-01

    In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. 
This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933

  5. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and on the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor in the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests, we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. The sensitivity of each variable involved in the calibration was computed by the Sobol method, which is based on analysis of the variance breakdown, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
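    Of the two sampling techniques compared above, Latin hypercube sampling stratifies each input dimension so that every equal-probability stratum is sampled exactly once. A minimal sketch of the construction (the helper name is ours, not from the paper):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0, 1)^d: one point per stratum in each dimension,
    with a uniform jitter inside the stratum and an independent shuffle."""
    strata = rng.permuted(np.tile(np.arange(n), (d, 1)), axis=1).T
    return (strata + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(7)
X = latin_hypercube(100, 2, rng)
# every one of the 100 strata in each dimension contains exactly one point
counts = np.apply_along_axis(
    lambda col: np.bincount((col * 100).astype(int), minlength=100), 0, X)
print(counts.min(), counts.max())
```

    Compared with plain Monte Carlo, this guarantees full marginal coverage of each calibration parameter at the same sample budget.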

  6. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

    Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility intended to replace conventional chemical assay in industrial applications.

  7. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  8. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.
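    The adjoint idea underlying such k_eff sensitivity codes can be illustrated with generic matrix perturbation theory. The sketch below is our own analogy, not the TSUNAMI-3D algorithm: the sensitivity of a dominant eigenvalue to a parameter follows from the left and right eigenvectors, and is cross-checked against central finite differences.

```python
import numpy as np

# Toy analogue of k_eff sensitivity (a generic matrix sketch, not the
# TSUNAMI algorithm): for A x = lambda x, first-order perturbation theory
# gives d(lambda)/dp = (y @ dA_dp @ x) / (y @ x), where y is the left
# eigenvector of the tracked eigenvalue.
def eigen_sensitivity(A, dA_dp):
    w, V = np.linalg.eig(A)
    i = np.argmax(w.real)               # track the dominant eigenvalue
    x = V[:, i]
    wl, U = np.linalg.eig(A.T)          # right eigenvectors of A^T = left of A
    j = np.argmin(np.abs(wl - w[i]))    # match the same eigenvalue
    y = U[:, j]
    return ((y @ dA_dp @ x) / (y @ x)).real

A = np.array([[2.0, 1.0],
              [0.5, 1.0]])
E11 = np.array([[1.0, 0.0],
                [0.0, 0.0]])            # dA/dp for the family A(p) = A + p * E11

adjoint_sens = eigen_sensitivity(A, E11)

# cross-check against central finite differences on the dominant eigenvalue
h = 1e-6
lam = lambda M: np.max(np.linalg.eigvals(M).real)
fd_sens = (lam(A + h * E11) - lam(A - h * E11)) / (2 * h)
```

    One eigen-decomposition yields the sensitivity to every parameter, which is what makes adjoint-based approaches attractive when the parameter count is large.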

  9. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.

  10. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  11. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    Continuing studies associated with the development of the quasi-analytical (QA) sensitivity method for three dimensional transonic flow about wings are presented. Furthermore, initial results using the quasi-analytical approach were obtained and compared to those computed using the finite difference (FD) approach. The basic goals achieved were: (1) carrying out various debugging operations pertaining to the quasi-analytical method; (2) addition of section design variables to the sensitivity equation in the form of multiple right hand sides; (3) reconfiguring the analysis/sensitivity package in order to facilitate the execution of analysis/FD/QA test cases; and (4) enhancing the display of output data to allow careful examination of the results and to permit various comparisons of sensitivity derivatives obtained using the FD/QA methods to be conducted easily and quickly. In addition to discussing the above goals, the results of executing subcritical and supercritical test cases are presented.

  12. New Uses for Sensitivity Analysis: How Different Movement Tasks Effect Limb Model Parameter Sensitivity

    NASA Technical Reports Server (NTRS)

    Winters, J. M.; Stark, L.

    1984-01-01

    Original results for a newly developed eighth-order nonlinear limb antagonistic muscle model of elbow flexion and extension are presented. A wide variety of sensitivity analysis techniques is applied, and a systematic protocol is established that shows how the different methods can be used efficiently to complement one another for maximum insight into model sensitivity. It is explicitly shown how the sensitivity of output behaviors to model parameters is a function of the controller input sequence, i.e., of the movement task. When the task is changed (for instance, from an input sequence that results in the usual fast movement task to a slower movement that may also involve external loading, etc.) the set of parameters with high sensitivity will in general also change. Such task-specific use of sensitivity analysis techniques identifies the set of parameters most important for a given task, and even suggests task-specific model reduction possibilities.

  13. Reliability and sensitivity analysis of a system with multiple unreliable service stations and standby switching failures

    NASA Astrophysics Data System (ADS)

    Ke, Jyh-Bin; Lee, Wen-Chiung; Wang, Kuo-Hsiung

    2007-07-01

    This paper presents the reliability and sensitivity analysis of a system with M primary units, W warm standby units, and R unreliable service stations, where warm standby units switching to the primary state might fail. Failure times of primary and warm standby units are assumed to have exponential distributions, and service times of the failed units are exponentially distributed. In addition, breakdown times and repair times of the service stations also follow exponential distributions. Expressions for the system reliability, R_Y(t), and the mean time to system failure, MTTF, are derived. Sensitivity and relative-sensitivity analyses of the system reliability and the mean time to failure with respect to the system parameters are also investigated.
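    As a concrete illustration of how such exponential-failure models are evaluated, the sketch below (our own minimal example with M = 2, W = 1 and a single repair station, not the paper's full M/W/R system) computes the MTTF of an absorbing Markov chain, plus a finite-difference sensitivity with respect to the repair rate:

```python
import numpy as np

# Hedged sketch, not the paper's full model: MTTF of a small Markov
# reliability model with 2 primary units, 1 warm standby and a single
# repair station, all times exponential. The state is the number of
# failed units; the system is down once 2 units have failed.
lam, lam_s, mu = 0.1, 0.02, 1.0   # primary failure, standby failure, repair rates

# Generator restricted to the transient (up) states {0, 1}:
#   state 0 -> 1 at rate 2*lam + lam_s (a primary or the standby fails)
#   state 1 -> 0 at rate mu (repair); 1 -> down at rate 2*lam
Q = np.array([[-(2*lam + lam_s),  2*lam + lam_s],
              [mu,               -(mu + 2*lam)]])

# Mean time to absorption starting from state 0: t = (-Q)^{-1} * 1
t = np.linalg.solve(-Q, np.ones(2))
MTTF = t[0]

# finite-difference sensitivity of MTTF with respect to the repair rate mu
def mttf(mu_):
    Q_ = np.array([[-(2*lam + lam_s),  2*lam + lam_s],
                   [mu_,              -(mu_ + 2*lam)]])
    return np.linalg.solve(-Q_, np.ones(2))[0]

h = 1e-6
dMTTF_dmu = (mttf(mu + h) - mttf(mu - h)) / (2 * h)
```

    Faster repair lengthens the mean time to failure, so the sensitivity with respect to mu comes out positive.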

  14. Development of a sensitivity analysis technique for multiloop flight control systems

    NASA Technical Reports Server (NTRS)

    Vaillard, A. H.; Paduano, J.; Downing, D. R.

    1985-01-01

    This report presents the development and application of a sensitivity analysis technique for multiloop flight control systems. This analysis yields very useful information on the sensitivity of the relative-stability criteria of the control system with respect to variations or uncertainties in the system and controller elements. The sensitivity analysis technique developed is based on the computation of the singular values and singular-value gradients of a feedback-control system. The method is applicable to single-input/single-output as well as multiloop continuous-control systems. Application to sampled-data systems is also explored. The sensitivity analysis technique was applied to a continuous yaw/roll damper stability augmentation system of a typical business jet, and the results show that the analysis is very useful in determining the system elements which have the largest effect on the relative stability of the closed-loop system. As a secondary product of the research reported here, the relative stability criteria based on the concept of singular values were explored.
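    The singular-value criterion the report builds on can be sketched in a few lines. The loop transfer matrix below is made up for illustration (not the yaw/roll damper model); the margin is the smallest singular value of the return difference matrix I + L(jw) over a frequency grid:

```python
import numpy as np

# Hedged illustration of the singular-value stability margin: the minimum
# singular value of I + L(jw) over frequency measures how far the loop is
# from singularity (a multivariable generalization of the distance to -1).
# The 2x2 diagonal loop below uses made-up first-order channels.
def L(w, k=2.0):
    s = 1j * w
    return np.diag([k / (s + 1.0), k / (s + 2.0)])

freqs = np.logspace(-2, 2, 400)
margin = min(np.linalg.svd(np.eye(2) + L(w), compute_uv=False).min()
             for w in freqs)
```

    A singular-value gradient with respect to a plant or controller element would then indicate which element most erodes this margin, which is the report's use of the quantity.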

  15. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for either step in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
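    The bootstrap convergence idea can be sketched as follows. This is our own minimal construction, using standardized regression coefficients on a toy linear model rather than the paper's criteria for Hymod/HBV/SWAT: resampling the same model evaluations gives confidence intervals whose width indicates whether the sensitivity estimates, and hence ranking and screening, have stabilised.

```python
import numpy as np

# Toy stand-in for the hydrological models (our assumption): a linear
# response whose sensitivity measure is the absolute standardized
# regression coefficient (SRC) of each input.
rng = np.random.default_rng(0)

def model(X):
    return 4.0 * X[:, 0] + 2.0 * X[:, 1] + 0.5 * X[:, 2]

def src_indices(X, y):
    A = np.column_stack([np.ones(len(y)), X])
    b = np.linalg.lstsq(A, y, rcond=None)[0]
    return np.abs(b[1:]) * X.std(axis=0) / y.std()

N = 2000
X = rng.uniform(0.0, 1.0, size=(N, 3))
y = model(X)
s = src_indices(X, y)                      # point estimates

# bootstrap: convergence is judged from the width of the 95% interval
B = 200
boot = np.array([src_indices(X[idx], y[idx])
                 for idx in (rng.integers(0, N, N) for _ in range(B))])
width = np.percentile(boot, 97.5, axis=0) - np.percentile(boot, 2.5, axis=0)
```

    Ranking may converge (the ordering of s stabilises) long before the interval widths shrink enough to call the index values themselves converged, which is the paper's central observation.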

  16. Sampling-Based Stochastic Sensitivity Analysis Using Score Functions for RBDO Problems with Correlated Random Variables

    DTIC Science & Technology

    2010-08-01

    This study presents a methodology for computing stochastic sensitivities with respect to the design variables, which are the…

  17. Omitted Variable Sensitivity Analysis with the Annotated Love Plot

    ERIC Educational Resources Information Center

    Hansen, Ben B.; Fredrickson, Mark M.

    2014-01-01

    The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…

  18. Phase 1 of the near term hybrid passenger vehicle development program. Appendix D: Sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Traversi, M.

    1979-01-01

    Data are presented on the sensitivity of: (1) mission analysis results to the boundary values given for number of passenger cars and average annual vehicle miles traveled per car; (2) vehicle characteristics and performance to specifications; and (3) tradeoff study results to the expected parameters.

  19. Global sensitivity analysis for urban water quality modelling: Terminology, convergence and comparison of different methods

    NASA Astrophysics Data System (ADS)

    Vanrolleghem, Peter A.; Mannina, Giorgio; Cosenza, Alida; Neumann, Marc B.

    2015-03-01

    Sensitivity analysis represents an important step in improving the understanding and use of environmental models. Indeed, by means of global sensitivity analysis (GSA), modellers may identify both important (factor prioritisation) and non-influential (factor fixing) model factors. No general rule has yet been defined for verifying the convergence of the GSA methods. In order to fill this gap, this paper presents a convergence analysis of three widely used GSA methods (SRC, Extended FAST and Morris screening) for an urban drainage stormwater quality-quantity model. After convergence was achieved, the results of each method were compared. In particular, a discussion on peculiarities, applicability, and reliability of the three methods is presented. Moreover, a graphical Venn-diagram-based classification scheme and a precise terminology for better identifying important, interacting and non-influential factors for each method are proposed. In terms of convergence, it was shown that sensitivity indices related to factors of the quantity model achieve convergence faster. Results for the Morris screening method deviated considerably from the other methods. Factors related to the quality model require a much higher number of simulations than the number suggested in the literature for achieving convergence with this method. In fact, the results have shown that the term "screening" is improperly used, as the method may exclude important factors from further analysis. Moreover, for the presented application the convergence analysis shows more stable sensitivity coefficients for the Extended-FAST method compared to SRC and Morris screening. Substantial agreement in terms of factor fixing was found between the Morris screening and Extended FAST methods. In general, the water quality related factors exhibited more important interactions than factors related to water quantity. Furthermore, in contrast to water quantity model outputs, water quality model outputs were found to be characterised by high non-linearity.
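    A minimal sketch of the Morris screening step discussed above (with a made-up test function, not the stormwater model) shows how the mean absolute elementary effect mu* separates influential from negligible factors, and why a factor can only be fixed safely once its mu* has converged near zero:

```python
import numpy as np

# One-at-a-time Morris screening sketch (illustrative toy model of our
# own): mu* is the mean absolute elementary effect over r trajectories;
# a factor whose mu* stays near zero is a candidate for "factor fixing".
rng = np.random.default_rng(1)

def model(x):
    return np.sin(x[0]) + 5.0 * x[1] ** 2 + 0.01 * x[2]

def morris_mu_star(model, k=3, r=50, delta=0.1):
    ee = np.zeros((r, k))
    for i in range(r):
        x = rng.uniform(0.0, 1.0 - delta, k)   # random base point
        f0 = model(x)
        for j in range(k):
            xp = x.copy()
            xp[j] += delta                     # perturb one factor at a time
            ee[i, j] = abs(model(xp) - f0) / delta
    return ee.mean(axis=0)

mu_star = morris_mu_star(model)
```

    With too few trajectories the mu* estimates fluctuate, and a genuinely important factor can fall below the screening threshold, which is exactly the misuse of "screening" the paper warns about.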

  20. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
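    The variogram notion that VARS builds on can be sketched in one dimension (a simplified illustration of our own, not the STAR-VARS algorithm): gamma(h) = 0.5 * E[(y(x+h) - y(x))^2] characterises sensitivity at scale h, with small h probing derivative-like (Morris-style) information and large h approaching variance-based (Sobol-style) information.

```python
import numpy as np

# 1-D directional variogram of a model response (our simplified sketch of
# the idea behind VARS, not the published sampling strategy):
#   gamma(h) = 0.5 * E[(y(x + h) - y(x))^2]
def directional_variogram(f, h_values, n=4000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(0.0, 1.0 - max(h_values), n)   # keep x + h in range
    return np.array([0.5 * np.mean((f(x + h) - f(x)) ** 2)
                     for h in h_values])

response = lambda x: np.sin(2.0 * np.pi * x)       # made-up smooth response
hs = np.array([0.05, 0.1, 0.2])
gamma = directional_variogram(response, hs)
```

    For this smooth response the variogram grows with h, so a single curve carries both the small-scale and large-scale sensitivity information that VARS exploits.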

  1. Adjoint sensitivity analysis of plasmonic structures using the FDTD method.

    PubMed

    Zhang, Yu; Ahmed, Osman S; Bakr, Mohamed H

    2014-05-15

    We present an adjoint variable method for estimating the sensitivities of arbitrary responses with respect to the parameters of dispersive discontinuities in nanoplasmonic devices. Our theory is formulated in terms of the electric field components in the vicinity of the perturbed discontinuities. The adjoint sensitivities are computed using at most one extra finite-difference time-domain (FDTD) simulation regardless of the number of parameters. Our approach is illustrated through the sensitivity analysis of an add-drop coupler consisting of a square ring resonator between two parallel waveguides. The computed adjoint sensitivities of the scattering parameters are compared with those obtained using the accurate but computationally expensive central finite difference approach.
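    The adjoint bookkeeping generalises beyond FDTD. As a hedged, generic sketch (a linear-algebra analogue rather than the paper's electromagnetic formulation), the block below differentiates a response J(p) = c^T x(p) with state equation K(p) x = f: one extra adjoint solve provides the gradient with respect to every parameter, matching central finite differences.

```python
import numpy as np

# Generic adjoint-variable sketch (not the FDTD formulation itself): for
# J(p) = c^T x(p) with K(p) x = f, one adjoint solve K^T lam = c yields
# dJ/dp_i = -lam^T (dK/dp_i) x for every parameter at once.
K0 = np.array([[4.0, 1.0],
               [1.0, 3.0]])
dK = [np.array([[1.0, 0.0], [0.0, 0.0]]),   # dK/dp1
      np.array([[0.0, 0.0], [0.0, 1.0]])]   # dK/dp2
f = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

def J(p):
    K = K0 + p[0] * dK[0] + p[1] * dK[1]
    return c @ np.linalg.solve(K, f)

x = np.linalg.solve(K0, f)            # one forward solve
lam = np.linalg.solve(K0.T, c)        # one adjoint solve
grad_adj = np.array([-(lam @ dKi @ x) for dKi in dK])

# central finite differences need two extra solves per parameter
h = 1e-6
grad_fd = np.array([(J(h * e) - J(-h * e)) / (2.0 * h) for e in np.eye(2)])
```

    The cost contrast mirrors the paper's point: one extra (adjoint) simulation versus two extra simulations per parameter for central differences.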

  2. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby infecting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  3. Inferring Instantaneous, Multivariate and Nonlinear Sensitivities for the Analysis of Feedback Processes in a Dynamical System: Lorenz Model Case Study

    NASA Technical Reports Server (NTRS)

    Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system, based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
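    For the Lorenz case study mentioned above, the "correct" instantaneous sensitivities are simply the entries of the analytic Jacobian of the Lorenz equations. The sketch below (a direct check of those target quantities, not the paper's neural-network estimator) compares the analytic Jacobian with finite differences:

```python
import numpy as np

# Instantaneous sensitivities for the Lorenz system: for dx/dt = f(x),
# the sensitivity of f_i to state j is the Jacobian entry J_ij. This is
# the analytic benchmark, not the neural-network estimator itself.
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def f(x):
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

def jacobian(x):
    return np.array([[-sigma, sigma, 0.0],
                     [rho - x[2], -1.0, -x[0]],
                     [x[1], x[0], -beta]])

x = np.array([1.0, 2.0, 3.0])
h = 1e-6
J_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                 for e in np.eye(3)]).T
```

    A statistical estimator trained on trajectory data would be judged by how closely it recovers these state-dependent Jacobian entries.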

  4. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  5. Spectral characterization of biophysical characteristics in a boreal forest - Relationship between Thematic Mapper band reflectance and leaf area index for Aspen

    NASA Technical Reports Server (NTRS)

    Badhwar, G. D.; Macdonald, R. B.; Hall, F. G.; Carnes, J. G.

    1986-01-01

    Results from analysis of a data set of simultaneous measurements of Thematic Mapper band reflectance and leaf area index are presented. The measurements were made over pure stands of Aspen in the Superior National Forest of northern Minnesota. The analysis indicates that the reflectance may be sensitive to the leaf area index of the Aspen early in the season. The sensitivity disappears as the season progresses. Based on the results of model calculations, an explanation for the observed relationship is developed. The model calculations indicate that the sensitivity of the reflectance to the Aspen overstory depends on the amount of understory present.
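    The saturation effect described above is commonly captured with a Beer-law canopy model. The sketch below uses illustrative coefficients of our own (not values fitted in the study) to show why dR/dLAI is large early in the season and vanishes as leaf area accumulates:

```python
import numpy as np

# Beer-law canopy reflectance sketch with made-up coefficients (our
# stand-in, not the paper's model):
#   R(LAI) = R_canopy + (R_under - R_canopy) * exp(-k * LAI)
# so dR/dLAI decays exponentially with LAI.
def reflectance(lai, r_canopy=0.45, r_under=0.25, k=0.9):
    return r_canopy + (r_under - r_canopy) * np.exp(-k * lai)

def dR_dLAI(lai, h=1e-5):
    return (reflectance(lai + h) - reflectance(lai - h)) / (2.0 * h)

early = abs(dR_dLAI(0.5))   # low LAI, early season: reflectance still sensitive
late = abs(dR_dLAI(4.0))    # high LAI, late season: sensitivity has saturated
```

    The exponential term also encodes the understory dependence the model calculations point to: once the canopy closes, the understory contribution, and with it the LAI sensitivity, is extinguished.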

  6. Caregiver Sensitivity, Contingent Social Responsiveness, and Secure Infant Attachment

    ERIC Educational Resources Information Center

    Dunst, Carl J.; Kassow, Danielle Z.

    2008-01-01

    Findings from two research syntheses of the relationship between caregiver sensitivity and secure infant attachment and one research synthesis of factors associated with increased caregiver use of a sensitive interactional style are presented. The main focus of analysis was the extent to which different measures of caregiver contingent social…

  7. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of uncertainties in fuzzy inputs on the output. The global sensitivity analysis model not only ranks the importance of the fuzzy inputs but also reflects, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and optimization of structural systems.

  8. On the Validity and Sensitivity of the Phonics Screening Check: Erratum and Further Analysis

    ERIC Educational Resources Information Center

    Gilchrist, James M.; Snowling, Margaret J.

    2018-01-01

    Duff, Mengoni, Bailey and Snowling ("Journal of Research in Reading," 38: 109-123; 2015) evaluated the sensitivity and specificity of the phonics screening check against two reference standards. This report aims to correct a minor data error in the original article and to present further analysis of the data. The methods used are…

  9. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations, is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
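    The contrast between analytical sensitivity expressions and finite differencing can be shown on a scalar LQR problem (our own stand-in, not the aeroservoelastic model): the algebraic Riccati equation has a closed form, so dP/dq from the analytical expression can be checked against redesigning the controller at perturbed parameter values.

```python
import numpy as np

# Scalar stand-in for the analytical-vs-finite-difference comparison (not
# the paper's LQG equations): for xdot = a*x + b*u with cost
# integral(q*x^2 + r*u^2), the Riccati solution has the closed form
#   P(q) = r * (a + sqrt(a^2 + b^2 * q / r)) / b^2,
# which differentiates to dP/dq = 1 / (2 * sqrt(a^2 + b^2 * q / r)).
a, b, q, r = -1.0, 1.0, 2.0, 1.0

def riccati_P(q_):
    return r * (a + np.sqrt(a**2 + b**2 * q_ / r)) / b**2

# analytical sensitivity of the Riccati solution (and hence of the gain
# k = b*P/r) to the state-weighting parameter q
analytic = 1.0 / (2.0 * np.sqrt(a**2 + b**2 * q / r))

# finite differences re-solve the design problem at perturbed q
h = 1e-6
fd = (riccati_P(q + h) - riccati_P(q - h)) / (2.0 * h)
```

    The finite-difference route requires two complete redesigns per parameter, which is the efficiency gap the paper reports in the matrix-valued LQG setting.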

  10. Impact of Weight Loss at Presentation on Survival in Epidermal Growth Factor Receptor Tyrosine Kinase Inhibitors (EGFR-TKI) Sensitive Mutant Advanced Non-small Cell Lung Cancer (NSCLC) Treated with First-line EGFR-TKI.

    PubMed

    Lin, Liping; Zhao, Juanjuan; Hu, Jiazhu; Huang, Fuxi; Han, Jianjun; He, Yan; Cao, Xiaolong

    2018-01-01

    Purpose: To evaluate the impact of weight loss at presentation on treatment outcomes of first-line EGFR tyrosine kinase inhibitors (EGFR-TKI) in EGFR-TKI-sensitive mutant NSCLC patients. Methods: We retrospectively analyzed the clinical outcomes of 75 consecutive advanced NSCLC patients with EGFR-TKI-sensitive mutations (exon 19 deletion or exon 21 L858R) who received first-line gefitinib or erlotinib therapy, stratified by weight loss status at presentation, in our single center. Results: Of the 75 patients, 49 (65.3%) had no weight loss and 26 (34.7%) had weight loss at presentation; the objective response rate (ORR) to EGFR-TKI treatment was similar between the two groups (79.6% vs. 76.9%, p = 0.533). Patients without weight loss at presentation had significantly longer median progression-free survival (PFS) (12.4 months vs. 7.6 months; hazard ratio [HR] 0.356, 95% confidence interval [CI] 0.212-0.596, p < 0.001) and overall survival (OS) (28.5 months vs. 20.7 months; HR 0.408, 95% CI 0.215-0.776, p = 0.006) than those with weight loss at presentation. Moreover, the analysis stratified by EGFR-TKI-sensitive mutation type found a similar trend between the two groups, except for OS in EGFR exon 21 L858R mutation patients. Multivariate analysis identified weight loss at presentation and EGFR-TKI-sensitive mutation type as independent predictive factors for PFS and OS. Conclusions: Weight loss at presentation had a detrimental impact on PFS and OS in EGFR-TKI-sensitive mutant advanced NSCLC patients treated with first-line EGFR-TKI. It should be considered an important factor in treatment decisions and in the design of EGFR-TKI clinical trials.

  11. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  12. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux.
    The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and suggesting that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization and sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.

  13. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
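    The second algorithm described above, which assembles system sensitivity equations from the local partial derivatives of each subsystem, can be sketched for a tiny two-discipline system with made-up coupling coefficients: the total derivative solves (I - J) * dy/dx = (dy/dx)_local, where J collects the cross-coupling partials, and a finite-difference pass over the full coupled analysis confirms it.

```python
import numpy as np

# Sketch of system sensitivity equations for a two-part coupled system
# (our minimal example, not the paper's formulation):
#   y1 = f1(x, y2),  y2 = f2(x, y1)
# The local partials are assumed known from each subsystem's own analysis.
df1_dx, df1_dy2 = 2.0, 0.3
df2_dx, df2_dy1 = -1.0, 0.5

J = np.array([[0.0, df1_dy2],
              [df2_dy1, 0.0]])          # cross-coupling partials
local = np.array([df1_dx, df2_dx])      # direct partials w.r.t. x
dy_dx = np.linalg.solve(np.eye(2) - J, local)

# cross-check: finite differences on the full coupled (linear) system,
# the costly route the method is designed to avoid
def solve_system(x):
    A = np.array([[1.0, -df1_dy2], [-df2_dy1, 1.0]])
    b = np.array([df1_dx * x, df2_dx * x])
    return np.linalg.solve(A, b)

h = 1e-6
fd = (solve_system(1.0 + h) - solve_system(1.0 - h)) / (2.0 * h)
```

    Each subsystem only contributes its own local partials, so no finite differencing over the assembled multidisciplinary analysis is needed.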

  14. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop new, lower-dimensional equations for the stability and performance analysis of this type of system than what is currently available. In addition, new closed-form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example in which the communication links are allowed to fail randomly.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers leads to better performance in terms of convergence and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when using the direct differentiation sensitivity analysis method.

  16. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.

  17. Sensitivity of wildlife habitat models to uncertainties in GIS data

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Davis, Frank W.; Cogan, Christopher B.

    1992-01-01

    Decision makers need to know the reliability of output products from GIS analysis. For many GIS applications, it is not possible to compare these products to an independent measure of 'truth'. Sensitivity analysis offers an alternative means of estimating reliability. In this paper, we present a GIS-based statistical procedure for estimating the sensitivity of wildlife habitat models to uncertainties in input data and model assumptions. The approach is demonstrated in an analysis of habitat associations derived from a GIS database for the endangered California condor. Alternative data sets were generated to compare results over a reasonable range of assumptions about several sources of uncertainty. Sensitivity analysis indicated that condor habitat associations are relatively robust, and the results have increased our confidence in our initial findings. The uncertainties and methods described in the paper have general relevance for many GIS applications.

  18. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    PubMed

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. The metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each of the components of MetS as an indicator of the presence of MetS itself. Five statistics of the sensitivity analysis were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value (precision), negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men), blood pressure presented the highest specificity for women (97.7%), and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, showing a value of 75% or more in four of them. Differences by sex are detected in the averages of all components of MetS in young people without alterations. Young Mexicans are highly prone to acquiring MetS: 71% have at least one and up to five MetS parameters altered, and 13.4% of them have MetS. Of the five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS, and triglycerides is the best parameter if a single factor is to be taken as sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
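The five screening statistics named in this abstract all follow from the 2x2 confusion matrix of a single component against the full MetS diagnosis. A short sketch with hypothetical counts (not the paper's data):

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Five screening statistics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),               # true positive rate
        "specificity": tn / (tn + fp),               # true negative rate
        "ppv": tp / (tp + fp),                       # positive predictive value (precision)
        "npv": tn / (tn + fn),                       # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for one component vs. the full MetS diagnosis:
stats = diagnostic_stats(tp=120, fp=30, fn=10, tn=340)
```

With these counts, precision is 120/150 = 0.8 and accuracy is 460/500 = 0.92; the study applies the same five formulas to each MetS component in turn.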

  19. Upper limb strength estimation of physically impaired persons using a musculoskeletal model: A sensitivity analysis.

    PubMed

    Carmichael, Marc G; Liu, Dikai

    2015-01-01

    The sensitivity of upper limb strength calculated from a musculoskeletal model was analyzed, with a focus on how the sensitivity is affected when the model is adapted to represent a person with physical impairment. Sensitivity was calculated with respect to four muscle-tendon parameters: muscle peak isometric force, muscle optimal length, muscle pennation angle, and tendon slack length. Results obtained from a musculoskeletal model of average strength showed the highest sensitivity to tendon slack length, followed by muscle optimal length and peak isometric force, which is consistent with existing studies. Muscle pennation angle was relatively insensitive. The analysis was repeated after adapting the musculoskeletal model to represent persons with varying severities of physical impairment. Results showed that utilizing the weakened model significantly increased the sensitivity of the calculated strength at the hand, with previously insensitive parameters becoming highly sensitive. This increased sensitivity presents a significant challenge in applications utilizing musculoskeletal models to represent impaired individuals.

  20. Approximate analysis for repeated eigenvalue problems with applications to controls-structure integrated design

    NASA Technical Reports Server (NTRS)

    Kenny, Sean P.; Hou, Gene J. W.

    1994-01-01

    A method for eigenvalue and eigenvector approximate analysis for the case of repeated eigenvalues with distinct first derivatives is presented. The approximate analysis method developed involves a reparameterization of the multivariable structural eigenvalue problem in terms of a single positive-valued parameter. The resulting equations yield first-order approximations to changes in the eigenvalues and the eigenvectors associated with the repeated eigenvalue problem. This work also presents a numerical technique that facilitates the definition of an eigenvector derivative for the case of repeated eigenvalues with repeated eigenvalue derivatives (of all orders). Examples are given which demonstrate the application of such equations for sensitivity and approximate analysis. Emphasis is placed on the application of sensitivity analysis to large-scale structural and controls-structures optimization problems.

  1. Proof-of-Concept Study for Uncertainty Quantification and Sensitivity Analysis using the BRL Shaped-Charge Example

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Justin Matthew

    These are the slides for a graduate presentation at Mississippi State University. They cover the following: the BRL Shaped-Charge Geometry in PAGOSA, a mesh refinement study, surrogate modeling using a radial basis function network (RBFN), ruling out parameters using sensitivity analysis (equation of state study), uncertainty quantification (UQ) methodology, and sensitivity analysis (SA) methodology. In summary, a mesh convergence study was used to ensure that solutions were numerically stable by comparing PDV data between simulations. A Design of Experiments (DOE) method was used to reduce the simulation space to study the effects of the Jones-Wilkins-Lee (JWL) parameters for the Composition B main charge. Uncertainty was quantified by computing the 95% data range about the median of simulation output using a brute force Monte Carlo (MC) random sampling method. Parameter sensitivities were quantified using the Fourier Amplitude Sensitivity Test (FAST) spectral analysis method, where it was determined that detonation velocity, initial density, C1, and B1 controlled jet tip velocity.
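The brute-force Monte Carlo step described here, the 95% data range about the median of the simulation output, reduces to a percentile computation once samples are in hand. A minimal sketch with synthetic stand-in data (not the PAGOSA output; the distribution and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-in for Monte Carlo simulation output (e.g. jet tip velocity);
# synthetic normal samples here, purely illustrative.
samples = rng.normal(loc=7.5, scale=0.2, size=10_000)

median = np.median(samples)
lo, hi = np.percentile(samples, [2.5, 97.5])  # central 95% data range
```

In the actual workflow, each sample would come from one PAGOSA run with randomly drawn input parameters, and `[lo, hi]` would be reported as the uncertainty band about the median.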

  2. Overview of the AVT-191 Project to Assess Sensitivity Analysis and Uncertainty Quantification Methods for Military Vehicle Design

    NASA Technical Reports Server (NTRS)

    Benek, John A.; Luckring, James M.

    2017-01-01

    A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.

  3. Reinforcement Sensitivity and Social Anxiety in Combat Veterans

    PubMed Central

    Kimbrel, Nathan A.; Meyer, Eric C.; DeBeer, Bryann B.; Mitchell, John T.; Kimbrel, Azure D.; Nelson-Gray, Rosemery O.; Morissette, Sandra B.

    2017-01-01

    Objective: The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Method: Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. Results: As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. Conclusion: The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity. PMID:28966424

  4. Reinforcement Sensitivity and Social Anxiety in Combat Veterans.

    PubMed

    Kimbrel, Nathan A; Meyer, Eric C; DeBeer, Bryann B; Mitchell, John T; Kimbrel, Azure D; Nelson-Gray, Rosemery O; Morissette, Sandra B

    2016-08-01

    The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity.

  5. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.

  6. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different objective values of winding products, a mechanical property (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing. PMID:29385048

  7. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
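Of the statistics listed, the composite scaled sensitivity (CSS) summarizes how much the full observation set informs each parameter. A minimal sketch under the usual definition, with a hypothetical Jacobian and weights rather than the TOPKAPI setup:

```python
import numpy as np

def composite_scaled_sensitivity(jac, params, weights):
    """CSS_j = sqrt( (1/N) * sum_i ( (dy_i/db_j) * b_j * sqrt(w_i) )^2 ).

    jac:     (N_obs, N_par) sensitivity matrix dy_i/db_j
    params:  (N_par,) current parameter values b_j
    weights: (N_obs,) observation weights w_i
    """
    n_obs = jac.shape[0]
    # Dimensionless scaled sensitivities, one row per observation.
    scaled = jac * params[None, :] * np.sqrt(weights)[:, None]
    return np.sqrt((scaled ** 2).sum(axis=0) / n_obs)

# Hypothetical 2-observation, 2-parameter example:
css = composite_scaled_sensitivity(
    np.array([[1.0, 0.0], [0.0, 2.0]]),  # Jacobian dy_i/db_j
    np.array([3.0, 4.0]),                # parameter values
    np.array([1.0, 1.0]),                # observation weights
)
```

Parameters with CSS values much smaller than the largest CSS are candidates for being fixed rather than estimated, which is how the statistic guides calibration of the 35 model parameters.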

  8. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  9. Post-buckling of a pressured biopolymer spherical shell with the mode interaction

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Ru, C. Q.

    2018-03-01

    Imperfection sensitivity is essential for the mechanical behaviour of biopolymer shells characterized by high geometric heterogeneity. The present work studies initial post-buckling and imperfection sensitivity of a pressured biopolymer spherical shell based on non-axisymmetric buckling modes and the associated mode interaction. Our results indicate that for biopolymer spherical shells with moderate radius-to-thickness ratio (say, less than 30) and smaller effective bending thickness (say, less than 0.2 times the average shell thickness), the imperfection sensitivity predicted based on the axisymmetric mode without the mode interaction is close to the present results based on non-axisymmetric modes with the mode interaction, with small (typically less than 10%) relative errors. However, for biopolymer spherical shells with larger effective bending thickness, the maximum load an imperfect shell can sustain as predicted by the present non-axisymmetric analysis can be significantly (typically around 30%) lower than that predicted based on the axisymmetric mode without the mode interaction. In such cases, a more accurate non-axisymmetric analysis with the mode interaction, as given in the present work, is required for the imperfection sensitivity of pressured buckling of biopolymer spherical shells. Finally, the implications of the present study for two specific types of biopolymer spherical shells (viral capsids and ultrasound contrast agents) are discussed.

  10. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom, and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
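Parameter rankings of this kind are commonly computed with the normalized forward sensitivity index, S_p = (dR0/dp)(p/R0), which measures the relative change in R0 per relative change in a parameter. A small sketch using a finite-difference approximation and an illustrative R0 expression (a generic SIR-type formula with hypothetical parameter values, not the paper's five-compartment model):

```python
def normalized_sensitivity(f, params, name, h=1e-6):
    """Normalized forward sensitivity index: S_p = (dR0/dp) * (p / R0),
    approximated by a relative finite-difference perturbation."""
    p = dict(params)          # copy so the caller's dict is untouched
    base = f(p)
    p[name] *= (1 + h)        # relative perturbation of one parameter
    return (f(p) - base) / (base * h)

# Illustrative basic reproduction number (not the Lassa fever model):
# R0 = beta * Lambda / (mu * (mu + gamma))
def r0(p):
    return p["beta"] * p["Lambda"] / (p["mu"] * (p["mu"] + p["gamma"]))

params = {"beta": 0.3, "Lambda": 10.0, "mu": 0.02, "gamma": 0.1}
s_beta = normalized_sensitivity(r0, params, "beta")  # R0 is linear in beta, so S ~ +1
s_mu = normalized_sensitivity(r0, params, "mu")      # negative: higher mu lowers R0
```

A positive index means R0 grows with the parameter; the parameter with the largest |S_p| is the most influential, which is how the abstract's ranking (immigration, recovery rate, contact) would be obtained.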

  11. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivity are presented for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing, and the fixed-abrasive slicing technique (FAST). Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life), and utilities, and production parameters such as slicing rate, slices per centimeter, and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.

  12. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.

  13. A new sensitivity analysis for structural optimization of composite rotor blades

    NASA Technical Reports Server (NTRS)

    Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An

    1993-01-01

    This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability, and response characteristics of a rotor blade in hover and forward flight. The formulation is denoted a semi-analytical approach, because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semi-analytical approach for calculating sensitivity derivatives when compared to a pure finite difference approach.

  14. High-sensitivity ESCA instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, R.D.; Herglotz, H.K.; Lee, J.D.

    1973-01-01

    A new electron spectroscopy for chemical analysis (ESCA) instrument has been developed to provide high sensitivity and efficient operation for laboratory analysis of composition and chemical bonding in very thin surface layers of solid samples. High sensitivity is achieved by means of the high-intensity, efficient x-ray source described by Davies and Herglotz at the 1968 Denver X-Ray Conference, in combination with the new electron energy analyzer described by Lee at the 1972 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. A sample chamber designed to provide for rapid introduction and replacement of samples has adequate facilities for various sample treatments and conditioning followed immediately by ESCA analysis of the sample. Examples of application are presented, demonstrating the sensitivity and resolution achievable with this instrument. Its usefulness in trace surface analysis is shown, and some "chemical shifts" measured by the instrument are compared with those obtained by x-ray spectroscopy.

  15. Analysis of Sensitivity Experiments - An Expanded Primer

    DTIC Science & Technology

    2017-03-08

    diehard practitioners. The difficulty associated with mastering statistical inference presents a true dilemma. Statistics is an extremely applied...lost, perhaps forever. In other words, when on this safari, you need a guide. This report is designed to be a guide, of sorts. It focuses on analytical...estimated accurately if our analysis is to have real meaning. For this reason, the sensitivity test procedure is designed to concentrate measurements

  16. Probabilistic analysis of the efficiency of the damping devices against nuclear fuel container falling

    NASA Astrophysics Data System (ADS)

    Králik, Juraj

    2017-07-01

    The paper presents the probabilistic and sensitivity analysis of the efficiency of the damping devices of a nuclear power plant cover under the impact of a dropped container of nuclear fuel of type TK C30. A three-dimensional finite element idealization of the nuclear power plant structure is used. A steel pipe damper system is proposed for dissipation of the kinetic energy of the container's free fall. Experimental results on the behaviour of the shock damper's basic element under impact loads are presented. The Newmark integration method is used for the solution of the dynamic equations. The sensitivity and probabilistic analysis of the damping devices was realized in the AntHILL and ANSYS software.

  17. Sensitivity analysis of Jacobian determinant used in treatment planning for lung cancer

    NASA Astrophysics Data System (ADS)

    Shao, Wei; Gerard, Sarah E.; Pan, Yue; Patton, Taylor J.; Reinhardt, Joseph M.; Durumeric, Oguz C.; Bayouth, John E.; Christensen, Gary E.

    2018-03-01

    Four-dimensional computed tomography (4DCT) is regularly used to visualize tumor motion in radiation therapy for lung cancer. These 4DCT images can be analyzed to estimate local ventilation by finding a dense correspondence map between the end-inhalation and end-exhalation CT image volumes using deformable image registration. Lung regions with ventilation values above a threshold are labeled as regions of high pulmonary function and are avoided when possible in the radiation plan. This paper investigates the sensitivity of the relative Jacobian error to small registration errors. We present a linear approximation of the relative Jacobian error. Next, we give a formula for the sensitivity of the relative Jacobian error with respect to the Jacobian of the perturbation displacement field. Preliminary sensitivity analysis results are presented using 4DCT scans from 10 individuals. For each subject, we generated 6400 random, smooth, biologically plausible perturbation vector fields using a cubic B-spline model. We showed that the correlation between the Jacobian determinant and the Frobenius norm of the sensitivity matrix is close to -1, which implies that the relative Jacobian error in high-functional regions is less sensitive to noise. We also showed that small displacement errors averaging 0.53 mm may lead to a 10% relative change in the Jacobian determinant. We finally showed that the average relative Jacobian error and the sensitivity of the system are positively correlated across all subjects (close to +1), i.e. regions with high sensitivity have more error in the Jacobian determinant on average.
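The quantity under study, the Jacobian determinant of the registration map, is det(I + grad u) for a displacement field u, evaluated per voxel as a local volume-change (ventilation) measure. A small 2-D sketch with a synthetic uniform-expansion field (illustrative only, not the 4DCT data or the paper's pipeline):

```python
import numpy as np

def jacobian_determinant_2d(ux, uy, spacing=1.0):
    """det(I + grad u) per grid point for a 2-D displacement field.

    Arrays are indexed [row (y), col (x)]; np.gradient returns the
    derivative along axis 0 (y) first, then axis 1 (x).
    """
    duy_dy, duy_dx = np.gradient(uy, spacing)
    dux_dy, dux_dx = np.gradient(ux, spacing)
    # 2x2 determinant of the deformation gradient I + grad u.
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

# Synthetic field: uniform 10% expansion in both directions.
Y, X = np.mgrid[0:10, 0:10].astype(float)
J = jacobian_determinant_2d(ux=0.1 * X, uy=0.1 * Y)
```

For this field the determinant is 1.1 * 1.1 = 1.21 everywhere (local volume gain), illustrating why J > 1 marks inhalation-expanding, high-ventilation regions in the clinical analysis.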

  18. Evaluating aquatic invertebrate vulnerability to insecticides based on intrinsic sensitivity, biological traits, and toxic mode of action.

    PubMed

    Rico, Andreu; Van den Brink, Paul J

    2015-08-01

    In the present study, the authors evaluated the vulnerability of aquatic invertebrates to insecticides based on their intrinsic sensitivity and their population-level recovery potential. The relative sensitivity of invertebrates to 5 different classes of insecticides was calculated at the genus, family, and order levels using the acute toxicity data available in the US Environmental Protection Agency ECOTOX database. Biological trait information was linked to the calculated relative sensitivity to evaluate correlations between traits and sensitivity and to calculate a vulnerability index, which combines intrinsic sensitivity and traits describing the recovery potential of populations partially exposed to insecticides (e.g., voltinism, flying strength, occurrence in drift). The analysis shows that the relative sensitivity of arthropods depends on the insecticide mode of action. Traits such as degree of sclerotization, size, and respiration type showed good correlation to sensitivity and can be used to make predictions for invertebrate taxa without a priori sensitivity knowledge. The vulnerability analysis revealed that some of the Ephemeroptera, Plecoptera, and Trichoptera taxa were vulnerable to all insecticide classes and indicated that particular gastropod and bivalve species were potentially vulnerable. Microcrustaceans (e.g., daphnids, copepods) showed low potential vulnerability, particularly in lentic ecosystems. The methods described in the present study can be used for the selection of focal species to be included as part of ecological scenarios and higher tier risk assessments. © 2015 SETAC.

  19. Prevalence of potent skin sensitizers in oxidative hair dye products in Korea.

    PubMed

    Kim, Hyunji; Kim, Kisok

    2016-09-01

    The objective of the present study was to elucidate the prevalence of potent skin sensitizers in oxidative hair dye products manufactured by Korean domestic companies. A database of hair dye products made by domestic companies and sold in the Korean market in 2013 was used to obtain information on company name, brand name, quantity of production, and ingredients. The prevalence of substances categorized as potent skin sensitizers was calculated using the hair dye ingredient database, and the pattern of concomitant presence of hair dye ingredients was analyzed using network analysis software. A total of 19 potent skin sensitizers were identified from a database that included 99 hair dye products manufactured by Korean domestic companies. Among the 19 potent skin sensitizers, the four most frequent were resorcinol, m-aminophenol, p-phenylenediamine (PPD), and p-aminophenol; these four skin-sensitizing ingredients were found in more than 50% of the products studied. Network analysis showed that resorcinol, m-aminophenol, and PPD occurred together in many hair dye products. Of the 99 products examined, the average product contained 4.4 potent sensitizers, and 82% of the products contained four or more skin sensitizers. The present results demonstrate that oxidative hair dye products made by Korean domestic manufacturers contain various numbers and types of potent skin sensitizers. Furthermore, these results suggest that some hair dye products should be used with caution to prevent adverse effects on the skin, including allergic contact dermatitis.

  20. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  1. Global Sensitivity Analysis for Identifying Important Parameters of Nitrogen Nitrification and Denitrification under Model and Scenario Uncertainties

    NASA Astrophysics Data System (ADS)

    Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.

    2017-12-01

    Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
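
The variance-based first-order index underlying such global sensitivity analyses can be estimated with a pick-freeze Monte Carlo scheme. A sketch on a toy additive model (the model, uniform inputs, and sample size are assumptions for illustration, not the paper's nitrogen reactive transport model):

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Toy additive model standing in for the reactive-transport simulator.
    return x[:, 0] + 2.0 * x[:, 1]

n, d = 200_000, 2
A = rng.uniform(size=(n, d))   # two independent Monte Carlo input matrices
B = rng.uniform(size=(n, d))
fA = model(A)

def first_order_index(i):
    """Pick-freeze estimate of S_i = Var(E[f | x_i]) / Var(f)."""
    C = B.copy()
    C[:, i] = A[:, i]          # keep input i, re-sample all the others
    return float((np.mean(fA * model(C)) - np.mean(fA) ** 2) / np.var(fA))

S = [first_order_index(i) for i in range(d)]
# For this additive model, the analytical indices are S = [0.2, 0.8].
```

Model averaging, as in the paper, would repeat this with the model output replaced by a posterior-weighted average over the alternative models and scenarios.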

  2. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.
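
The signed-rank statistic and the power of a sensitivity analysis at a given bias Γ can be illustrated by simulation, using the abstract's setting of 250 pair differences that are Normal with expectation 1/2 and variance 1. The Γ values, one-sided 5% level, and normal approximation below are assumptions for illustration; this is a toy sketch, not Rosenbaum's exact procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

def signed_rank_statistic(d):
    # Wilcoxon signed-rank statistic: sum of ranks of |d| over positive differences.
    ranks = np.argsort(np.argsort(np.abs(d))) + 1
    return float(ranks[d > 0].sum())

def sensitivity_power(gamma, n=250, effect=0.5, reps=400):
    """Fraction of simulated studies rejecting at the Gamma-biased null bound
    (one-sided 0.05, normal approximation); gamma = 1 is the randomization test."""
    p = gamma / (1.0 + gamma)            # worst-case P(positive difference) at bias Gamma
    ranks = np.arange(1, n + 1)
    mu = p * ranks.sum()
    sd = np.sqrt(p * (1 - p) * np.sum(ranks ** 2))
    crit = mu + 1.645 * sd               # upper bound of the Gamma-null distribution
    hits = sum(signed_rank_statistic(rng.normal(effect, 1.0, n)) > crit
               for _ in range(reps))
    return hits / reps

p_rand = sensitivity_power(gamma=1.0)    # power of the randomization test
p_gam3 = sensitivity_power(gamma=3.0)    # power of the sensitivity analysis at Gamma = 3
```

The simulation reproduces the qualitative point of the abstract: power that is nearly 1 in the randomized case collapses once a moderate unobserved bias must be allowed for.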

  3. Application of sensitivity-analysis techniques to the calculation of topological quantities

    NASA Astrophysics Data System (ADS)

    Gilchrist, Stuart

    2017-08-01

    Magnetic reconnection in the corona occurs preferentially at sites where the magnetic connectivity is either discontinuous or has a large spatial gradient. Hence there is a general interest in computing quantities (like the squashing factor) that characterize the gradient in the field-line mapping function. Here we present an algorithm for calculating certain (quasi-)topological quantities using mathematical techniques from the field of sensitivity analysis. The method is based on the calculation of a three-dimensional field-line mapping Jacobian from which all of the topological quantities of interest can be derived. We present the algorithm and the details of a publicly available set of libraries that implement it.

  4. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    This paper presents a polynomial approximation by which Sobol sensitivity analysis can be evaluated for all sensitivity indices. The nonlinear FEM model is approximated, and the input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified from the nonlinear FEM directly.
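
The Latin Hypercube Sampling step mentioned above can be sketched in a few lines: each input is stratified into n equal bins with exactly one sample per bin, and the bins are paired at random across inputs (NumPy sketch; the dimensions are illustrative):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n samples in [0,1)^d such that each coordinate hits each of the n strata once."""
    cells = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (cells + rng.uniform(size=(n, d))) / n

rng = np.random.default_rng(2)
X = latin_hypercube(10, 3, rng)   # 10 runs over a 3-parameter input space
```

The resulting design covers every marginal stratum of every input, which is why far fewer FEM runs are needed than with plain Monte Carlo to fit the approximation polynomial.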

  5. Adjoint sensitivity analysis of a tumor growth model and its application to spatiotemporal radiotherapy optimization.

    PubMed

    Fujarewicz, Krzysztof; Lakomiec, Krzysztof

    2016-12-01

    We investigate a spatial model of growth of a tumor and its sensitivity to radiotherapy. It is assumed that the radiation dose may vary in time and space, as in intensity-modulated radiotherapy (IMRT). The change of the final state of the tumor depends on local differences in the radiation dose and varies with the time and the place of these local changes. This leads to the concept of a tumor's spatiotemporal sensitivity to radiation, which is a function of time and space. We show how adjoint sensitivity analysis may be applied to calculate the spatiotemporal sensitivity of the finite difference scheme resulting from the partial differential equation describing the tumor growth. We demonstrate this approach on the tumor proliferation, invasion and response to radiotherapy (PIRT) model and compare the accuracy and computational effort of the method with simple forward finite-difference sensitivity analysis. Furthermore, we use the spatiotemporal sensitivity during gradient-based optimization of the spatiotemporal radiation protocol and present results for different parameters of the model.
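
The adjoint idea, one backward sweep yielding the gradient of the final state with respect to the dose at every time step, can be shown on a toy discrete model. The logistic growth term and dose coupling below are illustrative stand-ins, not the PIRT equations:

```python
import numpy as np

# Toy tumor model with a time-varying dose u_k (assumed for illustration):
# x_{k+1} = x_k + h*(r*x_k*(1 - x_k) - u_k*x_k), objective J = x_N (final size).
h, r, N = 0.1, 1.0, 50
u = np.full(N, 0.3)

def forward(u):
    x = np.empty(N + 1)
    x[0] = 0.2
    for k in range(N):
        x[k + 1] = x[k] + h * (r * x[k] * (1 - x[k]) - u[k] * x[k])
    return x

def adjoint_gradient(u):
    """dJ/du_k for J = x_N, via a single backward (adjoint) sweep."""
    x = forward(u)
    lam = 1.0                        # lambda_N = dJ/dx_N
    g = np.empty(N)
    for k in range(N - 1, -1, -1):
        g[k] = lam * (-h * x[k])     # dx_{k+1}/du_k = -h*x_k
        lam *= 1 + h * (r * (1 - 2 * x[k]) - u[k])  # dx_{k+1}/dx_k
    return g

g_adj = adjoint_gradient(u)
# Forward finite-difference check of one component, as in the paper's comparison.
eps = 1e-6
up = u.copy(); up[10] += eps
g_fd = (forward(up)[-1] - forward(u)[-1]) / eps
```

The adjoint sweep costs one extra forward-sized pass for the whole gradient, whereas the finite-difference route needs one perturbed simulation per dose variable.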

  6. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  7. Performance of the high-sensitivity troponin assay in diagnosing acute myocardial infarction: systematic review and meta-analysis

    PubMed Central

    Al-Saleh, Ayman; Alazzoni, Ashraf; Al Shalash, Saleh; Ye, Chenglin; Mbuagbaw, Lawrence; Thabane, Lehana; Jolly, Sanjit S.

    2014-01-01

    Background High-sensitivity cardiac troponin assays have been adopted by many clinical centres worldwide; however, clinicians are uncertain how to interpret the results. We sought to assess the utility of these assays in diagnosing acute myocardial infarction (MI). Methods We carried out a systematic review and meta-analysis of studies comparing high-sensitivity with conventional assays of cardiac troponin levels among adults with suspected acute MI in the emergency department. We searched MEDLINE, EMBASE and Cochrane databases up to April 2013 and used bivariable random-effects modelling to obtain summary parameters for diagnostic accuracy. Results We identified 9 studies that assessed the use of high-sensitivity troponin T assays (n = 9186 patients). The summary sensitivity of these tests in diagnosing acute MI at presentation to the emergency department was estimated to be 0.94 (95% confidence interval [CI] 0.89–0.97); for conventional tests, it was 0.72 (95% CI 0.63–0.79). The summary specificity was 0.73 (95% CI 0.64–0.81) for the high-sensitivity assay compared with 0.95 (95% CI 0.93–0.97) for the conventional assay. The differences in estimates of the summary sensitivity and specificity between the high-sensitivity and conventional assays were statistically significant (p < 0.01). The area under the curve was similar for both tests carried out 3–6 hours after presentation. Three studies assessed the use of high-sensitivity troponin I assays and showed similar results. Interpretation Used at presentation to the emergency department, the high-sensitivity cardiac troponin assay has improved sensitivity, but reduced specificity, compared with the conventional troponin assay. With repeated measurements over 6 hours, the area under the curve is similar for both tests, indicating that the major advantage of the high-sensitivity test is early diagnosis. PMID:25295240

  8. Sensitivity analysis of automatic flight control systems using singular value concepts

    NASA Technical Reports Server (NTRS)

    Herrera-Vaillard, A.; Paduano, J.; Downing, D.

    1985-01-01

    A sensitivity analysis is presented that can be used to judge the impact of vehicle dynamic model variations on the relative stability of multivariable continuous closed-loop control systems. The sensitivity analysis uses and extends the singular-value concept by developing expressions for the gradients of the singular value with respect to variations in the vehicle dynamic model and the controller design. Combined with a priori estimates of the accuracy of the model, the gradients are used to identify the elements in the vehicle dynamic model and controller that could severely impact the system's relative stability. The technique is demonstrated for a yaw/roll damper stability augmentation designed for a business jet.
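
The building block of such an analysis, the gradient of a singular value with respect to the matrix entries, has the closed form dσ_i/dA = u_i v_iᵀ when the singular values are distinct. A NumPy check against a finite difference (the random matrix and perturbed entry are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 3))          # stand-in for a system/controller matrix
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Gradient of the smallest singular value (the relative-stability margin)
# with respect to every entry of A: d sigma_i / dA = u_i v_i^T.
i = int(s.argmin())
grad = np.outer(U[:, i], Vt[i, :])

# Finite-difference check of one entry.
eps = 1e-6
Ap = A.copy(); Ap[2, 1] += eps
fd = (np.linalg.svd(Ap, compute_uv=False)[i] - s[i]) / eps
```

Large entries of `grad` identify the model elements whose variation would most erode the stability margin, which is exactly how the abstract uses the gradients.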

  9. Sensitivity of VIIRS Polarization Measurements

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene

    2010-01-01

    The design of an optical system typically involves a sensitivity analysis in which the various lens parameters, such as lens spacing and curvatures, to name two, are (slightly) varied to see what effect, if any, this has on the performance and to establish manufacturing tolerances. A similar analysis was performed for the VIIRS instrument's polarization measurements to see how real-world departures from perfectly linearly polarized light entering VIIRS affect the polarization measurement. The methodology and a few of the results of this polarization sensitivity analysis are presented and applied to the construction of a single polarizer which will cover the VIIRS VIS/NIR spectral range. Keywords: VIIRS, polarization, ray trace, polarizers, Bolder Vision, MOXTEK

  10. Computer program for analysis of imperfection sensitivity of ring stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1971-01-01

    A FORTRAN 4 digital computer program is presented for the initial postbuckling and imperfection sensitivity analysis of bifurcation buckling modes for ring-stiffened orthotropic multilayered shells of revolution. The boundary value problem for the second-order contribution to the buckled state was solved by the forward integration technique using the Runge-Kutta method. The effects of nonlinear prebuckling states and live pressure loadings are included.

  11. Sensitivity analysis for parametric generalized implicit quasi-variational-like inclusions involving P-η-accretive mappings

    NASA Astrophysics Data System (ADS)

    Kazmi, K. R.; Khan, F. A.

    2008-01-01

    In this paper, using the proximal-point mapping technique of P-η-accretive mappings and the property of the fixed-point set of set-valued contractive mappings, we study the behavior and sensitivity analysis of the solution set of a parametric generalized implicit quasi-variational-like inclusion involving a P-η-accretive mapping in a real uniformly smooth Banach space. Further, under suitable conditions, we discuss the Lipschitz continuity of the solution set with respect to the parameter. The technique and results presented in this paper can be viewed as an extension of the techniques and corresponding results given in [R.P. Agarwal, Y.-J. Cho, N.-J. Huang, Sensitivity analysis for strongly nonlinear quasi-variational inclusions, Appl. Math. Lett. 13 (2002) 19-24; S. Dafermos, Sensitivity analysis in variational inequalities, Math. Oper. Res. 13 (1988) 421-434; X.-P. Ding, Sensitivity analysis for generalized nonlinear implicit quasi-variational inclusions, Appl. Math. Lett. 17 (2) (2004) 225-235; X.-P. Ding, Parametric completely generalized mixed implicit quasi-variational inclusions involving h-maximal monotone mappings, J. Comput. Appl. Math. 182 (2) (2005) 252-269; X.-P. Ding, C.L. Luo, On parametric generalized quasi-variational inequalities, J. Optim. Theory Appl. 100 (1999) 195-205; Z. Liu, L. Debnath, S.M. Kang, J.S. Ume, Sensitivity analysis for parametric completely generalized nonlinear implicit quasi-variational inclusions, J. Math. Anal. Appl. 277 (1) (2003) 142-154; R.N. Mukherjee, H.L. Verma, Sensitivity analysis of generalized variational inequalities, J. Math. Anal. Appl. 167 (1992) 299-304; M.A. Noor, Sensitivity analysis framework for general quasi-variational inclusions, Comput. Math. Appl. 44 (2002) 1175-1181; M.A. Noor, Sensitivity analysis for quasi-variational inclusions, J. Math. Anal. Appl. 236 (1999) 290-299; J.Y. Park, J.U. Jeong, Parametric generalized mixed variational inequalities, Appl. Math. Lett. 17 (2004) 43-48].

  12. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  13. Space shuttle SRM plume expansion sensitivity analysis. [flow characteristics of exhaust gases from solid propellant rocket engines

    NASA Technical Reports Server (NTRS)

    Smith, S. D.; Tevepaugh, J. A.; Penny, M. M.

    1975-01-01

    The exhaust plumes of the space shuttle solid rocket motors can have a significant effect on the base pressure and base drag of the shuttle vehicle. A parametric analysis was conducted to assess the sensitivity of the initial plume expansion angle of analytical solid rocket motor flow fields to various analytical input parameters and operating conditions. The results of the analysis are presented and conclusions reached regarding the sensitivity of the initial plume expansion angle to each parameter investigated. Operating conditions parametrically varied were chamber pressure, nozzle inlet angle, nozzle throat radius of curvature ratio and propellant particle loading. Empirical particle parameters investigated were mean size, local drag coefficient and local heat transfer coefficient. Sensitivity of the initial plume expansion angle to gas thermochemistry model and local drag coefficient model assumptions were determined.

  14. Computational methods for efficient structural reliability and reliability sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Wu, Y.-T.

    1993-01-01

    This paper presents recent developments in efficient structural reliability analysis methods. The paper proposes an efficient, adaptive importance sampling (AIS) method that can be used to compute reliability and reliability sensitivities. The AIS approach uses a sampling density that is proportional to the joint PDF of the random variables. Starting from an initial approximate failure domain, sampling proceeds adaptively and incrementally with the goal of reaching a sampling domain that is slightly greater than the failure domain to minimize over-sampling in the safe region. Several reliability sensitivity coefficients are proposed that can be computed directly and easily from the above AIS-based failure points. These probability sensitivities can be used for identifying key random variables and for adjusting design to achieve reliability-based objectives. The proposed AIS methodology is demonstrated using a turbine blade reliability analysis problem.
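
The importance-sampling estimate of a failure probability at the heart of the method can be sketched with a fixed sampling density; the paper's AIS adapts this density incrementally, whereas the sketch below simply centers it on the failure boundary. The limit state and parameters are assumptions for illustration:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)

def std_normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

beta = 3.0                       # limit state g(x) = beta - x; failure when g < 0
p_exact = 1 - std_normal_cdf(beta)   # reference value for this toy problem

# Draw from a normal centered on the failure boundary instead of the origin,
# then reweight each sample by f(x)/q(x) so the estimate stays unbiased.
n = 100_000
x = rng.normal(beta, 1.0, size=n)
w = np.exp(-0.5 * x**2 + 0.5 * (x - beta)**2)   # standard-normal / shifted-normal
p_is = float(np.mean((x > beta) * w))
```

Because nearly every sample lands near the failure region, the estimator reaches a rare-event probability of about 1.3e-3 with far fewer samples than crude Monte Carlo would need.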

  15. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  16. Simulations of the HDO and H2O-18 atmospheric cycles using the NASA GISS general circulation model - Sensitivity experiments for present-day conditions

    NASA Technical Reports Server (NTRS)

    Jouzel, Jean; Koster, R. D.; Suozzo, R. J.; Russell, G. L.; White, J. W. C.

    1991-01-01

    Incorporating the full geochemical cycles of stable water isotopes (HDO and H2O-18) into an atmospheric general circulation model (GCM) allows an improved understanding of global delta-D and delta-O-18 distributions and might even allow an analysis of the GCM's hydrological cycle. A detailed sensitivity analysis using the NASA/Goddard Institute for Space Studies (GISS) model II GCM is presented that examines the nature of isotope modeling. The tests indicate that delta-D and delta-O-18 values in nonpolar regions are not strongly sensitive to details in the model precipitation parameterizations. This result, while implying that isotope modeling has limited potential use in the calibration of GCM convection schemes, also suggests that certain necessarily arbitrary aspects of these schemes are adequate for many isotope studies. Deuterium excess, a second-order variable, does show some sensitivity to precipitation parameterization and thus may be more useful for GCM calibration.

  17. The Sensitivity Analysis for the Flow Past Obstacles Problem with Respect to the Reynolds Number

    PubMed Central

    Ito, Kazufumi; Li, Zhilin; Qiao, Zhonghua

    2013-01-01

    In this paper, numerical sensitivity analysis with respect to the Reynolds number for the flow past obstacle problem is presented. To carry out such analysis, at each time step, we need to solve the incompressible Navier-Stokes equations on irregular domains twice: once for the primary variables and once for the sensitivity variables with homogeneous boundary conditions. The Navier-Stokes solver is the augmented immersed interface method for Navier-Stokes equations on irregular domains. One of the most important contributions of this paper is that our analysis can predict the critical Reynolds number at which the vortex shedding begins to develop in the wake of the obstacle. Some interesting experiments are shown to illustrate how the critical Reynolds number varies with different geometric settings. PMID:24910780
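
The pattern of advancing primary and sensitivity variables side by side can be shown on a scalar ODE u' = -u/Re, whose Reynolds-number sensitivity s = du/dRe obeys the companion equation s' = -s/Re + u/Re². This is a toy stand-in for the Navier-Stokes system; the step count and Re value are illustrative:

```python
import numpy as np

Re, T, n = 10.0, 1.0, 100_000
dt = T / n
u, s = 1.0, 0.0                  # primary variable and its sensitivity, s(0) = 0
for _ in range(n):
    du = -u / Re                 # primary equation
    ds = -s / Re + u / Re**2     # sensitivity equation: d/dRe of the right-hand side
    u += dt * du
    s += dt * ds

exact_u = np.exp(-T / Re)                  # analytic solution e^{-T/Re}
exact_s = (T / Re**2) * np.exp(-T / Re)    # analytic du/dRe
```

Both systems share the same time stepper and Jacobian structure, which is why the paper can reuse its augmented immersed interface solver for the sensitivity pass.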

  18. Sensitive sub-Doppler nonlinear spectroscopy for hyperfine-structure analysis using simple atomizers

    NASA Astrophysics Data System (ADS)

    Mickadeit, Fritz K.; Kemp, Helen; Schafer, Julia; Tong, William M.

    1998-05-01

    Laser wave-mixing spectroscopy is presented as a sub-Doppler method that offers not only high spectral resolution, but also excellent detection sensitivity. It offers spectral resolution suitable for hyperfine-structure analysis and isotope-ratio measurements. In a non-planar backward-scattering four-wave mixing optical configuration, two of the three input beams counter-propagate, so Doppler broadening is minimized and, hence, spectral resolution is enhanced. Since the signal is a coherent beam, optical collection is efficient and signal detection is convenient. This simple multi-photon nonlinear laser method offers unusually sensitive detection limits that are suitable for trace-concentration isotope analysis using a few different types of simple analytical atomizers. Reliable measurement of hyperfine structures allows effective determination of isotope ratios for chemical analysis.

  19. The Sensitivity Analysis for the Flow Past Obstacles Problem with Respect to the Reynolds Number.

    PubMed

    Ito, Kazufumi; Li, Zhilin; Qiao, Zhonghua

    2012-02-01

    In this paper, numerical sensitivity analysis with respect to the Reynolds number for the flow past obstacle problem is presented. To carry out such analysis, at each time step, we need to solve the incompressible Navier-Stokes equations on irregular domains twice: once for the primary variables and once for the sensitivity variables with homogeneous boundary conditions. The Navier-Stokes solver is the augmented immersed interface method for Navier-Stokes equations on irregular domains. One of the most important contributions of this paper is that our analysis can predict the critical Reynolds number at which the vortex shedding begins to develop in the wake of the obstacle. Some interesting experiments are shown to illustrate how the critical Reynolds number varies with different geometric settings.

  20. ANSYS-based birefringence property analysis of side-hole fiber induced by pressure and temperature

    NASA Astrophysics Data System (ADS)

    Zhou, Xinbang; Gong, Zhenfeng

    2018-03-01

    In this paper, we theoretically investigate the influences of pressure and temperature on the birefringence property of side-hole fibers with different shapes of holes using the finite element analysis method. A physical mechanism for the birefringence of the side-hole fiber is discussed in the presence of different external pressures and temperatures. The strain field distribution and birefringence values of circular-core, rectangular-core, and triangular-core side-hole fibers are presented. Our analysis shows that the triangular-core side-hole fiber has low temperature sensitivity, which weakens the cross-sensitivity of temperature and strain. Additionally, an optimized structure design of the side-hole fiber is presented which can be used for sensing applications.

  1. Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.

  2. Testing of stack-unit/aquifer sensitivity analysis using contaminant plume distribution in the subsurface of Savannah River Site, South Carolina, USA

    USGS Publications Warehouse

    Rine, J.M.; Shafer, J.M.; Covington, E.; Berg, R.C.

    2006-01-01

    Published information on the correlation and field-testing of the technique of stack-unit/aquifer sensitivity mapping with documented subsurface contaminant plumes is rare. The inherent characteristic of stack-unit mapping, which makes it a superior technique to other analyses that amalgamate data, is the ability to deconstruct the sensitivity analysis on a unit-by-unit basis. An aquifer sensitivity map, delineating the relative sensitivity of the Crouch Branch aquifer of the Administrative/Manufacturing Area (A/M) at the Savannah River Site (SRS) in South Carolina, USA, incorporates six hydrostratigraphic units, surface soil units, and relevant hydrologic data. When this sensitivity map is compared with the distribution of the contaminant tetrachloroethylene (PCE), PCE is present within the Crouch Branch aquifer within an area classified as highly sensitive, even though the PCE was primarily released on the ground surface within areas classified with low aquifer sensitivity. This phenomenon is explained through analysis of the aquifer sensitivity map, the groundwater potentiometric surface maps, and the plume distributions within the area on a unit-by-unit basis. The results of this correlation show how the paths of the PCE plume are influenced by both the geology and the groundwater flow. © Springer-Verlag 2006.

  3. A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia

    NASA Astrophysics Data System (ADS)

    Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM

    2017-07-01

    Conventionally, electricity in hospital buildings is supplied by the utility grid, which uses a fuel mix including coal and gas. Due to advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes economic analysis based on the Levelized Cost of Energy (LCOE) and Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which produces optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are discussed, and the impact of the grid-connected PV system is evaluated. In addition, the benefit of the Net Metering (NeM) mechanism is discussed.
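
The Levelized Cost of Energy and its sensitivity to the interest rate can be sketched directly from the discounted-cash-flow definition. All capital, O&M, yield, and lifetime figures below are hypothetical, not the paper's HOMER inputs:

```python
def lcoe(capex, annual_opex, annual_kwh, rate, years):
    """Levelized cost of energy: discounted lifetime cost / discounted lifetime energy."""
    disc = sum((1 + rate) ** -t for t in range(1, years + 1))
    return (capex + annual_opex * disc) / (annual_kwh * disc)

# Sensitivity of LCOE to the annual interest (discount) rate, holding
# the illustrative PV system fixed.
base = lcoe(capex=200_000, annual_opex=3_000, annual_kwh=160_000,
            rate=0.05, years=21)
high = lcoe(capex=200_000, annual_opex=3_000, annual_kwh=160_000,
            rate=0.08, years=21)
```

A higher discount rate raises the LCOE of a PV system because the capital cost is paid upfront while the energy it buys is discounted away, which is the kind of trade-off the paper's interest-rate sensitivity analysis explores.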

  4. [Analysis and experimental verification of sensitivity and SNR of laser warning receiver].

    PubMed

    Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue

    2009-01-01

    In order to counter the increasingly serious threat from hostile lasers in modern warfare, it is urgent to conduct research on laser warning technology and systems, and the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on signal statistical detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. Firstly, the probabilities of the laser signal and receiver noise were analyzed. Secondly, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing a detection probability factor and a false alarm rate factor; then, the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed, and the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in performance analysis of LWRs.
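    The threshold-detection step described above can be sketched for the simplest case of a known signal in Gaussian noise: fix the false alarm rate, invert the Gaussian tail function to obtain the threshold, and evaluate the detection probability at a given SNR. This is a generic Neyman-Pearson sketch, not the paper's coherent-receiver derivation.

```python
import math

def q(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(p, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is strictly decreasing on this interval)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def detection_probability(snr_db, pfa):
    """Pd for a known signal of amplitude A in noise of std sigma, SNR = A/sigma."""
    a_over_sigma = 10.0 ** (snr_db / 20.0)  # amplitude SNR from dB
    tau = q_inv(pfa)                         # normalized threshold set by Pfa
    return q(tau - a_over_sigma)

if __name__ == "__main__":
    for snr in (6.0, 10.0, 13.0):
        print(f"SNR = {snr} dB, Pfa = 1e-6: Pd = {detection_probability(snr, 1e-6):.3f}")
```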

  5. SCIENTIFIC UNCERTAINTIES IN ATMOSPHERIC MERCURY MODELS II: SENSITIVITY ANALYSIS IN THE CONUS DOMAIN

    EPA Science Inventory

    In this study, we present the response of model results to different scientific treatments in an effort to quantify the uncertainties caused by the incomplete understanding of mercury science and by model assumptions in atmospheric mercury models. Two sets of sensitivity simulati...

  6. Extending 'Deep Blue' aerosol retrieval coverage to cases of absorbing aerosols above clouds: sensitivity analysis and first case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey

    Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar

  7. A Sensitivity Analysis of Circular Error Probable Approximation Techniques

    DTIC Science & Technology

    1992-03-01

    SENSITIVITY ANALYSIS OF CIRCULAR ERROR PROBABLE APPROXIMATION TECHNIQUES THESIS Presented to the Faculty of the School of Engineering of the Air Force...programming skills. Major Paul Auclair patiently advised me in this endeavor, and Major Andy Howell added numerous insightful contributions. I thank my...techniques. The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. Some

  8. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.

  9. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  10. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  11. Sensitivity analysis for dose deposition in radiotherapy via a Fokker–Planck model

    DOE PAGES

    Barnard, Richard C.; Frank, Martin; Krycki, Kai

    2016-02-09

    In this paper, we study the sensitivities of electron dose calculations with respect to stopping power and transport coefficients. We focus on the application to radiotherapy simulations. We use a Fokker–Planck approximation to the Boltzmann transport equation. Equations for the sensitivities are derived by the adjoint method. The Fokker–Planck equation and its adjoint are solved numerically in slab geometry using the spherical harmonics expansion (P_N) and a Harten-Lax-van Leer finite volume method. Our method is verified by comparison to finite difference approximations of the sensitivities. Finally, we present numerical results of the sensitivities for the normalized average dose deposition depth with respect to the stopping power and the transport coefficients, demonstrating the increase in relative sensitivities as beam energy decreases. In conclusion, this in turn gives estimates on the uncertainty in the normalized average deposition depth, which we present.
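    The finite-difference verification mentioned above can be mirrored on a toy model: compare an analytic parameter derivative against a central difference. The exponential depth-dose stand-in below is purely illustrative and is not the paper's Fokker–Planck system.

```python
import math

def dose(x, s):
    """Toy depth-dose curve d(x; s) = exp(-s * x), s playing a stopping-power role."""
    return math.exp(-s * x)

def dose_sensitivity(x, s):
    """Analytic derivative of the toy dose with respect to s."""
    return -x * math.exp(-s * x)

def fd_sensitivity(x, s, h=1e-6):
    """Central finite-difference approximation of the same derivative."""
    return (dose(x, s + h) - dose(x, s - h)) / (2.0 * h)

if __name__ == "__main__":
    x, s = 2.0, 0.5
    print("analytic:          ", dose_sensitivity(x, s))
    print("finite difference: ", fd_sensitivity(x, s))
```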

  12. Validation of the colour difference plot scoring system analysis of the 103 hexagon multifocal electroretinogram in the evaluation of hydroxychloroquine retinal toxicity.

    PubMed

    Graves, Gabrielle S; Adam, Murtaza K; Stepien, Kimberly E; Han, Dennis P

    2014-08-01

    To evaluate sensitivity, specificity and reproducibility of colour difference plot analysis (CDPA) of the 103-hexagon multifocal electroretinogram (mfERG) in detecting established hydroxychloroquine (HCQ) retinal toxicity. Twenty-three patients taking HCQ were divided into those with and without retinal toxicity and were compared with a control group without retinal disease and not taking HCQ. CDPA with two masked examiners was performed using age-corrected mfERG responses in the central ring (Rc; 0-5.5 degrees from fixation) and paracentral ring (Rp; 5.5-11 degrees from fixation). An abnormal ring was defined as containing any hexagons with a difference of two or more standard deviations from normal (colour blue or black). Categorical analysis (ring involvement or not) showed Rc had 83% sensitivity and 93% specificity. Rp had 89% sensitivity and 82% specificity. Requiring abnormal hexagons in both Rc and Rp yielded sensitivity and specificity of 83% and 95%, respectively. If required in only one ring, they were 89% and 80%, respectively. In this population, there was complete agreement in identifying toxicity when comparing CDPA using Rp with ring ratio analysis using R5/R4 P1 ring responses (89% sensitivity and 95% specificity). Continuous analysis of CDPA with receiver operating characteristic analysis showed optimized detection (83% sensitivity and 96% specificity) when ≥4 abnormal hexagons were present anywhere within the Rp ring outline. Intergrader agreement and reproducibility were good. Colour difference plot analysis had sensitivity and specificity that approached that of ring ratio analysis of R5/R4 P1 responses. Ease of implementation and reproducibility are notable advantages of CDPA. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
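    The threshold optimization in this abstract (detection optimized when ≥4 abnormal hexagons are present) is, in effect, a sweep over cut-offs scored by sensitivity and specificity. A minimal sketch with invented hexagon counts, scoring each cut-off by Youden's J:

```python
def sens_spec(diseased, healthy, threshold):
    """Sensitivity/specificity for the rule 'abnormal if count >= threshold'."""
    tp = sum(1 for n in diseased if n >= threshold)
    tn = sum(1 for n in healthy if n < threshold)
    return tp / len(diseased), tn / len(healthy)

if __name__ == "__main__":
    diseased = [3, 5, 6, 8, 9, 4, 7]    # abnormal-hexagon counts, toxic eyes (invented)
    healthy = [0, 1, 0, 2, 3, 1, 0, 4]  # counts in unaffected eyes (invented)
    # pick the cut-off maximizing Youden's J = sensitivity + specificity - 1
    best = max(range(10), key=lambda t: sum(sens_spec(diseased, healthy, t)) - 1)
    se, sp = sens_spec(diseased, healthy, best)
    print(f"best rule: >= {best} abnormal hexagons (sens {se:.2f}, spec {sp:.2f})")
```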

  13. Navigation Design and Analysis for the Orion Cislunar Exploration Missions

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Holt, Greg; Gay, Robert; Zanetti, Renato

    2014-01-01

    This paper details the design and analysis of the cislunar optical navigation system being proposed for the Orion Earth-Moon (EM) missions. In particular, it presents the mathematics of the navigation filter. It also presents the sensitivity analysis that has been performed to understand the performance of the proposed system, with particular attention paid to entry flight path angle constraints and the delta-V performance.

  14. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
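    Local sensitivity analysis of an accumulation model can be sketched with a one-compartment stand-in dC/dt = k_in - k_out·C (import minus export). The model and parameter values are illustrative, far simpler than the paper's multi-phase dynamics.

```python
def simulate(k_in, k_out, t_end=10.0, dt=0.01):
    """Forward-Euler solution of dC/dt = k_in - k_out * C with C(0) = 0."""
    c = 0.0
    for _ in range(int(round(t_end / dt))):
        c += dt * (k_in - k_out * c)
    return c

def local_sensitivity(k_in, k_out, rel=1e-4):
    """Normalized sensitivities (dC/dp) * (p / C) by one-at-a-time perturbation."""
    base = simulate(k_in, k_out)
    s_in = (simulate(k_in * (1 + rel), k_out) - base) / (k_in * rel) * (k_in / base)
    s_out = (simulate(k_in, k_out * (1 + rel)) - base) / (k_out * rel) * (k_out / base)
    return {"k_in": s_in, "k_out": s_out}

if __name__ == "__main__":
    # near steady state, C scales with k_in and inversely with k_out,
    # so the normalized sensitivities approach +1 and -1
    print(local_sensitivity(1.0, 0.5))
```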

  15. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  16. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  17. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. 
Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
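    A probabilistic sensitivity analysis of the kind audited here draws each uncertain parameter from a distribution and recomputes the cost-effectiveness result per draw. The sketch below uses invented distributions and an assumed willingness-to-pay threshold, and, like most of the audited analyses, it ignores parameter correlations.

```python
import random
import statistics

def icer_draw(rng):
    """One PSA draw: incremental cost / incremental QALYs (both hypothetical)."""
    delta_cost = rng.gauss(5_000.0, 500.0)    # incremental cost, ~N(5000, 500)
    delta_qaly = rng.betavariate(20.0, 80.0)  # incremental QALYs, mean ~0.2
    return delta_cost / delta_qaly

if __name__ == "__main__":
    rng = random.Random(1)
    draws = sorted(icer_draw(rng) for _ in range(10_000))
    threshold = 30_000.0  # assumed willingness-to-pay per QALY
    print("median ICER:", round(statistics.median(draws)))
    print("2.5%-97.5% interval:", round(draws[249]), "to", round(draws[9749]))
    print("P(cost-effective):", sum(d < threshold for d in draws) / len(draws))
```

    The spread of the resulting ICER distribution is exactly the "parameter uncertainty around the incremental cost-effectiveness ratio" that committee members valued.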

  18. Single-molecule detection: applications to ultrasensitive biochemical analysis

    NASA Astrophysics Data System (ADS)

    Castro, Alonso; Shera, E. Brooks

    1995-06-01

    Recent developments in laser-based detection of fluorescent molecules have made possible the implementation of very sensitive techniques for biochemical analysis. We present and discuss our experiments on the applications of our recently developed technique of single-molecule detection to the analysis of molecules of biological interest. These newly developed methods are capable of detecting and identifying biomolecules at the single-molecule level of sensitivity. In one case, identification is based on measuring fluorescence brightness from single molecules. In another, molecules are classified by determining their electrophoretic velocities.

  19. Sensitivity of Forecast Skill to Different Objective Analysis Schemes

    NASA Technical Reports Server (NTRS)

    Baker, W. E.

    1979-01-01

    Numerical weather forecasts are characterized by rapidly declining skill in the first 48 to 72 h. Recent estimates of the sources of forecast error indicate that the inaccurate specification of the initial conditions contributes substantially to this error. The sensitivity of the forecast skill to the initial conditions is examined by comparing a set of real-data experiments whose initial data were obtained with two different analysis schemes. Results are presented to emphasize the importance of the objective analysis techniques used in the assimilation of observational data.

  20. Accuracy of computed tomographic features in differentiating intestinal tuberculosis from Crohn's disease: a systematic review with meta-analysis.

    PubMed

    Kedia, Saurabh; Sharma, Raju; Sreenivas, Vishnubhatla; Madhusudhan, Kumble Seetharama; Sharma, Vishal; Bopanna, Sawan; Pratap Mouli, Venigalla; Dhingra, Rajan; Yadav, Dawesh Prakash; Makharia, Govind; Ahuja, Vineet

    2017-04-01

    Abdominal computed tomography (CT) can noninvasively image the entire gastrointestinal tract and assess extraintestinal features that are important in differentiating Crohn's disease (CD) and intestinal tuberculosis (ITB). The present meta-analysis pooled the results of all studies on the role of abdominal CT in differentiating between CD and ITB. We searched PubMed and Embase for all publications in English that analyzed the features differentiating between CD and ITB on abdominal CT. The features included comb sign, necrotic lymph nodes, asymmetric bowel wall thickening, skip lesions, fibrofatty proliferation, mural stratification, ileocaecal area, long segment, and left colonic involvements. Sensitivity, specificity, positive and negative likelihood ratios, and diagnostic odds ratio (DOR) were calculated for all the features. A symmetric receiver operating characteristic curve was plotted for features present in >3 studies. Heterogeneity and publication bias were assessed, and sensitivity analysis was performed by excluding studies that compared features on conventional abdominal CT instead of CT enterography (CTE). We included 6 studies (4 CTE, 1 conventional abdominal CT, and 1 CTE+conventional abdominal CT) involving 417 and 195 patients with CD and ITB, respectively. Necrotic lymph nodes had the highest diagnostic accuracy (sensitivity, 23%; specificity, 100%; DOR, 30.2) for ITB diagnosis, and comb sign (sensitivity, 82%; specificity, 81%; DOR, 21.5) followed by skip lesions (sensitivity, 86%; specificity, 74%; DOR, 16.5) had the highest diagnostic accuracy for CD diagnosis. On sensitivity analysis, the diagnostic accuracy of the features other than asymmetric bowel wall thickening remained similar. Necrotic lymph nodes and comb sign on abdominal CT had the best diagnostic accuracy in differentiating CD and ITB.
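    The accuracy measures in this abstract are related by standard formulas: LR+ = sens/(1-spec), LR- = (1-sens)/spec, and DOR = LR+/LR-. The sketch below applies them to a single sensitivity/specificity pair; note that the DORs reported in the paper are pooled meta-analytic estimates, not this direct arithmetic.

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sens / (1.0 - spec)
    lr_neg = (1.0 - sens) / spec
    return lr_pos, lr_neg

def diagnostic_odds_ratio(sens, spec):
    """DOR = LR+ / LR-, equivalently (sens * spec) / ((1-sens) * (1-spec))."""
    lr_pos, lr_neg = likelihood_ratios(sens, spec)
    return lr_pos / lr_neg

if __name__ == "__main__":
    # e.g. a single study reporting comb sign at sensitivity 82%, specificity 81%
    print("LR+ / LR-:", likelihood_ratios(0.82, 0.81))
    print("DOR:", round(diagnostic_odds_ratio(0.82, 0.81), 1))
```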

  1. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    PubMed

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of the complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. Within the calibration of the model, sensitivity analysis of its parameters and of the fractions of carbonaceous substrate was performed. In the steady-state and dynamic calibrations, good agreement between the measured and simulated values of the output variables was achieved. Sensitivity analysis based on calculations of the normalized sensitivity coefficient (S(i,j)) revealed that 17 (steady-state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive. Most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established on the basis of the mean square sensitivity measure (δ(msqr)j), indicate that the parameter sensitivities agree irrespective of whether the steady-state or the dynamic calibration was performed.
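    The two sensitivity measures named above can be sketched generically: normalized coefficients S(i,j) = (∂y_i/∂p_j)(p_j/y_i) estimated by finite differences, then a root-mean-square aggregation over outputs to rank parameters. The toy model and parameter names below are invented, not BioWin's.

```python
import math

def model(params):
    """Toy two-output model; a stand-in, not the activated sludge model."""
    mu, ks, b = params["mu"], params["ks"], params["b"]
    return [mu / (ks + 1.0) - b, mu * math.exp(-b) + ks]

def normalized_sensitivities(params, rel=1e-6):
    """S[j] = list over outputs i of (dy_i/dp_j) * (p_j / y_i), by forward differences."""
    base = model(params)
    s = {}
    for name, p in params.items():
        pert = dict(params)
        pert[name] = p * (1.0 + rel)
        y = model(pert)
        s[name] = [(yi - bi) / (p * rel) * (p / bi) for yi, bi in zip(y, base)]
    return s

def msqr_ranking(s):
    """Rank parameters by root-mean-square of their normalized coefficients."""
    measure = {k: math.sqrt(sum(v * v for v in row) / len(row)) for k, row in s.items()}
    return sorted(measure, key=measure.get, reverse=True)

if __name__ == "__main__":
    print(msqr_ranking(normalized_sensitivities({"mu": 4.0, "ks": 0.5, "b": 0.2})))
```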

  2. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
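    First-order (delta-method) propagation makes the role of input correlations explicit: Var(y) ≈ Σ_i Σ_j (∂f/∂x_i)(∂f/∂x_j) Cov(x_i, x_j), which reduces to the familiar independent-inputs formula when the covariance matrix is diagonal. A sketch on a toy model (the numbers are illustrative, and this is generic propagation, not the paper's analytic method):

```python
def gradient(f, x, h=1e-6):
    """Central finite-difference gradient of scalar f at point x (a list)."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def output_variance(f, x, cov):
    """First-order Var(y) = g^T Cov g for y = f(x)."""
    g = gradient(f, x)
    n = len(x)
    return sum(g[i] * g[j] * cov[i][j] for i in range(n) for j in range(n))

if __name__ == "__main__":
    f = lambda v: v[0] * v[1]              # toy model y = x1 * x2
    x = [2.0, 3.0]
    indep = [[0.04, 0.00], [0.00, 0.09]]   # sigma1 = 0.2, sigma2 = 0.3, rho = 0
    corr = [[0.04, 0.03], [0.03, 0.09]]    # same sigmas, rho = 0.5
    print("independent inputs:", output_variance(f, x, indep))
    print("correlated inputs: ", output_variance(f, x, corr))
```

    The difference between the two results is exactly the cross-covariance term; whether it is large relative to the diagonal contribution is what decides if correlations "should be considered in practice".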

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

    This study evaluates the costs and benefits associated with the use of a plug-in hybrid electric bus and determines its cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep analysis was performed over a number of different battery sizes, charging powers, and charging stations. The net present value was calculated for each vehicle design and provided the basis for the design evaluation. In all cases, given present-day economic assumptions, the conventional bus achieved the lowest net present value, while the optimal plug-in hybrid electric bus scenario reached lower lifetime costs than the hybrid electric bus. The study also performed parameter sensitivity analysis under low market potential assumptions and high market potential assumptions. The net present value of the plug-in hybrid electric bus is close to that of the conventional bus.
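    The net-present-value comparison can be sketched as purchase price plus discounted annual operating cost over the service life. All prices below are invented placeholders, chosen only to reproduce the qualitative ordering reported (conventional lowest, plug-in below hybrid), not the study's inputs.

```python
def lifetime_npv(purchase, annual_cost, rate, years):
    """Purchase price plus discounted end-of-year operating costs."""
    return purchase + sum(annual_cost / (1.0 + rate) ** t for t in range(1, years + 1))

# Hypothetical designs: (purchase price, annual operating cost)
designs = {
    "conventional": (450_000.0, 60_000.0),
    "hybrid": (600_000.0, 48_000.0),
    "plug-in hybrid": (700_000.0, 36_000.0),
}

if __name__ == "__main__":
    for name, (price, opex) in designs.items():
        print(name, round(lifetime_npv(price, opex, 0.05, 12)))
```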

  4. Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model

    EPA Science Inventory

    This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...

  5. The association between paternal sensitivity and infant-father attachment security: a meta-analysis of three decades of research.

    PubMed

    Lucassen, Nicole; Tharner, Anne; Van Ijzendoorn, Marinus H; Bakermans-Kranenburg, Marian J; Volling, Brenda L; Verhulst, Frank C; Lambregtse-Van den Berg, Mijke P; Tiemeier, Henning

    2011-12-01

    For almost three decades, the association between paternal sensitivity and infant-father attachment security has been studied. The first wave of studies on the correlates of infant-father attachment showed a weak association between paternal sensitivity and infant-father attachment security (r = .13, p < .001, k = 8, N = 546). In the current paper, a meta-analysis of the association between paternal sensitivity and infant-father attachment based on all studies currently available is presented, and the change over time of the association between paternal sensitivity and infant-father attachment is investigated. Studies using an observational measure of paternal interactive behavior with the infant, and the Strange Situation Procedure to observe the attachment relationship were included. Paternal sensitivity is differentiated from paternal sensitivity combined with stimulation in the interaction with the infant. Higher levels of paternal sensitivity were associated with more infant-father attachment security (r = .12, p < .001, k = 16, N = 1,355). Fathers' sensitive play combined with stimulation was not more strongly associated with attachment security than sensitive interactions without stimulation of play. Despite possible changes in paternal role patterns, we did not find stronger associations between paternal sensitivity and infant attachment in more recent years.
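    Pooled correlations such as r = .12 (k = 16, N = 1,355) are typically obtained by Fisher z-transforming each study's r, averaging with weights n - 3, and back-transforming. A sketch with fabricated study data:

```python
import math

def pooled_r(studies):
    """studies: list of (r, n) pairs; returns the weighted pooled correlation."""
    num = sum((n - 3) * math.atanh(r) for r, n in studies)  # Fisher z = atanh(r)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

if __name__ == "__main__":
    # fabricated (r, n) pairs, not the meta-analysis's actual studies
    studies = [(0.10, 60), (0.18, 120), (0.05, 45), (0.14, 200)]
    print(round(pooled_r(studies), 3))
```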

  6. Spacecraft design sensitivity for a disaster warning satellite system

    NASA Technical Reports Server (NTRS)

    Maloy, J. E.; Provencher, C. E.; Leroy, B. E.; Braley, R. C.; Shumaker, H. A.

    1977-01-01

    A disaster warning satellite (DWS) is described for warning the general public of impending natural catastrophes. The concept is responsive to NOAA requirements and maximizes the use of ATS-6 technology. Upon completion of concept development, the study was extended to establishing the sensitivity of the DWSS spacecraft power, weight, and cost to variations in both warning and conventional communications functions. The results of this sensitivity analysis are presented.

  7. [Quantitative surface analysis of Pt-Co, Cu-Au and Cu-Ag alloy films by XPS and AES].

    PubMed

    Li, Lian-Zhong; Zhuo, Shang-Jun; Shen, Ru-Xiang; Qian, Rong; Gao, Jie

    2013-11-01

    In order to improve the quantitative analysis accuracy of AES, we combined XPS with AES and studied a method to reduce the error of AES quantitative analysis. We selected Pt-Co, Cu-Au and Cu-Ag binary alloy thin films as samples and used XPS to correct the AES quantitative analysis results, adjusting the Auger sensitivity factors so that the results of the two techniques agreed more closely. We then verified the accuracy of AES quantitative analysis with the revised sensitivity factors on other samples with different composition ratios, and the results showed that the corrected relative sensitivity factors can reduce the error in AES quantitative analysis to less than 10%. Peak definition is difficult in integral-spectrum AES analysis, since choosing the starting and ending points when determining the characteristic Auger peak intensity area carries great uncertainty. To make the analysis easier, we also processed the data in differential-spectrum form, performed quantitative analysis on the basis of peak-to-peak height instead of peak area, corrected the relative sensitivity factors, and verified the accuracy on the other samples with different composition ratios. The result showed that the analytical error in AES quantitative analysis was reduced to less than 9%. This demonstrates that the accuracy of AES quantitative analysis can be greatly improved by combining XPS with AES to correct the Auger sensitivity factors, since matrix effects are taken into account. Good consistency was observed, proving the feasibility of this method.
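    The relative-sensitivity-factor quantification underlying both XPS and AES reduces to C_i = (I_i/S_i) / Σ_j(I_j/S_j), where I is the peak intensity (area, or peak-to-peak height in the differential spectrum) and S the element's sensitivity factor. The intensities and factors below are illustrative, not the paper's measurements.

```python
def atomic_fractions(intensities, factors):
    """C_i = (I_i / S_i) / sum_j (I_j / S_j); dicts keyed by element symbol."""
    weighted = {el: intensities[el] / factors[el] for el in intensities}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

if __name__ == "__main__":
    # Hypothetical Cu-Au film: raw peak intensities and assumed sensitivity factors
    comp = atomic_fractions({"Cu": 1200.0, "Au": 800.0}, {"Cu": 0.6, "Au": 0.5})
    print({el: round(f, 3) for el, f in comp.items()})
```

    Correcting a sensitivity factor (here, from XPS cross-calibration) simply rescales one element's weighted intensity before normalization, which is why the correction propagates directly into the computed composition.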

  8. LASER BIOLOGY AND MEDICINE: Application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air

    NASA Astrophysics Data System (ADS)

    Stepanov, E. V.; Milyaev, Varerii A.

    2002-11-01

    The application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air in biomedical diagnostics is discussed. The principle of operation and the design of a laser analyser for studying the composition of exhaled air are described. The results of detection of gaseous biomarkers in exhaled air, including clinical studies, which demonstrate the diagnostic possibilities of the method, are presented.

  9. Perspective: Optical measurement of feature dimensions and shapes by scatterometry

    NASA Astrophysics Data System (ADS)

    Diebold, Alain C.; Antonelli, Andy; Keller, Nick

    2018-05-01

    The use of optical scattering to measure feature shape and dimensions, scatterometry, is now routine during semiconductor manufacturing. Scatterometry iteratively improves an optical model structure using simulations that are compared to experimental data from an ellipsometer. These simulations are done using the rigorous coupled wave analysis for solving Maxwell's equations. In this article, we describe Mueller matrix spectroscopic ellipsometry-based scatterometry. Next, the rigorous coupled wave analysis for Maxwell's equations is presented. Following this, several example measurements are described as they apply to specific process steps in the fabrication of gate-all-around (GAA) transistor structures. First, simulations of measurement sensitivity for the inner spacer etch back step of horizontal GAA transistor processing are described. Next, the simulated metrology sensitivity for the sacrificial (dummy) amorphous silicon etch back step of vertical GAA transistor processing is discussed. Finally, we present the application of plasmonically active test structures for improving the sensitivity of the measurement of metal linewidths.

  10. A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils

    PubMed Central

    Xie, Xiaodong; Li, Jie; Sun, Li-Peng; Shen, Xiang; Jin, Long; Guan, Bai-ou

    2014-01-01

    We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A2 for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity or/and a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications. PMID:24824372

  11. A high-sensitivity current sensor utilizing CrNi wire and microfiber coils.

    PubMed

    Xie, Xiaodong; Li, Jie; Sun, Li-Peng; Shen, Xiang; Jin, Long; Guan, Bai-ou

    2014-05-12

    We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. The detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm, two orders of magnitude higher than counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.

  12. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    PubMed Central

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate, and these physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation. All of these frequencies fall into a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor for acceleration is 12.35 Hz/g, the scale factor for angular velocity is 0.404 nm/deg/s, and the sensitivity to air flow is 1.075 Hz/(m/s)², which verifies the multifunction sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to air flow rate and acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to air flow, and an II-shaped hair post can increase the sensitivity to acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the influence of temperature on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved feasible by comprehensive simulation and analysis. PMID:27399716

  13. A retrospective analysis of preoperative staging modalities for oral squamous cell carcinoma.

    PubMed

    Kähling, Ch; Langguth, T; Roller, F; Kroll, T; Krombach, G; Knitschke, M; Streckbein, Ph; Howaldt, H P; Wilbrand, J-F

    2016-12-01

    An accurate preoperative assessment of cervical lymph node status is a prerequisite for individually tailored cancer therapies in patients with oral squamous cell carcinoma. The detection of malignant spread and its treatment crucially influence the prognosis. The aim of the present study was to analyze the different staging modalities used among patients with a diagnosis of primary oral squamous cell carcinoma between 2008 and 2015. An analysis of preoperative staging findings, collected by clinical palpation, ultrasound, and computed tomography (CT), was performed, and the results were compared with the final histopathological findings of the neck dissection specimens. A statistical analysis using McNemar's test was performed. The sensitivity of CT for the detection of malignant cervical tumor spread was 74.5%; ultrasound achieved a sensitivity of 60.8%. Both CT and ultrasound demonstrated significantly higher sensitivity than clinical palpation, which had a sensitivity of 37.1%. No significant difference was observed between CT and ultrasound. A combination of different staging modalities increased the sensitivity significantly compared with ultrasound staging alone, but no significant difference in sensitivity was found between the combined use of different staging modalities and CT staging alone. The highest sensitivity, 80.0%, was obtained by a combination of all three staging modalities: clinical palpation, ultrasound and CT. The present study indicates that CT has an essential role in the preoperative staging of patients with oral squamous cell carcinoma. Its use not only significantly increases the sensitivity of cervical lymph node metastasis detection but also offers a preoperative assessment of local tumor spread and resection borders. An additional non-invasive cervical lymph node examination increases the sensitivity of the tumor staging process and reduces the risk of occult metastasis.
Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  14. Mini-Column Ion-Exchange Separation and Atomic Absorption Quantitation of Nickel, Cobalt, and Iron: An Undergraduate Quantitative Analysis Experiment.

    ERIC Educational Resources Information Center

    Anderson, James L.; And Others

    1980-01-01

    Presents an undergraduate quantitative analysis experiment, describing an atomic absorption quantitation scheme that is fast, sensitive, and comparatively simple relative to titration experiments. (CS)

  15. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, based on a computational fluid dynamics (CFD) sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with an optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e., gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute-force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data, and it is shown that the method is more efficient than traditional methods.
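The approximate (predicted) flow analysis used during the one-dimensional search can be sketched as a first-order Taylor expansion about the last full CFD solution. The state vector, sensitivity matrix, and design values below are illustrative stand-ins, not data from the paper:

```python
import numpy as np

def predicted_flow(q0, dq_db, b0, b):
    """First-order Taylor prediction of the flow state q at design b,
    expanded about a converged analysis at design b0."""
    return q0 + dq_db @ (b - b0)

# Hypothetical 3-variable flow state and 2 design variables.
q0 = np.array([1.0, 0.5, 2.0])           # converged flow solution at b0
dq_db = np.array([[0.1, -0.2],
                  [0.0,  0.3],
                  [0.5,  0.1]])          # sensitivity of flow w.r.t. design
b0 = np.array([1.0, 1.0])
b  = np.array([1.05, 0.98])              # trial point in the line search

q_approx = predicted_flow(q0, dq_db, b0, b)  # cheap estimate, no CFD re-solve
```

The surrogate is refreshed by a full CFD solve whenever the step moves far enough from the expansion point that the linear prediction degrades.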

  16. Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC

    NASA Astrophysics Data System (ADS)

    Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.

    2015-08-01

    This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when a particle strikes a critical area, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effects influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.

  17. Nuclear morphology for the detection of alterations in bronchial cells from lung cancer: an attempt to improve sensitivity and specificity.

    PubMed

    Fafin-Lefevre, Mélanie; Morlais, Fabrice; Guittet, Lydia; Clin, Bénédicte; Launoy, Guy; Galateau-Sallé, Françoise; Plancoulaine, Benoît; Herlin, Paulette; Letourneux, Marc

    2011-08-01

    The aims of this study were to identify which morphologic or densitometric parameters are modified in cell nuclei from bronchopulmonary cancer, based on 18 parameters involving shape, intensity, chromatin, texture, and DNA content, and to develop a bronchopulmonary cancer screening method relying on analysis of sputum sample cell nuclei. A total of 25 sputum samples from controls and 22 bronchial aspiration samples from occupationally exposed patients presenting with bronchopulmonary cancer were used. After Feulgen staining, 18 morphologic and DNA content parameters were measured on cell nuclei via image cytometry. A method was developed for analyzing distribution quantiles, rather than simply interpreting mean values, to characterize morphologic modifications in cell nuclei. Distribution analysis of the parameters identified 13 of the 18 parameters that demonstrated significant differences between controls and cancer cases. These parameters, used alone, enabled us to distinguish the two population types with both sensitivity and specificity > 70%; three parameters offered 100% sensitivity and specificity. When mean values offered high sensitivity and specificity, comparable or higher values were observed for at least one of the corresponding quantiles. Analysis of modifications in morphologic parameters via distribution analysis proved promising for screening for bronchopulmonary cancer from sputum.

  18. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.
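The variogram analogy underlying VARS can be illustrated in one dimension: the quantity γ(h) = ½·E[(y(x+h) − y(x))²] characterizes response variability at perturbation scale h. A minimal sketch on a synthetic two-scale function (an illustration of the variogram concept, not the authors' implementation):

```python
import numpy as np

def variogram(f, x, h):
    """Empirical variogram gamma(h) = 0.5 * mean[(f(x+h) - f(x))^2]
    of a scalar response f sampled at points x, for lag h."""
    return 0.5 * np.mean((f(x + h) - f(x)) ** 2)

# Synthetic response surface with structure at two different scales.
f = lambda x: np.sin(2 * np.pi * x) + 0.1 * np.sin(40 * np.pi * x)

x = np.random.default_rng(0).uniform(0.0, 1.0, 10_000)
small_scale = variogram(f, x, h=0.01)   # variability over small perturbations
large_scale = variogram(f, x, h=0.25)   # variability over large perturbations
# gamma grows with lag here, reflecting stronger large-scale variability
```

Scanning γ over a range of lags yields the scale-dependent sensitivity profile that, per the abstract, subsumes derivative-based (small-h) and variance-based (large-h) views.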

  19. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Gaspar, James L.; Lazor, Daniel R.; Parks, Russell A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of non-conventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pretest predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.
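The use of response surface models in lieu of finite element solutions can be sketched as fitting a quadratic surrogate to a modest number of expensive runs and evaluating it cheaply inside the ANOVA-style sensitivity study. The model, parameters, and ranges below are hypothetical stand-ins, not the Ares I-X models:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 2))           # two model parameters

def expensive_model(p):                              # stand-in for an FE solve
    return 3.0 * p[0] ** 2 + 0.5 * p[1] + 0.1 * p[0] * p[1]

y = np.array([expensive_model(p) for p in X])

# Quadratic basis: [1, p1, p2, p1^2, p2^2, p1*p2]
basis = lambda P: np.column_stack([np.ones(len(P)), P[:, 0], P[:, 1],
                                   P[:, 0] ** 2, P[:, 1] ** 2, P[:, 0] * P[:, 1]])
coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)  # fit the response surface

surrogate = lambda P: basis(P) @ coef                # cheap evaluations for ANOVA
```

Once fitted, the surrogate can be sampled thousands of times for variance decomposition at negligible cost compared with repeated finite element solutions.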

  20. Finite Element Model Calibration Approach for Ares I-X

    NASA Technical Reports Server (NTRS)

    Horta, Lucas G.; Reaves, Mercedes C.; Buehrle, Ralph D.; Templeton, Justin D.; Lazor, Daniel R.; Gaspar, James L.; Parks, Russel A.; Bartolotta, Paul A.

    2010-01-01

    Ares I-X is a pathfinder vehicle concept under development by NASA to demonstrate a new class of launch vehicles. Although this vehicle is essentially a shell of what the Ares I vehicle will be, efforts are underway to model and calibrate the analytical models before its maiden flight. Work reported in this document will summarize the model calibration approach used including uncertainty quantification of vehicle responses and the use of nonconventional boundary conditions during component testing. Since finite element modeling is the primary modeling tool, the calibration process uses these models, often developed by different groups, to assess model deficiencies and to update parameters to reconcile test with predictions. Data for two major component tests and the flight vehicle are presented along with the calibration results. For calibration, sensitivity analysis is conducted using Analysis of Variance (ANOVA). To reduce the computational burden associated with ANOVA calculations, response surface models are used in lieu of computationally intensive finite element solutions. From the sensitivity studies, parameter importance is assessed as a function of frequency. In addition, the work presents an approach to evaluate the probability that a parameter set exists to reconcile test with analysis. Comparisons of pre-test predictions of frequency response uncertainty bounds with measured data, results from the variance-based sensitivity analysis, and results from component test models with calibrated boundary stiffness models are all presented.

  1. NPV Sensitivity Analysis: A Dynamic Excel Approach

    ERIC Educational Resources Information Center

    Mangiero, George A.; Kraten, Michael

    2017-01-01

    Financial analysts generally create static formulas for the computation of NPV. When they do so, however, it is not readily apparent how sensitive the value of NPV is to changes in multiple interdependent and interrelated variables. It is the aim of this paper to analyze this variability by employing a dynamic, visually graphic presentation using…

  2. 'Coxiella burnetii' Vaccine Development: Lipopolysaccharide Structural Analysis

    DTIC Science & Technology

    1991-02-20

    Analytical instrumentation and methodology are presented for the determination of endotoxin-related structures at much improved sensitivity and specificity.

  3. Analysis of Composite Panels Subjected to Thermo-Mechanical Loads

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1999-01-01

    The results of a detailed study of the effect of a cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to a through-the-thickness temperature gradient combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.

  4. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  5. Establishing the Capability of a 1D SVAT Modelling Scheme in Predicting Key Biophysical Vegetation Characterisation Parameters

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah

    2015-04-01

    Sensitivity analysis (SA) is an integral and important validation step for a computer simulation model before it is used to perform any kind of analysis. In the present work, we present the results of a SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model utilising a cutting-edge and robust Global Sensitivity Analysis (GSA) approach, based on the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance and Vegetation Height. The external CO2 concentration in the leaf and the O3 concentration in the air also exhibited significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in the light of the rapidly expanding use of this model worldwide, which also includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere).
Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.

  6. Relationship between interpersonal sensitivity and leukocyte telomere length.

    PubMed

    Suzuki, Akihito; Matsumoto, Yoshihiko; Enokido, Masanori; Shirata, Toshinori; Goto, Kaoru; Otani, Koichi

    2017-10-10

    Telomeres are repetitive DNA sequences located at the ends of chromosomes, and telomere length is a biological marker of cellular aging. Interpersonal sensitivity, an excessive sensitivity to the behavior and feelings of others, is one of the vulnerability factors for depression. In the present study, we examined the effect of interpersonal sensitivity on telomere length in healthy subjects. The subjects were 159 unrelated healthy Japanese volunteers. Mean age ± SD (range) of the subjects was 42.3 ± 7.8 (30-61) years. Interpersonal sensitivity was assessed by the Japanese version of the Interpersonal Sensitivity Measure (IPSM). Leukocyte telomere length was determined by a quantitative real-time PCR method. Higher total IPSM scores were significantly (β = -0.163, p = 0.038) related to shorter telomere length. In the sub-scale analysis, higher timidity scores were significantly (β = -0.220, p = 0.044) associated with shorter telomere length. The present study suggests that subjects with higher interpersonal sensitivity have shorter leukocyte telomere length, implying that interpersonal sensitivity has an impact on cellular aging.

  7. 3MRA UNCERTAINTY AND SENSITIVITY ANALYSIS

    EPA Science Inventory

    This presentation discusses the Multimedia, Multipathway, Multireceptor Risk Assessment (3MRA) modeling system. The outline of the presentation is: modeling system overview - 3MRA versions; 3MRA version 1.0; national-scale assessment dimensionality; SuperMUSE: windows-based super...

  8. Analysis and comparison of sleeping posture classification methods using pressure sensitive bed system.

    PubMed

    Hsia, C C; Liou, K J; Aung, A P W; Foo, V; Huang, W; Biswas, J

    2009-01-01

    Pressure ulcers are a common problem for bedridden patients. Caregivers need to reposition a patient's sleeping posture every two hours in order to reduce the risk of ulcers. This study presents the use of kurtosis and skewness estimation, principal component analysis (PCA) and support vector machines (SVMs) for sleeping posture classification using a cost-effective pressure-sensitive mattress, which can help caregivers make correct sleeping posture changes for the prevention of pressure ulcers.
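A minimal sketch of such a PCA-plus-SVM classification pipeline, using scikit-learn on synthetic stand-in pressure data (the mat geometry, class structure, and all parameters are assumptions, not the authors' setup):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for pressure-mat frames: each sample is a flattened
# 16x16 pressure image; labels are posture classes (0=supine, 1=left, 2=right).
rng = np.random.default_rng(42)
n_per_class, n_sensors = 60, 256
centers = rng.normal(0.0, 1.0, size=(3, n_sensors))
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_sensors)) for c in centers])
y = np.repeat([0, 1, 2], n_per_class)

# PCA compresses the redundant sensor grid; the SVM separates the postures.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X, y)
train_acc = clf.score(X, y)
```

In practice the features would be the statistical descriptors the abstract mentions (e.g., per-region kurtosis and skewness) rather than raw sensor values, and accuracy would be assessed on held-out recordings.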

  9. Using the Mount Pinatubo Volcanic Eruption to Determine Climate Sensitivity: Comments on "Climate Forcing by the Volcanic Eruption of Mount Pinatubo" by David H. Douglass and Robert S. Knox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wigley, T L; Ammann, C M; Santer, B D

    2005-04-22

    Douglass and Knox [2005], hereafter referred to as DK, present an analysis of the observed cooling following the 1991 Mt. Pinatubo eruption and claim that these data imply a very low value for the climate sensitivity (equivalent to 0.6 °C equilibrium warming for a CO2 doubling). We show here that their analysis is flawed and their results are incorrect.

  10. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    PubMed

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to provide (a) insight to the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.

  11. Sensitivity analysis of the space shuttle to ascent wind profiles

    NASA Technical Reports Server (NTRS)

    Smith, O. E.; Austin, L. D., Jr.

    1982-01-01

    A parametric sensitivity analysis of the space shuttle ascent flight with respect to the wind profile is presented. Engineering systems parameters are obtained by flight simulations using wind profile models and samples of detailed (Jimsphere) wind profile measurements. The wind models used are the synthetic vector wind model, with and without the design gust, and a model of the vector wind change with respect to time. From these comparative analyses, insight is gained into the contribution of winds to ascent subsystem flight parameters.

  12. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.

  13. Sensitivity of control-augmented structure obtained by a system decomposition method

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Bloebaum, Christina L.; Hajela, Prabhat

    1988-01-01

    The verification of a method for computing sensitivity derivatives of a coupled system is presented. The method deals with a system whose analysis can be partitioned into subsets that correspond to disciplines and/or physical subsystems that exchange input-output data with each other. The method uses the partial sensitivity derivatives of the output with respect to input obtained for each subset separately to assemble a set of linear, simultaneous, algebraic equations that are solved for the derivatives of the coupled system response. This sensitivity analysis is verified using an example of a cantilever beam augmented with an active control system to limit the beam's dynamic displacements under an excitation force. The verification shows good agreement of the method with reference data obtained by a finite difference technique involving entire system analysis. The usefulness of a system sensitivity method in optimization applications by employing a piecewise-linear approach to the same numerical example is demonstrated. The method's principal merits are its intrinsically superior accuracy in comparison with the finite difference technique, and its compatibility with the traditional division of work in complex engineering tasks among specialty groups.
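The assembly step described above can be sketched for two coupled subsystems: given each subset's partial derivatives of output with respect to input, the total derivatives of the coupled response follow from a set of linear simultaneous equations. The scalar coupling values below are illustrative, not from the beam/control example:

```python
import numpy as np

# Partial derivatives of each subsystem's output w.r.t. its inputs,
# obtained from separate (illustrative) subsystem analyses.
dfa_dyb = np.array([[0.2]])   # subsystem A output w.r.t. subsystem B output
dfb_dya = np.array([[0.5]])   # subsystem B output w.r.t. subsystem A output
dfa_dx  = np.array([[1.0]])   # subsystem A output w.r.t. design variable x
dfb_dx  = np.array([[0.3]])   # subsystem B output w.r.t. design variable x

# Linear simultaneous equations for the total derivatives of the coupled
# system: (I - coupling) * [dyA/dx; dyB/dx] = [dfA/dx; dfB/dx].
I = np.eye(1)
lhs = np.block([[I,        -dfa_dyb],
                [-dfb_dya,  I      ]])
rhs = np.vstack([dfa_dx, dfb_dx])
dya_dx, dyb_dx = np.linalg.solve(lhs, rhs)
```

Because only local partials are needed, each discipline can compute its own derivatives independently, matching the division of work among specialty groups noted in the abstract.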

  14. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niemeyer, Kyle E.; Sung, Chih-Jen; Raju, Mandhapati P.

    2010-09-15

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination ofmore » the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane generated using the DRGEP, DRGASA, and DRGEPSA methods are presented and compared to illustrate the improvement of DRGEPSA. From a detailed reaction mechanism for n-alkanes covering n-octane to n-hexadecane with 2115 species and 8157 reactions, two skeletal mechanisms for n-decane generated using DRGEPSA, one covering a comprehensive range of temperature, pressure, and equivalence ratio conditions for autoignition and the other limited to high temperatures, are presented and validated. The comprehensive skeletal mechanism consists of 202 species and 846 reactions and the high-temperature skeletal mechanism consists of 51 species and 256 reactions. Both mechanisms are further demonstrated to well reproduce the results of the detailed mechanism in perfectly-stirred reactor and laminar flame simulations over a wide range of conditions. 
The comprehensive and high-temperature n-decane skeletal mechanisms are included as supplementary material with this article.
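The graph-based pruning step that DRGEP contributes can be sketched in a few lines. The toy mechanism, species names, and coefficient values below are hypothetical; the sketch only illustrates the idea of propagating direct interaction coefficients along graph paths and removing species whose overall coefficient falls below an error threshold.

```python
# Hypothetical 5-species toy mechanism: direct[A][B] is the direct
# interaction coefficient of species B with respect to species A.
direct = {
    "fuel":    {"interm1": 1.0, "interm2": 0.6},
    "interm1": {"product": 0.9, "minor": 0.05},
    "interm2": {"product": 0.7},
    "minor":   {},
    "product": {},
}

def overall_coefficients(target):
    """DRGEP-style error propagation: the overall coefficient of each
    species is the maximum over graph paths from the target of the
    product of direct coefficients along the path."""
    best = {target: 1.0}
    frontier = [target]
    while frontier:
        a = frontier.pop()
        for b, r in direct.get(a, {}).items():
            path_value = best[a] * r
            if path_value > best.get(b, 0.0):
                best[b] = path_value
                frontier.append(b)
    return best

coeffs = overall_coefficients("fuel")
threshold = 0.1  # error-controlled cutoff (illustrative value)
skeletal = {s for s, v in coeffs.items() if v >= threshold}
```

In the full method, the species surviving this graph pass are then screened one at a time by sensitivity analysis, which is what keeps the final skeletal mechanism small.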

  15. Design and analysis of a silicon-based antiresonant reflecting optical waveguide chemical sensor

    NASA Astrophysics Data System (ADS)

    Remley, Kate A.; Weisshaar, Andreas

    1996-08-01

    The design of a silicon-based antiresonant reflecting optical waveguide (ARROW) chemical sensor is presented, and its theoretical performance is compared with that of a conventional structure. The use of an ARROW structure permits incorporation of a thick guiding region for efficient coupling to a single-mode fiber. A high-index overlay is added to fine tune the sensitivity of the ARROW chemical sensor. The sensitivity of the sensor is presented, and design trade-offs are discussed.

  16. Analysis of thermal performance of penetrated multi-layer insulation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Yoo, Chai H.; Barrett, William E.

    1988-01-01

    Results of research performed to study the sensitivity of multi-layer insulation blanket performance to penetrations through the blanket are presented. This paper describes experimental data obtained from thermal vacuum tests of various penetration geometries similar to those present on the Hubble Space Telescope. The data obtained from these tests are presented in terms of required-electrical-power sensitivity factors referenced to a multi-layer blanket without a penetration. The results of these experiments indicate that a significant increase in electrical power is required to overcome the radiation heat losses in the vicinity of the penetrations.

  17. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultraviolet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived; these constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced-order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC-generated estimates. The results of such an analysis are presented, indicating both the performance and the sensitivity of the OBC EKF.

  18. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss-of-crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk on the crew module of the debris resulting from an explosion of the launch vehicle. The model consisted of a debris catalog describing the number, size, and imparted velocity of each piece of debris, a method to compute the trajectories of the debris, and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort, and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  19. Performance of the 2015 American College of Rheumatology/European League Against Rheumatism gout classification criteria in Thai patients.

    PubMed

    Louthrenoo, Worawit; Jatuworapruk, Kanon; Lhakum, Panomkorn; Pattamapaspong, Nuttaya

    2017-05-01

    To evaluate the sensitivity and specificity of the 2015 American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) gout classification criteria in Thai patients presenting with acute arthritis in a real-life setting. Data were analyzed on consecutive patients presenting with arthritis of less than 2 weeks' duration. Sensitivity and specificity were calculated using the presence of monosodium urate (MSU) crystals in the synovial fluid or tissue aspirate as the gold standard for gout diagnosis. Subgroup analysis was performed in patients with early disease (≤2 years), established disease (>2 years), and those without tophus. Additional analysis was also performed in non-tophaceous gout patients, and patients with acute calcium pyrophosphate dihydrate crystal arthritis were used as controls. One hundred and nine gout and 74 non-gout patients participated in this study. The full ACR/EULAR classification criteria had sensitivity and specificity of 90.2% and 90.0%, respectively, and 90.2% and 85.0%, respectively, when synovial fluid microscopy was excluded. Clinical-only criteria yielded sensitivity and specificity of 79.8% and 87.8%, respectively. The criteria performed well among patients with early and non-tophaceous disease, but had lower specificity in patients with established disease. The variation of serum uric acid level was a major limitation of the classification criteria. The ACR/EULAR classification criteria had high sensitivity and specificity in Thai patients presenting with acute arthritis, even when clinical criteria alone were used.
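The sensitivity and specificity figures quoted above come from a standard 2x2 comparison against the gold standard. As a minimal illustration (the counts below are made up, not the study's data):

```python
# 2x2 diagnostic-accuracy computation (all counts below are invented):
# tp/fn are gold-standard-positive patients classified positive/negative,
# tn/fp are gold-standard-negative patients classified negative/positive.
def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# e.g. 90 of 100 crystal-proven gout patients classified as gout,
# and 63 of 70 non-gout patients classified as non-gout:
sens, spec = sens_spec(tp=90, fn=10, tn=63, fp=7)
```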

  20. Sensitivity analysis for oblique incidence reflectometry using Monte Carlo simulations.

    PubMed

    Kamran, Faisal; Andersen, Peter E

    2015-08-10

    Oblique incidence reflectometry has developed into an effective, noncontact, and noninvasive measurement technology for the quantification of both the reduced scattering and absorption coefficients of a sample. The optical properties are deduced by analyzing only the shape of the reflectance profiles. This article presents a sensitivity analysis of the technique in turbid media. Monte Carlo simulations are used to investigate the technique and its potential to distinguish the small changes between different levels of scattering. We identify regions of the dynamic range of optical properties in which the system demands vary in order to detect subtle changes in the structure of the medium, expressed as measured optical properties. Effects of variation in anisotropy are discussed and results presented. Finally, experimental data from milk products with different fat content are considered as examples for comparison.

  1. Sensitivity Enhancement of FBG-Based Strain Sensor.

    PubMed

    Li, Ruiya; Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Li, Tianliang; Mao, Jian

    2018-05-17

    A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by bonding the FBG to a substrate with a lever structure. This configuration mechanically amplifies the strain of the FBG to enhance overall sensitivity. As the configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis. This is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent strain-sensitivity-enhancing property over a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments.
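The reported 5.2x enhancement can be checked against the usual bare-FBG strain response, Δλ = λ_B(1 − p_e)ε. The sketch below assumes typical values λ_B = 1550 nm and p_e ≈ 0.22, which are not stated in the abstract:

```python
# Bare-FBG strain sensitivity: d(lambda)/d(strain) = lambda_B * (1 - p_e).
# lambda_B = 1550 nm and p_e = 0.22 are typical assumed values,
# not taken from the paper.
lambda_B_nm = 1550.0
p_e = 0.22

# Convert nm/strain to pm/microstrain: *1e3 (nm -> pm), *1e-6 (strain -> ue).
bare_pm_per_ue = lambda_B_nm * (1 - p_e) * 1e3 * 1e-6

# The abstract reports the enhanced sensor is 5.2x a bare FBG:
enhanced_pm_per_ue = 5.2 * bare_pm_per_ue
```

With these assumed values the bare sensitivity is about 1.2 pm/με, so a 5.2x mechanical amplification lands close to the 6.2 pm/με the abstract reports.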

  2. Sensitivity Enhancement of FBG-Based Strain Sensor

    PubMed Central

    Chen, Yiyang; Tan, Yuegang; Zhou, Zude; Mao, Jian

    2018-01-01

    A novel fiber Bragg grating (FBG)-based strain sensor with high sensitivity is presented in this paper. The proposed FBG-based strain sensor enhances sensitivity by bonding the FBG to a substrate with a lever structure. This configuration mechanically amplifies the strain of the FBG to enhance overall sensitivity. As the configuration has a high stiffness, the proposed sensor can achieve a high resonant frequency and a wide dynamic working range. The sensing principle is presented, and the corresponding theoretical model is derived and validated. Experimental results demonstrate that the developed FBG-based strain sensor achieves an enhanced strain sensitivity of 6.2 pm/με, which is consistent with the theoretical analysis. This is 5.2 times the strain sensitivity of a bare fiber Bragg grating strain sensor. The dynamic characteristics of this sensor are investigated through the finite element method (FEM) and experimental tests. The developed sensor exhibits an excellent strain-sensitivity-enhancing property over a wide frequency range. The proposed high-sensitivity FBG-based strain sensor can be used for small-amplitude micro-strain measurement in harsh industrial environments. PMID:29772826

  3. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and to improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from it for seven observation scenarios and three instrument systems.

  4. Adjoint Sensitivity Analysis for Scale-Resolving Turbulent Flow Solvers

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick; Garai, Anirban; Diosady, Laslo; Murman, Scott

    2017-11-01

    Adjoint-based sensitivity analysis methods are powerful design tools for engineers who use computational fluid dynamics. In recent years, these engineers have started to use scale-resolving simulations like large-eddy simulations (LES) and direct numerical simulations (DNS), which resolve more scales in complex flows with unsteady separation and jets than the widely-used Reynolds-averaged Navier-Stokes (RANS) methods. However, the conventional adjoint method computes large, unusable sensitivities for scale-resolving simulations, which, unlike RANS simulations, exhibit the chaotic dynamics inherent in turbulent flows. Sensitivity analysis based on least-squares shadowing (LSS) avoids the issues encountered by conventional adjoint methods, but has a high computational cost even for relatively small simulations. The following talk discusses a more computationally efficient formulation of LSS, ``non-intrusive'' LSS, and its application to turbulent flows simulated with a discontinuous-Galerkin spectral-element-method LES/DNS solver. Results are presented for the minimal flow unit, a turbulent channel flow with a limited streamwise and spanwise domain.

  5. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and can even result in failure of the structure in service. In this paper, an analytical method for the sensitivity analysis of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front-net cables, thick tension ties, relatively slender boundary cables, and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  6. Spectral sensitivity characteristics simulation for silicon p-i-n photodiode

    NASA Astrophysics Data System (ADS)

    Urchuk, S. U.; Legotin, S. A.; Osipov, U. V.; Elnikov, D. S.; Didenko, S. I.; Astahov, V. P.; Rabinovich, O. I.; Yaromskiy, V. P.; Kuzmina, K. A.

    2015-11-01

    In this paper, simulation results for the spectral sensitivity characteristics of silicon p-i-n photodiodes are presented. The effects of the semiconductor material characteristics (doping level, carrier lifetime, surface recombination velocity), the construction, and the operation modes on the characteristics of the photosensitive structures were analyzed in order to optimize them.

  7. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, thus providing the possibility of delivering much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because as the fundamental limitations that spectral detail places on information collection are reduced, the limitations presented by noise become even more important. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set illustrating this effect.

  8. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
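The core DELSA idea, evaluating scaled derivative-based local sensitivities at many sampled points and inspecting their distribution, can be sketched on a toy two-parameter model (the model, sampling ranges, and scaling below are invented for illustration):

```python
import random

# Toy model y = a*x + b*x^2 evaluated at x = 1 (invented for illustration).
def model(a, b, x=1.0):
    return a * x + b * x * x

def scaled_local_sensitivities(a, b, h=1e-6):
    """Derivative-based local sensitivities, scaled by parameter value."""
    y0 = model(a, b)
    d_a = (model(a + h, b) - y0) / h
    d_b = (model(a, b + h) - y0) / h
    return abs(d_a * a), abs(d_b * b)

# Evaluate local sensitivities at many sampled parameter sets and ask,
# DELSA-style, how often each parameter dominates across the space.
random.seed(0)
samples = [(random.uniform(0.1, 1.0), random.uniform(0.1, 1.0))
           for _ in range(200)]
fraction_a_dominant = sum(
    1 for a, b in samples
    if scaled_local_sensitivities(a, b)[0] > scaled_local_sensitivities(a, b)[1]
) / len(samples)
```

The point of the distribution (rather than a single global index) is exactly the behavior the abstract describes: a parameter can dominate in one region of parameter space and be unimportant in another.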

  9. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  10. Compliance and stress sensitivity of spur gear teeth

    NASA Technical Reports Server (NTRS)

    Cornell, R. W.

    1983-01-01

    The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time-history, interactive, closed-form solution for the dynamic tooth loads of both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity of three involute tooth forms as a function of load position. The compliance analysis includes an improved fillet/foundation model. The stress sensitivity analysis is a modified version of the Heywood method, with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation against test, finite element, and analytic transformation results, which showed good agreement.

  11. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for the diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of the included studies were calculated, pooled, and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In the sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but the sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for the diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  12. Longitudinal measurements of luminance and chromatic contrast sensitivity: comparison between wavefront-guided LASIK and contralateral PRK for myopia.

    PubMed

    Barboni, Mirella Telles Salgueiro; Feitosa-Santana, Claudia; Barreto Junior, Jackson; Lago, Marcos; Bechara, Samir Jacob; Alves, Milton Ruiz; Ventura, Dora Fix

    2013-10-01

    The present study aimed to compare postoperative contrast sensitivity functions between wavefront-guided LASIK eyes and their contralateral wavefront-guided PRK eyes. The participants were 11 healthy subjects (mean age = 32.4 ± 6.2 years) who had myopic astigmatism. The spatial contrast sensitivity functions were measured before and three times after the surgery. Psycho software and a Cambridge graphics board (VSG 2/4) were used to measure luminance, red-green, and blue-yellow spatial contrast sensitivity functions (from 0.85 to 13.1 cycles/degree). Longitudinal analysis and comparison between surgeries were performed. There was no significant contrast sensitivity change during the one-year follow-up measurements for either LASIK or PRK eyes. The comparison between procedures showed no differences at 12 months postoperatively. The present data showed similar contrast sensitivities during the one-year follow-up of wavefront-guided refractive surgeries. Moreover, the one-year postoperative data showed no differences in the effects of either wavefront-guided LASIK or wavefront-guided PRK on the luminance and chromatic spatial contrast sensitivity functions.

  13. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical job. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not cover this case for the dynamic log gains. Results We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing the dynamic sensitivities of an ODE system, the step size determined by the model equations can be used in the computation of the time profile and dynamic sensitivities with moderate accuracy, even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with a Rosenbrock stiff integrator based on the indirect method.
The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
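For readers unfamiliar with direct-method sensitivity computation, the idea is to integrate the sensitivity equations ds/dt = (∂f/∂y)s + ∂f/∂p alongside the model ODE. A minimal example, not the authors' algorithm, for the scalar model dy/dt = −py:

```python
import math

# Model: dy/dt = -p*y with y(0) = 1. The sensitivity s = dy/dp obeys
# ds/dt = (df/dy)*s + df/dp = -p*s - y, with s(0) = 0.
# Forward-Euler integration of both equations together.
p, y, s, t, dt = 2.0, 1.0, 0.0, 0.0, 1e-4
while t < 1.0:
    dy = -p * y
    ds = -p * s - y
    y += dt * dy
    s += dt * ds
    t += dt

# Analytic check: y = exp(-p*t), dy/dp = -t*exp(-p*t); at t = 1:
analytic_s = -1.0 * math.exp(-p)
```

A decoupled direct method differs mainly in solving the sensitivity system separately from (but reusing the Jacobian of) the model system, which is what makes stiff problems tractable.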

  14. Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling

    NASA Astrophysics Data System (ADS)

    Sung, Chih-Jen; Niemeyer, Kyle E.

    2010-05-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.

  15. A sensitivity analysis of process design parameters, commodity prices and robustness on the economics of odour abatement technologies.

    PubMed

    Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl

    2012-01-01

    The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing, and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of the geographical location on the net present value calculated for a 20-year lifespan (NPV20) of each technology and its robustness towards typical process fluctuations and operational upsets were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback due to the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is as relevant as overall cost (NPV20) in an economic evaluation, the hybrid technology moves up next to biotrickling filtration as one of the most preferred technologies. Copyright © 2012 Elsevier Inc. All rights reserved.
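The NPV20 figure of merit used above discounts 20 years of operating costs back to the present. A minimal sketch with invented capital costs, operating costs, and discount rate (not the paper's data):

```python
# NPV over a 20-year lifespan: capital cost up front, operating costs
# discounted annually. All numbers below are invented for illustration.
def npv20(capital_cost, annual_operating_cost, discount_rate, years=20):
    return -capital_cost - sum(
        annual_operating_cost / (1 + discount_rate) ** year
        for year in range(1, years + 1)
    )

# A cheap-to-run biological unit vs. a cheaper-to-build chemical one:
npv_biological = npv20(100_000, 10_000, 0.05)
npv_chemical = npv20(60_000, 25_000, 0.05)
```

Over a 20-year horizon the lower operating cost dominates the higher capital cost, which is the shape of the comparison the abstract reports for biological techniques.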

  16. Navigation and Dispersion Analysis of the First Orion Exploration Mission

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; D'Souza, Christopher

    2015-01-01

    This paper presents the Orion EM-1 linear covariance analysis for the DRO mission. The delta-V statistics for each maneuver are presented. Included in the memo are several sensitivity analyses: variation in the time of OTC-1 (the first outbound correction maneuver), variation in the accuracy of the trans-lunar injection, and variation in the length of the optical navigation passes.

  17. A sensitivity analysis method for the body segment inertial parameters based on ground reaction and joint moment regressor matrices.

    PubMed

    Futamure, Sumire; Bonnet, Vincent; Dumas, Raphael; Venture, Gentiane

    2017-11-07

    This paper presents a method allowing a simple and efficient sensitivity analysis of the dynamic parameters of a complex whole-body human model. The proposed method is based on the ground reaction and joint moment regressor matrices, developed initially in robotics system identification theory and involved in the equations of motion of the human body. The regressor matrices are linear with respect to the segment inertial parameters, allowing simple sensitivity analysis methods to be used. The sensitivity analysis method was applied to gait dynamics and kinematics data of nine subjects with a 15-segment 3D model of the locomotor apparatus. According to the proposed sensitivity indices, 76 of the 150 segment inertial parameters of the mechanical model were considered not influential for gait. The main findings were that the segment masses were influential and that, with the exception of the trunk, the moments of inertia were not influential for the computation of the ground reaction forces and moments and the joint moments. The same method also shows numerically that at least 90% of the lower-limb joint moments during the stance phase can be estimated from force-plate and kinematics data alone, without knowing any of the segment inertial parameters. Copyright © 2017 Elsevier Ltd. All rights reserved.
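The property the method relies on, that joint moments are linear in the inertial parameters (τ = Y φ), makes the sensitivities simply the columns of the regressor matrix Y. A toy numeric sketch with invented matrices, not a real human model:

```python
# Joint moments are linear in the inertial parameters: tau = Y @ phi.
# Y and phi below are invented toy values, not a real human model.
Y = [
    [2.0, 0.0, 1.0],
    [0.5, 3.0, 0.0],
]
phi = [1.0, 2.0, 0.1]  # "segment inertial parameters"

tau = [sum(Y[i][j] * phi[j] for j in range(len(phi))) for i in range(len(Y))]

# Because the map is linear, d(tau_i)/d(phi_j) = Y[i][j]: the sensitivity
# of every moment to parameter j is column j of Y, independent of phi.
sensitivity_to_phi0 = [row[0] for row in Y]
```

This linearity is why the sensitivity analysis stays cheap: no perturbation or re-simulation of the model is needed once Y is evaluated along the motion.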

  18. Logistic Map for Cancellable Biometrics

    NASA Astrophysics Data System (ADS)

    Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr

    2017-08-01

    This paper presents the design and implementation of a secured biometric template protection system that transforms the biometric template using binary chaotic signals and three different key streams to obtain another form of the template. Its efficiency is demonstrated experimentally, and its security is investigated through key space analysis, information entropy, and key sensitivity analysis.
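
    A key-stream generator of this kind can be built from the logistic map x_{k+1} = r·x_k·(1 − x_k) in its chaotic regime. The sketch below is a generic illustration (the parameters, the XOR transform, and the key values are assumptions, not the paper's scheme): it shows invertibility with the correct key and the key sensitivity that such a security analysis evaluates.

```python
import numpy as np

def logistic_stream(x0, r=3.99, n=256):
    """Binary key stream from the chaotic logistic map x_{k+1} = r*x_k*(1-x_k)."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return np.array(bits, dtype=np.uint8)

# Hypothetical 256-bit binary biometric template.
template = np.random.default_rng(1).integers(0, 2, 256).astype(np.uint8)

protected = template ^ logistic_stream(0.3141592653)   # cancellable template
recovered = protected ^ logistic_stream(0.3141592653)  # same key: invertible
wrong_key = protected ^ logistic_stream(0.3141592654)  # key off by 1e-10

# Key sensitivity: a 1e-10 change in the key decorrelates the recovered bits.
mismatch = float(np.mean(wrong_key != template))
```

    Because nearby chaotic trajectories diverge exponentially, the wrong-key recovery disagrees with the template on roughly half the bits, while the template remains exactly recoverable (and revocable, by changing the key) with the correct key.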

  19. Pyrotechnic hazards classification and evaluation program. Phase 3, segments 1-4: Investigation of sensitivity test methods and procedures for pyrotechnic hazards evaluation and classification, part A

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.

  20. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.

  2. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis.

    PubMed

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-03-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster-Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty-sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights.
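
    The Monte Carlo step over criteria weights can be sketched generically: perturb the weight vector, renormalize, recompute the weighted overlay, and take per-cell statistics as the uncertainty surface. Everything below (weights, perturbation size, map dimensions) is an illustrative assumption, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_criteria, n_runs = 500, 4, 1000

criteria = rng.random((n_cells, n_criteria))   # standardized criterion maps
base_w = np.array([0.4, 0.3, 0.2, 0.1])        # e.g. AHP-derived weights

# Monte Carlo: perturb the weights, renormalize, recompute the weighted sum.
scores = np.empty((n_runs, n_cells))
for k in range(n_runs):
    w = np.clip(base_w + rng.normal(0, 0.05, n_criteria), 1e-6, None)
    w /= w.sum()
    scores[k] = criteria @ w

mean_score = scores.mean(axis=0)    # central susceptibility estimate per cell
uncertainty = scores.std(axis=0)    # per-cell uncertainty due to the weights
```

    Cells whose scores swing strongly with the weights get a high `uncertainty`; that is the map whose comparison across AHP and OWA motivates the study's conclusions.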

  4. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gas and air pollutant emissions. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of the uncertainties related to atmospheric emission inventories and projections provides useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information for deriving nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.

  5. The use of atmospheric measurements to constrain model predictions of ozone change from chlorine perturbations

    NASA Technical Reports Server (NTRS)

    Douglass, Anne R.; Stolarski, Richard S.

    1987-01-01

    Atmospheric photochemistry models have been used to predict the sensitivity of the ozone layer to various perturbations. These same models also predict concentrations of chemical species in the present day atmosphere which can be compared to observations. Model results for both present day values and sensitivity to perturbation depend upon input data for reaction rates, photodissociation rates, and boundary conditions. A method of combining the results of a Monte Carlo uncertainty analysis with the existing set of present atmospheric species measurements is developed. The method is used to examine the range of values for the sensitivity of ozone to chlorine perturbations that is possible within the currently accepted ranges for input data. It is found that model runs which predict ozone column losses much greater than 10 percent as a result of present fluorocarbon fluxes produce concentrations and column amounts in the present atmosphere which are inconsistent with the measurements for ClO, HCl, NO, NO2, and HNO3.

  6. Sixteen-Item Anxiety Sensitivity Index: Confirmatory Factor Analytic Evidence, Internal Consistency, and Construct Validity in a Young Adult Sample from the Netherlands

    ERIC Educational Resources Information Center

    Vujanovic, Anka A.; Arrindell, Willem A.; Bernstein, Amit; Norton, Peter J.; Zvolensky, Michael J.

    2007-01-01

    The present investigation examined the factor structure, internal consistency, and construct validity of the 16-item Anxiety Sensitivity Index (ASI; Reiss, Peterson, Gursky, & McNally, 1986) in a young adult sample (n = 420) from the Netherlands. Confirmatory factor analysis was used to comparatively evaluate two-factor, three-factor, and…

  7. SUBSURFACE RESIDENCE TIMES AS AN ALGORITHM FOR AQUIFER SENSITIVITY MAPPING: TESTING THE CONCEPT WITH GROUND WATER MODELS IN THE CONTENTNEA CREEK BASIN, NORTH CAROLINA, USA

    EPA Science Inventory

    This poster will present a modeling and mapping assessment of landscape sensitivity to non-point source pollution as applied to a hierarchy of catchment drainages in the Coastal Plain of the state of North Carolina. Analysis of the subsurface residence time of water in shallow a...

  8. Improving engineering system design by formal decomposition, sensitivity analysis, and optimization

    NASA Technical Reports Server (NTRS)

    Sobieski, J.; Barthelemy, J. F. M.

    1985-01-01

    A method for use in the design of a complex engineering system by decomposing the problem into a set of smaller subproblems is presented. Coupling of the subproblems is preserved by means of the sensitivity derivatives of the subproblem solution to the inputs received from the system. The method allows for the division of work among many people and computers.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    F. Perry; R. Youngs

    The purpose of this scientific analysis report is threefold: (1) Present a conceptual framework of igneous activity in the Yucca Mountain region (YMR) consistent with the volcanic and tectonic history of this region and the assessment of this history by experts who participated in the probabilistic volcanic hazard analysis (PVHA) (CRWMS M&O 1996 [DIRS 100116]). Conceptual models presented in the PVHA are summarized and applied in areas in which new information has been presented. Alternative conceptual models are discussed, as well as their impact on probability models. The relationship between volcanic source zones defined in the PVHA and structural features of the YMR are described based on discussions in the PVHA and studies presented since the PVHA. (2) Present revised probability calculations based on PVHA outputs for a repository footprint proposed in 2003 (BSC 2003 [DIRS 162289]), rather than the footprint used at the time of the PVHA. This analysis report also calculates the probability of an eruptive center(s) forming within the repository footprint using information developed in the PVHA. Probability distributions are presented for the length and orientation of volcanic dikes located within the repository footprint and for the number of eruptive centers (conditional on a dike intersecting the repository) located within the repository footprint. (3) Document sensitivity studies that analyze how the presence of potentially buried basaltic volcanoes may affect the computed frequency of intersection of the repository footprint by a basaltic dike. These sensitivity studies are prompted by aeromagnetic data collected in 1999, indicating the possible presence of previously unrecognized buried volcanoes in the YMR (Blakely et al. 2000 [DIRS 151881]; O'Leary et al. 2002 [DIRS 158468]). The results of the sensitivity studies are for informational purposes only and are not to be used for purposes of assessing repository performance.

  10. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  11. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
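
    A metabolome-wide significance threshold at FDR < 0.05 is commonly enforced with the Benjamini-Hochberg step-up procedure (the study's exact multiple-testing procedure is not specified in this abstract). A generic sketch with made-up p-values:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg step-up: boolean mask of discoveries at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Compare sorted p-values against the ramp q*i/m, i = 1..m.
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    keep = np.zeros(m, dtype=bool)
    keep[order[:k]] = True          # all p-values up to the largest passing rank
    return keep

# Hypothetical p-values for four metabolite-behavior associations.
keep = benjamini_hochberg([0.001, 0.010, 0.020, 0.500], q=0.05)
```

    With 301 metabolite levels tested against several behavioral dimensions, controlling the FDR rather than the family-wise error rate is what makes a handful of robust associations detectable at all.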

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

    This study evaluates the costs and benefits associated with the use of a stationary-wireless-power-transfer-enabled plug-in hybrid electric bus and determines its cost-effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep was performed over many different battery sizes, charging power levels, and numbers/locations of bus stop charging stations. The net present cost was calculated for each vehicle design and provided the basis for design evaluation. In all cases, given the assumed economic conditions, the conventional bus achieved the lowest net present cost, while the optimal plug-in hybrid electric bus scenario beat out the hybrid electric comparison scenario. The study also performed parameter sensitivity analysis under favorable and highly unfavorable market penetration assumptions. The analysis identifies fuel-saving opportunities with plug-in hybrid electric bus scenarios at cumulative net present costs not too dissimilar from those for conventional buses.

  13. [Possibilities of the TruScreen for screening of precancer and cancer of the uterine cervix].

    PubMed

    Zlatkov, V

    2009-01-01

    The classic approach to detection of pre-cancer and cancer of the uterine cervix includes cytological examination, followed by colposcopic assessment of the detected cytological abnormalities. Real-time devices use in-vivo techniques for the measurement, computerized analysis, and classification of different types of cervical tissue. The aim of the present review is to present the technical characteristics and to discuss the diagnostic possibilities of TruScreen, an automated optical-electronic system for cervical screening. Analysis of the diagnostic value of the method reported in the literature for different grades of intraepithelial lesions shows that it has higher sensitivity (67-70%) and lower specificity (81%) in comparison to the Pap test (45-69% sensitivity and 95% specificity). This makes the method suitable for independent primary screening, as well as for increasing the diagnostic assurance of the cytological method.

  14. Sampling and analysis of airborne resin acids and solvent-soluble material derived from heated colophony (rosin) flux: a method to quantify exposure to sensitizing compounds liberated during electronics soldering.

    PubMed

    Smith, P A; Son, P S; Callaghan, P M; Jederberg, W W; Kuhlmann, K; Still, K R

    1996-07-17

    Components of colophony (rosin) resin acids are sensitizers through dermal and pulmonary exposure to heated and unheated material. Significant work in the literature identifies specific resin acids and their oxidation products as sensitizers. Pulmonary exposure to colophony sensitizers has been estimated indirectly through formaldehyde exposure. To assess pulmonary sensitization from airborne resin acids, direct measurement is desired, as the degree to which aldehyde exposure correlates with that of resin acids during colophony heating is undefined. Any analytical method proposed should be applicable to a range of compounds and should also identify specific compounds present in a breathing zone sample. This work adapts OSHA Sampling and Analytical Method 58, which is designed to provide airborne concentration data for coal tar pitch volatile solids by air filtration through a glass fiber filter, solvent extraction of the filter, and gravimetric analysis of the non-volatile extract residue. In addition to data regarding total soluble material captured, a portion of the extract may be subjected to compound-specific analysis. Levels of soluble solids found during personal breathing zone sampling during electronics soldering in a Naval Aviation Depot ranged from below the "reliable quantitation limit" reported in the method to 7.98 mg/m3. Colophony-spiked filters analyzed in accordance with the method (modified) produced a limit of detection for total solvent-soluble colophony solids of 10 micrograms/filter. High performance liquid chromatography was used to identify abietic acid present in a breathing zone sample.

  15. Reliability Coupled Sensitivity Based Design Approach for Gravity Retaining Walls

    NASA Astrophysics Data System (ADS)

    Guha Ray, A.; Baidya, D. K.

    2012-09-01

    Sensitivity analysis involving different random variables and different potential failure modes of a gravity retaining wall highlights the fact that high sensitivity of a particular variable for a particular mode of failure does not necessarily imply a remarkable contribution to the overall failure probability. The present paper aims at identifying a probabilistic risk factor (Rf) for each random variable based on the combined effects of the failure probability (Pf) of each mode of failure of a gravity retaining wall and the sensitivity of each random variable for these failure modes. Pf is calculated by Monte Carlo simulation, and the sensitivity of each random variable is assessed by F-test analysis. The structure, redesigned by modifying the original random variables with the risk factors, is safe against all the considered variations of the random variables. It is observed that Rf for the friction angle of the backfill soil (φ1) increases and that for the cohesion of the foundation soil (c2) decreases with increasing variation of φ1, while Rf for the unit weights (γ1 and γ2) of both soils and for the friction angle of the foundation soil (φ2) remains almost constant under variation of the soil properties. The results compared well with some existing deterministic and probabilistic methods and were found to be cost-effective. It is seen that if the variation of φ1 remains within 5%, a significant reduction in cross-sectional area can be achieved, but if the variation exceeds 7-8%, the structure needs to be modified. Finally, design guidelines for different wall dimensions, based on the present approach, are proposed.
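
    The Monte Carlo estimate of Pf amounts to sampling the random variables, evaluating a limit-state function g, and counting failures (g < 0). A toy sliding-mode sketch with illustrative distributions follows; none of the variables or numbers are taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Hypothetical sliding limit state: capacity R = W*tan(phi) vs demand S.
phi = np.radians(rng.normal(32.0, 2.0, n))   # backfill friction angle (deg)
W = rng.normal(500.0, 25.0, n)               # wall weight per metre (kN/m)
S = rng.normal(180.0, 20.0, n)               # horizontal thrust (kN/m)

g = W * np.tan(phi) - S                      # g < 0 means sliding failure
Pf = float(np.mean(g < 0.0))                 # Monte Carlo failure probability
```

    Repeating this per failure mode, and weighting each variable's F-test sensitivity by the mode's Pf, is the combination the paper's risk factor Rf formalizes.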

  16. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.

  17. Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography

    PubMed Central

    Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.

    2016-01-01

    A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012

  18. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  19. Shape sensitivity analysis of flutter response of a laminated wing

    NASA Technical Reports Server (NTRS)

    Bergen, Fred D.; Kapania, Rakesh K.

    1988-01-01

    A method is presented for calculating the shape sensitivity of a wing aeroelastic response with respect to changes in geometric shape. Yates' modified strip method is used in conjunction with Giles' equivalent plate analysis to predict the flutter speed, frequency, and reduced frequency of the wing. Three methods are used to calculate the sensitivity of the eigenvalue. The first method is purely a finite difference calculation of the eigenvalue derivative directly from the solution of the flutter problem corresponding to the two different values of the shape parameters. The second method uses an analytic expression for the eigenvalue sensitivities of a general complex matrix, where the derivatives of the aerodynamic, mass, and stiffness matrices are computed using a finite difference approximation. The third method also uses an analytic expression for the eigenvalue sensitivities, but the aerodynamic matrix is computed analytically. All three methods are found to be in good agreement with each other. The sensitivities of the eigenvalues were used to predict the flutter speed, frequency, and reduced frequency. These approximations were found to be in good agreement with those obtained using a complete reanalysis.
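
    The analytic expression underlying the second and third methods is the standard derivative formula for a simple eigenvalue of a general matrix, dλ/dp = yᵀ(dA/dp)x / (yᵀx), with x and y the right and left eigenvectors of λ. A small numerical check against finite differences, with random matrices standing in for the aeroelastic flutter matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n))    # stand-in for the flutter system matrix
dA = rng.normal(size=(n, n))   # stand-in for dA/dp of a shape parameter p

w, V = np.linalg.eig(A)
wl, U = np.linalg.eig(A.T)     # left eigenvectors are eigenvectors of A^T
i = int(np.argmax(w.real))     # follow one simple eigenvalue
j = int(np.argmin(np.abs(wl - w[i])))  # pair it with its left eigenvector
x, y = V[:, i], U[:, j]

analytic = (y @ dA @ x) / (y @ x)   # dλ/dp = y^T (dA/dp) x / (y^T x)

h = 1e-7                            # finite-difference check on A + h*dA
w2 = np.linalg.eig(A + h * dA)[0]
fd = (w2[int(np.argmin(np.abs(w2 - w[i])))] - w[i]) / h
```

    The analytic route needs only one eigen-solution plus the matrix derivatives, whereas pure finite differencing (the paper's first method) requires a full flutter re-solution per parameter, which is why the three methods differ in cost but agree in value.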

  20. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite-volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with those obtained using an unfactored, preconditioned conjugate gradient (PCG) approach and an extensively validated CFD code. Then, the sensitivities computed with the present method are compared with those obtained using the finite-difference and PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked-arrow wing at Mach 2.4. Despite the expected increase in computational time, the results indicate that shape optimization problems, which require large numbers of grid points, can be resolved with a gradient-based approach.

  1. Fast and sensitive optical toxicity bioassay based on dual wavelength analysis of bacterial ferricyanide reduction kinetics.

    PubMed

    Pujol-Vila, F; Vigués, N; Díaz-González, M; Muñoz-Berbel, X; Mas, J

    2015-05-15

    Global urban and industrial growth, with the associated environmental contamination, is promoting the development of rapid and inexpensive general toxicity methods. Current microbial methodologies for general toxicity determination rely on either bioluminescent bacteria and a specific medium solution (i.e. Microtox(®)) or low-sensitivity, diffusion-limited protocols (i.e. amperometric microbial respirometry). In this work, a fast and sensitive optical toxicity bioassay based on dual-wavelength analysis of bacterial ferricyanide reduction kinetics is presented, using Escherichia coli as a bacterial model. Ferricyanide reduction kinetic analysis (variation of ferricyanide absorption with time), much more sensitive than single absorbance measurements, allowed for direct and fast toxicity determination without pre-incubation steps (assay time = 10 min) while minimizing biomass interference. Dual-wavelength analysis at 405 nm (ferricyanide and biomass) and 550 nm (biomass) allowed for ferricyanide monitoring without interference from biomass scattering. In addition, refractive index (RI) matching with saccharose reduced bacterial light scattering by around 50%, expanding the analytical linear range in the determination of absorbent molecules. With this method, different toxicants such as metals and organic compounds were analyzed with good sensitivity. Half maximal effective concentrations (EC50) obtained after the 10 min bioassay (2.9, 1.0, 0.7 and 18.3 mg L(-1) for copper, zinc, acetic acid and 2-phenylethanol, respectively) were in agreement with previously reported values for longer bioassays (around 60 min). This method represents a promising alternative for fast and sensitive water toxicity monitoring, opening the possibility of quick in situ analysis. Copyright © 2014 Elsevier B.V. All rights reserved.
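
    The dual-wavelength correction can be written in one line: with A550 carrying biomass only and A405 carrying ferricyanide plus biomass, the ferricyanide signal is A405 − k·A550 for a biomass absorbance ratio k, and the kinetic read-out is the decay slope of that corrected signal. A synthetic sketch (the decay rate, baseline, and k are invented numbers, not the paper's calibration):

```python
import numpy as np

t = np.arange(0, 11, 1.0)                 # assay time, minutes
biomass = 0.20 + 0.002 * t                # slowly growing scattering baseline
ferri_true = 0.80 * np.exp(-0.08 * t)     # ferricyanide reduction kinetics

k = 1.1                                   # assumed biomass ratio A405/A550
A550 = biomass                            # biomass-only channel
A405 = ferri_true + k * biomass           # mixed channel

ferri_est = A405 - k * A550               # dual-wavelength corrected signal

# Kinetic read-out: slope of the log-decay; toxicants slow the reduction rate.
rate = np.polyfit(t, np.log(ferri_est), 1)[0]
```

    Fitting a rate to the whole time course, rather than reading a single absorbance, is what buys the sensitivity improvement the abstract claims: the biomass drift cancels and every time point contributes to the estimate.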

  2. Validation of a next-generation sequencing assay for clinical molecular oncology.

    PubMed

    Cottrell, Catherine E; Al-Kateb, Hussam; Bredemeyer, Andrew J; Duncavage, Eric J; Spencer, David H; Abel, Haley J; Lockwood, Christina M; Hagemann, Ian S; O'Guin, Stephanie M; Burcea, Lauren C; Sawyer, Christopher S; Oschwald, Dayna M; Stratman, Jennifer L; Sher, Dorie A; Johnson, Mark R; Brown, Justin T; Cliften, Paul F; George, Bijoy; McIntosh, Leslie D; Shrivastava, Savita; Nguyen, Tudung T; Payton, Jacqueline E; Watson, Mark A; Crosby, Seth D; Head, Richard D; Mitra, Robi D; Nagarajan, Rakesh; Kulkarni, Shashikant; Seibert, Karen; Virgin, Herbert W; Milbrandt, Jeffrey; Pfeifer, John D

    2014-01-01

    Currently, oncology testing includes molecular studies and cytogenetic analysis to detect genetic aberrations of clinical significance. Next-generation sequencing (NGS) allows rapid analysis of multiple genes for clinically actionable somatic variants. The WUCaMP assay uses targeted capture for NGS analysis of 25 cancer-associated genes to detect mutations at actionable loci. We present clinical validation of the assay and a detailed framework for design and validation of similar clinical assays. Deep sequencing of 78 tumor specimens (≥ 1000× average unique coverage across the capture region) achieved high sensitivity for detecting somatic variants at low allele fraction (AF). Validation revealed sensitivities and specificities of 100% for detection of single-nucleotide variants (SNVs) within coding regions, compared with SNP array sequence data (95% CI = 83.4-100.0 for sensitivity and 94.2-100.0 for specificity) or whole-genome sequencing (95% CI = 89.1-100.0 for sensitivity and 99.9-100.0 for specificity) of HapMap samples. Sensitivity for detecting variants at an observed 10% AF was 100% (95% CI = 93.2-100.0) in HapMap mixes. Analysis of 15 masked specimens harboring clinically reported variants yielded concordant calls for 13/13 variants at AF of ≥ 15%. The WUCaMP assay is a robust and sensitive method to detect somatic variants of clinical significance in molecular oncology laboratories, with reduced time and cost of genetic analysis allowing for strategic patient management. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
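    The sensitivity/specificity figures with 95% CIs quoted above can be reproduced in form (not in value) with standard binomial machinery. This is a generic sketch; the Wilson score interval is an assumption, since the abstract does not state which interval the authors used:

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    return tp / (tp + fn), tn / (tn + fp)

def wilson_ci(successes, n, z=1.96):
    # Wilson score interval for a binomial proportion; one common
    # choice for CIs on assay sensitivity and specificity, and better
    # behaved than the normal approximation when p is near 0 or 1.
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half
```

    Note how a perfect 20/20 result still yields a lower confidence bound well below 100%, which is why validation studies report the interval and not just the point estimate.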

  3. Prevalence of and risk factors for latex sensitization in patients with spina bifida.

    PubMed

    Bernardini, R; Novembre, E; Lombardi, E; Mezzetti, P; Cianferoni, A; Danti, A D; Mercurella, A; Vierucci, A

    1998-11-01

    We determined the prevalence of and risk factors for latex sensitization in patients with spina bifida. A total of 59 consecutive subjects 2 to 40 years old with spina bifida answered a questionnaire and underwent a latex skin prick test and determination of serum IgE specific for latex by RAST CAP radioimmunoassay. We also measured total serum IgE and performed skin prick tests with common airborne and food allergens. In addition, skin prick plus prick tests were done with fresh foods, including kiwi, pear, orange, almond, pineapple, apple, tomato and banana. Latex sensitization was present in 15 patients (25%) according to the presence of IgE specific to latex, as detected by skin prick test in 9 and/or RAST CAP in 13. Five latex-sensitized patients (33.3%) had clinical manifestations, such as urticaria, conjunctivitis, angioedema, rhinitis and bronchial asthma, while using a latex glove or inflating a latex balloon. Atopy was present in 21 patients (35.6%). In 14 patients (23%) 1 or more skin tests were positive for fresh foods using the prick plus prick technique. Tomato, kiwi and pear were the most common skin test positive foods. Univariate analysis revealed that a history of 5 or more operations, atopy, and positive prick plus prick test results for pear and kiwi were significantly associated with latex sensitization. Multivariate analysis demonstrated that only atopy and a history of 5 or more operations were significantly and independently associated with latex sensitization. A quarter of the patients with spina bifida were sensitized to latex. Atopy and an elevated number of operations were significant and independent predictors of latex sensitization in these cases.

  4. The changing face of food hypersensitivity in an Asian community.

    PubMed

    Chiang, W C; Kidon, M I; Liew, W K; Goh, A; Tang, J P L; Chay, O M

    2007-07-01

    Food allergy seems to be increasing in Asia, as it is worldwide. Our aim was to characterize food protein sensitization patterns in a population of Asian children with possible food allergy. Children presenting to our allergy clinic over 3 years with symptomatic allergic disease and at least one specific food allergen sensitization documented on skin prick testing were included in the analysis. Two hundred and twenty-seven patients fulfilled the inclusion criteria. Ninety patients (40%) had positive skin tests to egg, 87 (39%) to shellfish, 62 (27.3%) to peanut, 30 (13.2%) to fish, 27 (11.8%) to cow's milk, 21 (9.3%) to sesame, 13 (3.7%) to wheat and eight (3.2%) to soy. Peanut was the third most common sensitizing allergen, seen mostly in young atopic children with multiple food hypersensitivities and a family history of atopic dermatitis. The median reported age of first exposure to fish and shellfish was 6 and 12 months, respectively. The mean age at presentation of children with shellfish hypersensitivity was 6.7 years. The likelihood of shellfish sensitization was increased in children with concomitant sensitization to cockroaches. In contrast to previously reported low peanut allergy rates in Asia, in our review peanut sensitization was present in 27% (62/227) of food-allergic children, mostly in patients with multiple food protein sensitizations. Temporal patterns of first exposure of infants to fish and shellfish are unique to the Asian diet. Shellfish are a major sensitizing food source in Asian children, especially in allergic rhinitis patients sensitized to cockroaches.

  5. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE PAGES

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-31

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as degree of rate control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  6. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis.

    PubMed

    Hoffmann, Max J; Engelmann, Felix; Matera, Sebastian

    2017-01-28

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
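    The degree of rate control discussed in this record is, in essence, a normalized derivative of the turnover frequency with respect to one rate constant. The toy two-step model below is invented for illustration (it is not the paper's RuO2(110) mechanism); the sketch shows the straightforward finite-difference estimator whose cost, for stochastic simulations, motivates the authors' three-stage approach:

```python
def tof_two_step(k1, k2):
    # Toy steady-state turnover frequency for a two-step cycle in which
    # adsorption (k1) and surface reaction (k2) act like resistances in
    # series; a cheap deterministic stand-in for a kMC average.
    return k1 * k2 / (k1 + k2)

def degree_of_rate_control(tof, ks, i, rel=1e-6):
    # Central-difference estimate of X_i = (k_i / TOF) * dTOF/dk_i,
    # the degree of rate control of elementary step i.
    hi, lo = list(ks), list(ks)
    hi[i] *= 1.0 + rel
    lo[i] *= 1.0 - rel
    return (tof(*hi) - tof(*lo)) / (2.0 * rel * tof(*ks))
```

    For this toy cycle the two indices are k2/(k1+k2) and k1/(k1+k2), summing to one, which is a standard consistency check for degree-of-rate-control calculations.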

  7. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past the application of sensitivity analysis, such as degree of rate control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In a first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally we adopt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice based models. This allows efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  8. A practical approach to the sensitivity analysis for kinetic Monte Carlo simulation of heterogeneous catalysis

    NASA Astrophysics Data System (ADS)

    Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian

    2017-01-01

    Lattice kinetic Monte Carlo simulations have become a vital tool for predictive quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, being the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as degree of rate control, has been hampered by its exuberant computational effort required to accurately sample numerical derivatives of a property that is obtained from a stochastic simulation method. In this study, we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models as we demonstrate using the CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which only yield negligible sensitivity. Then we employ an estimator based on the linear response theory for calculating the sensitivity measure for non-critical conditions which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure for lattice based models. This allows for an efficient evaluation even in critical regions near a second order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.

  9. Sensitivity of Combustion-Acoustic Instabilities to Boundary Conditions for Premixed Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo

    1995-01-01

    Premixed combustors, which are being considered for low-NOx engines, are susceptible to instabilities caused by feedback between pressure perturbations and combustion. This feedback can cause damaging mechanical vibrations of the system as well as degrade the emissions characteristics and combustion efficiency. In a lean combustor, instabilities can also lead to blowout. A model was developed to perform linear combustion-acoustic stability analysis using detailed chemical kinetic mechanisms. The Lewis Kinetics and Sensitivity Analysis Code, LSENS, was used to calculate the sensitivities of the heat release rate to perturbations in density and temperature. In the present work, the mean flow velocity was assumed to be small relative to the speed of sound. Results of this model showed that the regions of perturbation growth are most sensitive to the reflectivity of the boundary when reflectivities are close to unity.

  10. Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram, corresponding to the proportion of high-sensitivity vehicles on the road. To investigate the propagating behavior of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability improves with this fraction parameter. The theoretical findings are verified via direct numerical simulation.

  11. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design method is used as a screening tool to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The results show that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
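    A two-level fractional factorial screen of the kind described can be sketched as follows. The 2^(3-1) design with generator C = AB and the response in the usage test are illustrative stand-ins, not the paper's six-parameter furnace model:

```python
# Four coded runs of a 2^(3-1) fractional factorial design; the third
# factor is confounded with the AB interaction (generator C = AB).
DESIGN = [(-1, -1, +1), (+1, -1, -1), (-1, +1, -1), (+1, +1, +1)]

def main_effects(response):
    # Main effect of each factor: mean response at its +1 level minus
    # mean response at its -1 level, evaluated over the design runs.
    ys = [response(*run) for run in DESIGN]
    effects = []
    for j in range(3):
        hi = [y for run, y in zip(DESIGN, ys) if run[j] == +1]
        lo = [y for run, y in zip(DESIGN, ys) if run[j] == -1]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

    Factors with effects near zero are screened out, which is exactly the parameter-reduction role the abstract describes; the aliasing means a "significant" C effect could equally be an AB interaction, a known trade-off of fractional designs.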

  12. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity, and the desire to extend TSUNAMI analysis to advanced applications, have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  13. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
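    The Pearson and Spearman coefficients used in the study are easy to compute from sampled (input, response) pairs; a minimal sketch, with tie handling omitted for brevity:

```python
import math

def pearson(xs, ys):
    # Pearson product-moment correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return sxy / (sx * sy)

def spearman(xs, ys):
    # Spearman = Pearson correlation of the ranks (ties ignored here).
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))
```

    The contrast matters for sensitivity studies like this one: a monotone but nonlinear input-response relationship gives a Spearman coefficient of exactly 1 while Pearson stays below 1, which is one reason both are reported.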

  14. Fast and sensitive trace analysis of malachite green using a surface-enhanced Raman microfluidic sensor.

    PubMed

    Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee

    2007-05-08

    A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under the optimal flow velocity, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm(-1) in its SERS spectrum. The limit of detection using the SERS microfluidic sensor was found to be below the 1-2 ppb level; this low detection limit is comparable to that of LC-MS detection. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.
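    The quantitative step, peak height versus concentration, amounts to fitting a calibration line, and a 3-sigma convention is one common way to turn that line into a detection limit. This is a generic sketch; the abstract does not state how its LOD was derived, so the 3-sigma rule here is an assumption:

```python
def calibration_fit(conc, signal):
    # Ordinary least-squares calibration line:
    # signal = slope * conc + intercept.
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    slope = sxy / sxx
    return slope, my - slope * mx

def limit_of_detection(slope, blank_sd):
    # Common 3-sigma convention: LOD = 3 * sd(blank signal) / slope.
    return 3.0 * blank_sd / slope
```

    With peak heights in arbitrary intensity units and concentrations in ppb, the returned LOD is directly in ppb, the unit quoted in the abstract.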

  15. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and a macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
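    Latin hypercube sampling, one of the surveyed methods, stratifies each parameter's range so that every equal-probability bin is sampled exactly once per dimension. A minimal sketch for the unit hypercube (scaling to actual parameter ranges would follow):

```python
import random

def latin_hypercube(n, dims, seed=0):
    # One point per equal-probability stratum in every dimension, with
    # an independent random permutation of strata per dimension, so each
    # marginal is evenly covered with only n model evaluations.
    rng = random.Random(seed)
    sample = [[0.0] * dims for _ in range(n)]
    for d in range(dims):
        perm = list(range(n))
        rng.shuffle(perm)
        for i in range(n):
            sample[i][d] = (perm[i] + rng.random()) / n
    return sample
```

    In the LHS-PRCC workflow the review describes, these samples are run through the transmission model and the rank-transformed inputs and outputs are then correlated, with the other parameters partialled out.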

  16. Pseudotargeted MS Method for the Sensitive Analysis of Protein Phosphorylation in Protein Complexes.

    PubMed

    Lyu, Jiawen; Wang, Yan; Mao, Jiawei; Yao, Yating; Wang, Shujuan; Zheng, Yong; Ye, Mingliang

    2018-05-15

    In this study, we present an enrichment-free approach for the sensitive analysis of protein phosphorylation in minute amounts of sample, such as purified protein complexes. The method takes advantage of the high sensitivity of parallel reaction monitoring (PRM). Specifically, low-confidence phosphopeptides identified from the data-dependent acquisition (DDA) data set were used to build a pseudotargeted list for PRM analysis, allowing additional phosphopeptides to be identified with high confidence. The development of this targeted approach is straightforward, as the same sample and the same LC system are used for the discovery and targeted analysis phases. No sample fractionation or enrichment was required for the discovery phase, which allows this method to analyze minute amounts of sample. We applied this pseudotargeted MS method to quantitatively examine phosphopeptides in affinity-purified endogenous Shc1 protein complexes at four temporal stages of EGF signaling and identified 82 phospho-sites. To our knowledge, this is the highest number of phospho-sites identified from these protein complexes. This pseudotargeted MS method is highly sensitive in the identification of low-abundance phosphopeptides and could be a powerful tool to study the phosphorylation-regulated assembly of protein complexes.

  17. MARS approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of a model's output variables according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period, using the multivariate adaptive regression spline (MARS) as a meta-model based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
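    The variance of conditional expectation underlying such first-order indices can be estimated crudely, without MARS, by binning Monte Carlo samples on one input. This is a sketch of that brute-force baseline, not the authors' analytic MARS evaluation:

```python
def first_order_index(xs, ys, bins=10):
    # Binned estimate of Var(E[Y|X]) / Var(Y): the variance of the
    # conditional expectation (VCE), normalized by the total output
    # variance. xs are assumed to lie in [0, 1).
    n = len(ys)
    my = sum(ys) / n
    var_y = sum((y - my) ** 2 for y in ys) / n
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(int(x * bins), bins - 1)].append(y)
    vce = sum(
        len(g) / n * (sum(g) / len(g) - my) ** 2 for g in groups if g
    )
    return vce / var_y
```

    An index near 1 means the input alone nearly determines the output; near 0, the output variance is driven by other inputs. The binning estimator needs many model runs per input, which is precisely the cost the paper's analytic MARS-based evaluation is designed to avoid.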

  18. Fragrances and other materials in deodorants: search for potentially sensitizing molecules using combined GC-MS and structure activity relationship (SAR) analysis.

    PubMed

    Rastogi, S C; Lepoittevin, J P; Johansen, J D; Frosch, P J; Menné, T; Bruze, M; Dreier, B; Andersen, K E; White, I R

    1998-12-01

    Deodorants are one of the most frequently used types of cosmetics and are a source of allergic contact dermatitis. Therefore, a gas chromatography-mass spectrometry (GC-MS) analysis of 71 deodorants was performed to identify the fragrance and non-fragrance materials present in marketed deodorants. Furthermore, the sensitizing potential of these molecules was evaluated using structure-activity relationship (SAR) analysis, based on the presence of one or more chemically reactive sites in the chemical structure associated with sensitizing potential. Among the many different substances used to formulate cosmetic products (over 3500), 226 chemicals were identified in the sample of 71 deodorants. 84 molecules were found to contain at least 1 structural alert, and 70 to belong to, or be susceptible to being metabolized into, the chemical group of aldehydes, ketones and alpha,beta-unsaturated aldehydes, ketones or esters. The combination of GC-MS and SAR analysis could be helpful in the selection of substances for supplementary investigations of sensitizing properties. Thus, it may be a valuable tool in the management of contact allergy to deodorants and in producing new deodorants with a decreased propensity to cause contact allergy.

  19. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distributions of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed by manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on a widely available toolkit developed at Sandia, called DAKOTA, that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  20. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as a static system; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and a perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
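    The kind of sensitivity coefficient described for static problems, the derivative of the solution with respect to an initial value, can be approximated for any ODE integrator by finite differences. A scalar sketch using central differences, which is not necessarily how LSENS itself obtains these coefficients:

```python
def rk4(f, y0, t_end, steps=1000):
    # Classical fourth-order Runge-Kutta for the scalar ODE dy/dt = f(y).
    h = t_end / steps
    y = y0
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * h * k1)
        k3 = f(y + 0.5 * h * k2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return y

def sensitivity_to_initial(f, y0, t_end, eps=1e-6):
    # Central-difference estimate of the sensitivity coefficient
    # d y(t_end) / d y(0): rerun the integration with the initial
    # value perturbed up and down.
    return (rk4(f, y0 + eps, t_end) - rk4(f, y0 - eps, t_end)) / (2.0 * eps)
```

    For linear decay dy/dt = -k*y the exact coefficient is exp(-k*t), which makes a convenient check; the same differencing idea extends to the rate-coefficient parameters mentioned in the abstract by perturbing those instead of y0.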

  1. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of the lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to identify the input data that most affect the model's predictions. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually conducted with a Monte Carlo approach which is computationally very demanding for non-linear complex models. 
Therefore, the global sensitivity index has been computed using an expansion of the model outputs in a neighborhood of the reference parameter values with a second-order approximation. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model was least sensitive are the basal sliding coefficient and the mean ice shelf viscosity.
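The variance-based global index described above can be illustrated with a toy "pick-freeze" Monte Carlo estimator. The three-parameter test function below is a hypothetical stand-in, not the ice-sheet model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical non-linear model with an x1-x3 interaction term.
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d = 100_000, 3
A = rng.uniform(size=(n, d))      # two independent parameter samples
B = rng.uniform(size=(n, d))
yA, yB = model(A), model(B)
f0sq = yA.mean() * yB.mean()      # estimate of (E[Y])^2
V = np.concatenate([yA, yB]).var()

S = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]            # "pick-freeze": AB shares only factor i with B
    S.append((np.mean(yB * model(AB)) - f0sq) / V)  # first-order Sobol index
```

For this toy function the second factor dominates, and the three first-order indices sum to slightly less than one because part of the variance sits in the x1-x3 interaction, which only a total-effect index would capture.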

  2. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. 
Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
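One of the limiting cases named above, Morris elementary-effects screening, can be sketched as a one-factor-at-a-time trajectory design. The test function, number of trajectories, and grid settings here are all illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    # Hypothetical test function standing in for an EES model.
    return np.sin(x[0]) + 7.0 * np.sin(x[1]) ** 2 + 0.1 * x[2] ** 4 * np.sin(x[0])

def morris_screen(f, d, n_traj=50, levels=8):
    """Random one-at-a-time trajectories; returns mu* (mean |elementary effect|)."""
    delta = levels / (2.0 * (levels - 1))
    ee = [[] for _ in range(d)]
    for _ in range(n_traj):
        x = rng.integers(0, levels - 1, d) / (levels - 1)  # random grid point
        y = f(x)
        for i in rng.permutation(d):       # perturb each factor once
            x2 = x.copy()
            x2[i] = x[i] + delta if x[i] + delta <= 1.0 else x[i] - delta
            y2 = f(x2)
            ee[i].append((y2 - y) / (x2[i] - x[i]))
            x, y = x2, y2
    return np.array([np.mean(np.abs(e)) for e in ee])

mu_star = morris_screen(model, 3)
```

With only n_traj * (d + 1) model runs, mu* ranks factor importance; that cheapness, at the price of a coarser notion of "sensitivity" than variance-based indices, is exactly the effectiveness/efficiency trade-off the abstract discusses.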

  3. Sensitivity of species to chemicals: dose-response characteristics for various test types (cold-blooded LC50, cold-blooded LR50 and warm-blooded LD50) and modes of action

    EPA Science Inventory

    While sensitivity of model species to common toxicants has been addressed, a systematic analysis of inter-species variability for different test types, modes of action and species is as yet lacking. Hence, the aim of the present study was to identify similarities and differences ...

  4. Towards a Completely Implantable, Light-Sensitive Intraocular Retinal Prosthesis

    DTIC Science & Technology

    2001-10-25

    An electronic retinal prosthesis is under development to treat retinitis pigmentosa and age-related macular degeneration, two presently incurable... "Preservation of the inner retina in retinitis pigmentosa. A morphometric analysis," Arch Ophthalmol, vol. 115, no. 4, pp. 511-515, Apr. 1997... Towards a completely implantable, light-sensitive intraocular retinal prosthesis. M.S. Humayun, J.D. Weiland, B. Justus, C. Merrit, J. Whalen, D

  5. Interdependency of Reactive Oxygen Species generating and scavenging system in salt sensitive and salt tolerant cultivars of rice.

    PubMed

    Kaur, Navdeep; Dhawan, Manish; Sharma, Isha; Pati, Pratap Kumar

    2016-06-10

    Salinity stress is a major constraint on global rice production, and serious efforts are therefore being undertaken to devise remedial strategies. Comparative analysis of the differential responses of salt-sensitive and salt-tolerant lines is a judicious approach to obtaining essential clues towards understanding the acquisition of salinity tolerance in rice plants. However, adaptation to salt stress is a fairly complex process and operates through different mechanisms. Among the various mechanisms involved, reactive oxygen species (ROS) mediated salinity tolerance is believed to be critical, as it evokes a cascade of responses related to stress tolerance. Against this background, the present paper evaluates, for the first time, the ROS-generating and ROS-scavenging systems in tandem in both salt-sensitive and salt-tolerant cultivars of rice, to gain better insight into salinity stress adaptation. Comparative analysis of ROS indicates higher levels of hydrogen peroxide (H2O2) and lower levels of superoxide ions (O(2-)) in the salt-tolerant as compared to the salt-sensitive cultivars. The specific activity of the ROS-generating enzyme NADPH oxidase was also found to be higher in the tolerant cultivars. Further, the activities of various enzymes involved in the enzymatic and non-enzymatic antioxidant defence systems were mostly higher in the tolerant cultivars. Transcript-level analysis of the antioxidant enzymes was consistent with the enzymatic activities. Other stress markers such as proline were higher in the tolerant varieties, whereas the levels of malondialdehyde (MDA) equivalents and chlorophyll content were higher in the sensitive ones. The present study showed significant differences in the levels of ROS production and antioxidant enzyme activities between sensitive and tolerant cultivars, suggesting their possible role in providing natural salt tolerance to selected cultivars of rice. 
Our study demonstrates that the cellular machinery for ROS production and scavenging works in an interdependent manner to offer better salt stress adaptation in rice. The present work further highlights that the elevated level of H2O2, which is considered a key determinant in conferring salt stress tolerance to rice, might have originated through an alternative route: the photocatalytic activity of chlorophyll.

  6. Automatic detection of DNA double strand breaks after irradiation using a γH2AX assay.

    PubMed

    Hohmann, Tim; Kessler, Jacqueline; Grabiec, Urszula; Bache, Matthias; Vordermark, Dyrk; Dehghani, Faramarz

    2018-05-01

    Radiation therapy is among the most common approaches to cancer treatment and leads, among other effects, to DNA damage such as double-strand breaks (DSB). DSB can be used as a marker for the effect of radiation on cells. For visualizing and assessing the extent of DNA damage, the γH2AX foci assay is frequently used. Analysis of the γH2AX foci assay remains complicated, as the number of γH2AX foci has to be counted. The quantification is mostly done manually, which is time-consuming and leads to person-dependent variation. Therefore, we present a method to automatically count the foci inside nuclei in fluorescent images, making DSB analysis faster, easier, and highly reliable. First, nuclei were detected in the fluorescent images. Afterwards, the nuclei were analyzed independently from each other with a local thresholding algorithm. This approach allowed accounting for different levels of noise and detecting the foci inside each nucleus using a Hough transformation to search for circles. The presented algorithm was able to correctly classify most foci in cases of "high" and "average" image quality (sensitivity>0.8) with a low rate of false positive detections (positive predictive value (PPV)>0.98). In cases of "low" image quality the approach had a decreased sensitivity (0.7-0.9), depending on the manual reference counter. The PPV remained high (PPV>0.91). Compared to other automatic approaches, the presented algorithm had a higher sensitivity and PPV. The automatic foci detection algorithm was capable of detecting foci with high sensitivity and PPV, and can thus be used for automatic analysis of images of varying quality.
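The per-nucleus pipeline (a local threshold computed inside each nucleus, then a search for compact bright spots) can be sketched on a synthetic image. Here connected-component labeling stands in for the paper's Hough circle search, and the image, threshold factor `k`, and `min_pixels` cutoff are all illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(2)

# Synthetic 64x64 "nucleus" image: dim background noise plus three bright foci.
img = rng.normal(100.0, 5.0, (64, 64))
for (r, c) in [(15, 20), (40, 45), (50, 12)]:
    img[r - 1:r + 2, c - 1:c + 2] += 80.0   # 3x3 bright focus

def count_foci(nucleus, k=4.0, min_pixels=4):
    """Local threshold at mean + k*std of this nucleus, then count
    connected bright components as foci (stand-in for circle search)."""
    thresh = nucleus.mean() + k * nucleus.std()
    mask = nucleus > thresh
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int(np.sum(sizes >= min_pixels))

print(count_foci(img))  # → 3
```

Because the threshold is computed per nucleus rather than globally, nuclei with different background intensities or noise levels are handled consistently, which is the key point of the local-thresholding step described above.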

  7. Landscape sensitivity in a dynamic environment

    NASA Astrophysics Data System (ADS)

    Lin, Jiun-Chuan; Jen, Chia-Horn

    2010-05-01

    Landscape sensitivity at different scales, and across different topics, is presented in this study. The paper is primarily methodological. According to environmental records for south-eastern Asia, environmental change is closely related to five factors: the scale of the influenced area, the background environmental characteristics, the magnitude and frequency of events, the thresholds at which hazards occur, and time. This paper attempts to demonstrate these five points using historical and present-day data. It is found that landscape sensitivity is highly related to the degree of vulnerability of the land and to the processes acting on the ground, including human activities. The scale of sensitivity, and the evaluation of sensitivities, is demonstrated using data from around East Asia. The classification methods are based mainly on the analysis of environmental data and hazard records. Trends in rainfall records, rainfall intensity and temperature change, the magnitude and frequency of earthquakes, dust storms, days of drought, and the number of hazards show many coincidences with landscape sensitivities. In conclusion, landscape sensitivity can be classified into four groups: physically stable, physically unstable, unstable, and extremely unstable. This paper explains the differences between them.

  8. The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.

    PubMed

    Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing

    2017-07-01

    This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-disc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine if publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess the diagnostic efficacy of the above modalities in cases in which a high degree of between-study heterogeneity was noted with respect to their diagnostic efficacies. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging vs the sensitivity and specificity of white-light bronchoscopy for the detection of early and invasive lung cancer were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively, and sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy with respect to detecting early and invasive lung cancer and that the results of the study were stable. 
Narrow-band imaging was superior to white-light bronchoscopy with respect to detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ significantly.
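The pooled-rate idea behind those figures can be illustrated with a simple fixed-effect pooling of 2x2 counts. The study counts below are invented for illustration, and the paper's actual analysis used a random-effects model, so this is only a sketch of the arithmetic:

```python
import math

# Hypothetical per-study counts (tp, fn, tn, fp); not the paper's data.
studies = [(45, 6, 60, 14), (30, 5, 41, 10), (52, 9, 70, 17)]

def pooled_rate(events, totals):
    """Fixed-effect pooled proportion with a normal-approximation 95% CI."""
    p = sum(events) / sum(totals)
    se = math.sqrt(p * (1 - p) / sum(totals))
    return p, (p - 1.96 * se, p + 1.96 * se)

sens, sens_ci = pooled_rate([tp for tp, fn, tn, fp in studies],
                            [tp + fn for tp, fn, tn, fp in studies])
spec, spec_ci = pooled_rate([tn for tp, fn, tn, fp in studies],
                            [tn + fp for tp, fn, tn, fp in studies])
```

Pooled sensitivity is total true positives over total diseased, and pooled specificity total true negatives over total non-diseased; a random-effects model would instead weight studies by both within- and between-study variance.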

  9. Meta-analysis of diagnostic test data: a bivariate Bayesian modeling approach.

    PubMed

    Verde, Pablo E

    2010-12-30

    In the last decades, the amount of published results on clinical diagnostic tests has expanded very rapidly. The counterpart to this development has been the formal evaluation and synthesis of diagnostic results. However, published results present substantial heterogeneity, and they can be regarded as so far removed from the classical domain of meta-analysis that they provide a rather severe test of classical statistical methods. Recently, bivariate random-effects meta-analytic methods, which model the pairs of sensitivities and specificities, have been presented from the classical point of view. In this work a bivariate Bayesian modeling approach is presented. This approach substantially extends the scope of classical bivariate methods by allowing the structural distribution of the random effects to depend on multiple sources of variability. The meta-analysis is summarized by the predictive posterior distributions for sensitivity and specificity. This new approach also allows substantial model checking, model diagnostics, and model selection. Statistical computations are implemented in public-domain statistical software (WinBUGS and R) and illustrated with real data examples. Copyright © 2010 John Wiley & Sons, Ltd.

  10. A GIS based spatially-explicit sensitivity and uncertainty analysis approach for multi-criteria decision analysis

    PubMed Central

    Feizizadeh, Bakhtiar; Jankowski, Piotr; Blaschke, Thomas

    2014-01-01

    GIS multicriteria decision analysis (MCDA) techniques are increasingly used in landslide susceptibility mapping for the prediction of future hazards, land use planning, as well as for hazard preparedness. However, the uncertainties associated with MCDA techniques are inevitable and model outcomes are open to multiple types of uncertainty. In this paper, we present a systematic approach to uncertainty and sensitivity analysis. We assess the uncertainty of landslide susceptibility maps produced with GIS-MCDA techniques. A new spatially-explicit approach and Dempster–Shafer Theory (DST) are employed to assess the uncertainties associated with two MCDA techniques, namely Analytical Hierarchical Process (AHP) and Ordered Weighted Averaging (OWA) implemented in GIS. The methodology is composed of three different phases. First, weights are computed to express the relative importance of factors (criteria) for landslide susceptibility. Next, the uncertainty and sensitivity of landslide susceptibility is analyzed as a function of weights using Monte Carlo Simulation and Global Sensitivity Analysis. Finally, the results are validated using a landslide inventory database and by applying DST. The comparisons of the obtained landslide susceptibility maps of both MCDA techniques with known landslides show that the AHP outperforms OWA. However, the OWA-generated landslide susceptibility map shows lower uncertainty than the AHP-generated map. The results demonstrate that further improvement in the accuracy of GIS-based MCDA can be achieved by employing an integrated uncertainty–sensitivity analysis approach, in which the uncertainty of the landslide susceptibility model is decomposed and attributed to the model's criteria weights. PMID:25843987
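The Monte Carlo step, in which criteria weights are perturbed and the spread of the weighted-sum susceptibility score is recorded per map cell, can be sketched as follows. The scores, nominal weights, and Gaussian perturbation magnitude are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical criteria scores for 4 map cells x 3 criteria (0..1 scale),
# and nominal AHP-style weights (summing to 1).
scores = np.array([[0.9, 0.2, 0.4],
                   [0.3, 0.8, 0.5],
                   [0.6, 0.6, 0.6],
                   [0.1, 0.9, 0.7]])
w0 = np.array([0.5, 0.3, 0.2])

def susceptibility_uncertainty(scores, w0, n=5000, sigma=0.05):
    """Perturb the weights, renormalize, and collect the per-cell spread
    of the weighted-sum susceptibility score."""
    out = np.empty((n, scores.shape[0]))
    for j in range(n):
        w = np.clip(w0 + rng.normal(0.0, sigma, w0.size), 1e-6, None)
        w /= w.sum()                 # weights must still sum to 1
        out[j] = scores @ w
    return out.mean(axis=0), out.std(axis=0)

mean_s, std_s = susceptibility_uncertainty(scores, w0)
```

A useful sanity check built into this toy example: the cell whose criteria scores are all equal is invariant under any normalized weight vector, so its standard deviation is numerically zero, while cells with strongly unequal scores carry the weight uncertainty into the susceptibility map.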

  11. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  12. Analysis techniques for multivariate root loci. [a tool in linear control systems]

    NASA Technical Reports Server (NTRS)

    Thompson, P. M.; Stein, G.; Laub, A. J.

    1980-01-01

    Analysis techniques are developed for the multivariable root locus and the multivariable optimal root locus. The generalized eigenvalue problem is used to compute angles and sensitivities for both types of loci, and an algorithm is presented that determines the asymptotic properties of the optimal root locus.

  13. Infiltration modeling guidelines for commercial building energy analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowri, Krishnan; Winiarski, David W.; Jarnagin, Ronald E.

    This report presents a methodology for modeling air infiltration in EnergyPlus to account for envelope air barrier characteristics. Based on a review of various infiltration modeling options available in EnergyPlus and a sensitivity analysis, the linear wind velocity coefficient based on the DOE-2 infiltration model is recommended. The methodology described in this report can be used to calculate the EnergyPlus infiltration input for any given building-level infiltration rate specified at a known pressure difference. The sensitivity analysis shows that EnergyPlus calculates the wind speed based on zone altitude, and that the linear wind velocity coefficient represents the variation in infiltration heat loss consistent with building location and weather data.

  14. Sensitive magnetic sensors without cooling in biomedical engineering.

    PubMed

    Nowak, H; Strähmel, E; Giessler, F; Rinneberg, G; Haueisen, J

    2003-01-01

    Magnetic field sensors are used in various fields of technology. In the past few years a large variety of magnetic field sensors has been established and the performance of these sensors has been improved enormously. In this review article, recent developments in the area of sensitive magnetic field sensors (resolution better than 1 nT) are presented and examined with regard to their parameters. This is done mainly from the perspective of application fields in biomedical engineering. A comparison of all commercially available sensitive magnetic field sensors shows current and prospective ranges of application.

  15. A PARAMETRIC STUDY OF BCS RF SURFACE IMPEDANCE WITH MAGNETIC FIELD USING THE XIAO CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reece, Charles E.; Xiao, Binping

    2013-09-01

    A recent new analysis of field-dependent BCS rf surface impedance based on moving Cooper pairs has been presented.[1] Using this analysis, coded in Mathematica, survey calculations have been completed which examine the sensitivities of this surface impedance to variations of the BCS material parameters and temperature. The results provide a refined description of the "best theoretical" performance available to potential applications with the corresponding materials.

  16. A cost analysis for the implementation of commonality in the family of commuter airplanes, revised

    NASA Technical Reports Server (NTRS)

    Creighton, Tom; Haddad, Rafael; Hendrich, Louis; Hensley, Doug; Morgan, Louise; Russell, Mark; Swift, Jerry

    1987-01-01

    The acquisition costs determined for the NASA family of commuter airplanes are presented. The costs of the baseline designs are presented along with the calculated savings due to commonality within the family. A sensitivity study is also presented to show the major drivers in the acquisition cost calculations. The baseline costs are calculated with the Nicolai method. A comparison is presented of the estimated costs for the commuter family with the actual prices of existing commuters. The cost calculations for the engines and counter-rotating propellers are reported. The effects of commonality on acquisition costs are calculated. The sensitivity of the acquisition cost to various costing parameters is shown. The calculations for the direct operating costs, with and without commonality, are presented.

  17. Application of support vector machine method for the analysis of absorption spectra of exhaled air of patients with broncho-pulmonary diseases

    NASA Astrophysics Data System (ADS)

    Bukreeva, Ekaterina B.; Bulanova, Anna A.; Kistenev, Yury V.; Kuzmin, Dmitry A.; Tuzikov, Sergei A.; Yumov, Evgeny L.

    2014-11-01

    The results of the joint use of laser photoacoustic spectroscopy and chemometrics methods in the gas analysis of exhaled air from patients with respiratory diseases (chronic obstructive pulmonary disease, pneumonia and lung cancer) are presented. The absorption spectra of the exhaled breath of all volunteers were measured, classification methods were applied to the scans of the absorption spectra, and the sensitivity and specificity of the classification results were determined. Pairwise classification results by nosology, with the corresponding sensitivity and specificity indices, were obtained for all investigated volunteers.

  18. Elemental Analysis in Biological Matrices Using ICP-MS.

    PubMed

    Hansen, Matthew N; Clogston, Jeffrey D

    2018-01-01

    The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.

  19. Integrated planar terahertz resonators for femtomolar sensitivity label-free detection of DNA hybridization.

    PubMed

    Nagel, Michael; Bolivar, Peter Haring; Brucherseifer, Martin; Kurz, Heinrich; Bosserhoff, Anja; Büttner, Reinhard

    2002-04-01

    A promising label-free approach for the analysis of genetic material by means of detecting the hybridization of polynucleotides with electromagnetic waves at terahertz (THz) frequencies is presented. Using an integrated waveguide approach, incorporating resonant THz structures as sample carriers and transducers for the analysis of the DNA molecules, we achieve a sensitivity down to femtomolar levels. The approach is demonstrated with time-domain ultrafast techniques based on femtosecond laser pulses for generating and electro-optically detecting broadband THz signals, although the principle can certainly be transferred to other THz technologies.

  20. Monoallelic mutation analysis (MAMA) for identifying germline mutations.

    PubMed

    Papadopoulos, N; Leach, F S; Kinzler, K W; Vogelstein, B

    1995-09-01

    Dissection of germline mutations in a sensitive and specific manner presents a continuing challenge. In dominantly inherited diseases, mutations occur in only one allele and are often masked by the normal allele. Here we report the development of a sensitive and specific diagnostic strategy based on somatic cell hybridization termed MAMA (monoallelic mutation analysis). We have demonstrated the utility of this strategy in two different hereditary colorectal cancer syndromes, one caused by a defective tumour suppressor gene on chromosome 5 (familial adenomatous polyposis, FAP) and the other caused by a defective mismatch repair gene on chromosome 2 (hereditary non-polyposis colorectal cancer, HNPCC).

  1. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
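The static-condensation idea itself (a Guyan reduction, which is exact for static problems when loads act only on the retained degrees of freedom) can be sketched in a few lines. The 6-DOF stiffness matrix here is a random symmetric positive-definite stand-in, not the cube or car structure from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Random symmetric positive-definite stiffness matrix (6-DOF toy structure).
A = rng.normal(size=(6, 6))
K = A @ A.T + 6 * np.eye(6)

m, s = [0, 1, 2], [3, 4, 5]          # retained (master) / condensed-out DOFs
Kmm, Kms = K[np.ix_(m, m)], K[np.ix_(m, s)]
Ksm, Kss = K[np.ix_(s, m)], K[np.ix_(s, s)]

# Guyan static condensation: the Schur complement of Kss in K.
Kc = Kmm - Kms @ np.linalg.solve(Kss, Ksm)

f = np.array([1.0, -2.0, 0.5])       # loads applied on master DOFs only
u_red = np.linalg.solve(Kc, f)                                   # condensed solve
u_full = np.linalg.solve(K, np.concatenate([f, np.zeros(3)]))[:3]  # full solve
```

The reduced and full solutions agree at the master DOFs, which is why repeated sensitivity evaluations (e.g., finite-difference perturbations of a joint stiffness) can be run on the much smaller condensed system.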

  2. Error analysis applied to several inversion techniques used for the retrieval of middle atmospheric constituents from limb-scanning MM-wave spectroscopic measurements

    NASA Technical Reports Server (NTRS)

    Puliafito, E.; Bevilacqua, R.; Olivero, J.; Degenhardt, W.

    1992-01-01

    The formal retrieval error analysis of Rodgers (1990) allows the quantitative determination of such retrieval properties as measurement error sensitivity, resolution, and inversion bias. This technique was applied to five numerical inversion techniques and two nonlinear iterative techniques used for the retrieval of middle atmospheric constituent concentrations from limb-scanning millimeter-wave spectroscopic measurements. It is found that the iterative methods have better vertical resolution, but are slightly more sensitive to measurement error than constrained matrix methods. The iterative methods converge to the exact solution, whereas two of the matrix methods under consideration have an explicit constraint, the sensitivity of the solution to the a priori profile. Tradeoffs of these retrieval characteristics are presented.

  3. Physiologically based pharmacokinetic modeling of a homologous series of barbiturates in the rat: a sensitivity analysis.

    PubMed

    Nestorov, I A; Aarons, L J; Rowland, M

    1997-08-01

    Sensitivity analysis studies the effects of the inherent variability and uncertainty in model parameters on the model outputs and may be a useful tool at all stages of the pharmacokinetic modeling process. The present study examined the sensitivity of a whole-body physiologically based pharmacokinetic (PBPK) model for the distribution kinetics of nine 5-n-alkyl-5-ethyl barbituric acids in arterial blood and 14 tissues (lung, liver, kidney, stomach, pancreas, spleen, gut, muscle, adipose, skin, bone, heart, brain, testes) after i.v. bolus administration to rats. The aims were to obtain new insights into the model used, to rank the model parameters involved according to their impact on the model outputs and to study the changes in the sensitivity induced by the increase in the lipophilicity of the homologues on ascending the series. Two approaches for sensitivity analysis have been implemented. The first, based on the Matrix Perturbation Theory, uses a sensitivity index defined as the normalized sensitivity of the 2-norm of the model compartmental matrix to perturbations in its entries. The second approach uses the traditional definition of the normalized sensitivity function as the relative change in a model state (a tissue concentration) corresponding to a relative change in a model parameter. Autosensitivity has been defined as sensitivity of a state to any of its parameters; cross-sensitivity as the sensitivity of a state to any other states' parameters. Using the two approaches, the sensitivity of representative tissue concentrations (lung, liver, kidney, stomach, gut, adipose, heart, and brain) to the following model parameters: tissue-to-unbound plasma partition coefficients, tissue blood flows, unbound renal and intrinsic hepatic clearance, permeability surface area product of the brain, have been analyzed. Both the tissues and the parameters were ranked according to their sensitivity and impact. 
The following general conclusions were drawn: (i) the overall sensitivity of the system to all parameters involved is small due to the weak connectivity of the system structure; (ii) the time course of both the auto- and cross-sensitivity functions for all tissues depends on the dynamics of the tissues themselves, e.g., the higher the perfusion of a tissue, the higher are both its cross-sensitivity to other tissues' parameters and the cross-sensitivities of other tissues to its parameters; and (iii) with a few exceptions, there is not a marked influence of the lipophilicity of the homologues on either the pattern or the values of the sensitivity functions. The estimates of the sensitivity and the subsequent tissue and parameter rankings may be extended to other drugs, sharing the same common structure of the whole body PBPK model, and having similar model parameters. Results show also that the computationally simple Matrix Perturbation Analysis should be used only when an initial idea about the sensitivity of a system is required. If comprehensive information regarding the sensitivity is needed, the numerically expensive Direct Sensitivity Analysis should be used.
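The second approach above, the traditional normalized sensitivity function, can be illustrated with a central-difference computation on a one-compartment i.v. bolus model, a hypothetical stand-in for the whole-body PBPK model:

```python
import math

def conc(t, k=0.5, V=10.0, D=100.0):
    """One-compartment i.v. bolus: C(t) = (D/V) * exp(-k*t) (illustrative model)."""
    return (D / V) * math.exp(-k * t)

def norm_sensitivity(f, t, p, h=1e-6):
    """Normalized sensitivity S = (dC/dp) * (p / C), via central differences."""
    dfdp = (f(t, p + h) - f(t, p - h)) / (2 * h)
    return dfdp * p / f(t, p)

# For this model S_k(t) = -k*t analytically, so at t=4, k=0.5 we expect -2.
print(round(norm_sensitivity(conc, 4.0, 0.5), 4))  # → -2.0
```

The normalization by p/C makes sensitivities to parameters of different units and magnitudes directly comparable, which is what permits the tissue and parameter rankings described in the abstract.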

  4. Enhancement in sensitivity of graphene-based zinc oxide assisted bimetallic surface plasmon resonance (SPR) biosensor

    NASA Astrophysics Data System (ADS)

    Kumar, Rajeev; Kushwaha, Angad S.; Srivastava, Monika; Mishra, H.; Srivastava, S. K.

    2018-03-01

    In the present communication, a highly sensitive surface plasmon resonance (SPR) biosensor in the Kretschmann configuration with alternating layers, prism/zinc oxide/silver/gold/graphene/biomolecules (ss-DNA), is presented. The proposed configuration was optimized by keeping the thicknesses of the zinc oxide (32 nm), silver (32 nm), graphene (0.34 nm) and biomolecule (100 nm) layers constant for gold layer thicknesses of 1, 3 and 5 nm. The sensitivity of the proposed SPR biosensor has been demonstrated for a number of design parameters, such as gold layer thickness, number of graphene layers, refractive index of the biomolecules and thickness of the biomolecule layer. The SPR biosensor with optimized geometry has greater sensitivity (66 deg/RIU) than the conventional (52 deg/RIU) as well as other graphene-based (53.2 deg/RIU) SPR biosensors. The effect of zinc oxide layer thickness on the sensitivity has also been analysed; the sensitivity increases significantly with increasing zinc oxide thickness, indicating that the zinc oxide intermediate layer plays an important role in improving the sensitivity of the biosensor. The sensitivity also increases with the number of graphene layers (up to nine layers).
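
    The angular sensitivity quoted above is S = d(theta_res)/d(n_s) in deg/RIU. A minimal sketch of that quantity, using only the ideal single-interface plasmon matching condition rather than the paper's full multilayer structure (the prism index and metal permittivity below are illustrative values, not the paper's):

```python
import math

def spr_angle_deg(n_prism, eps_metal, n_sensing):
    """Resonance angle (deg) from the ideal surface-plasmon matching
    condition n_p*sin(theta) = sqrt(eps_m*n_s**2 / (eps_m + n_s**2))
    for a single metal/dielectric interface."""
    n_eff = math.sqrt(eps_metal * n_sensing**2 / (eps_metal + n_sensing**2))
    return math.degrees(math.asin(n_eff / n_prism))

def sensitivity_deg_per_riu(n_prism, eps_metal, n_sensing, dn=1e-4):
    """Angular sensitivity S = d(theta_res)/d(n_s), central finite difference."""
    return (spr_angle_deg(n_prism, eps_metal, n_sensing + dn)
            - spr_angle_deg(n_prism, eps_metal, n_sensing - dn)) / (2 * dn)

# Illustrative values: glass prism (n ~ 1.515), Re(eps) of silver ~ -18
# near 633 nm, aqueous sensing medium (n ~ 1.33).
print(sensitivity_deg_per_riu(1.515, -18.0, 1.33))
```

    A full treatment of the layered structure would replace the matching condition with a transfer-matrix reflectance minimum, but the deg/RIU definition of sensitivity is the same.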

  5. The mechanistic model, GoMDOM: Development, calibration and sensitivity analysis

    EPA Science Inventory

    This presentation will be in a series of Gulf Hypoxia modeling presentations which will be used to: 1) aid NOAA in informing scientific directions and funding decisions for their cooperators and 2) a Technical Review of all models will be provided to the Mississippi River Nutrie...

  6. Phase 1 of the automated array assembly task of the low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Pryor, R. A.; Grenon, L. A.; Coleman, M. G.

    1978-01-01

    The results of a study of process variables and solar cell variables are presented. Interactions between variables and their effects upon control ranges of the variables are identified. The results of a cost analysis for manufacturing solar cells are discussed. The cost analysis includes a sensitivity analysis of a number of cost factors.

  7. Impact of Definitions of FIA Variables and Compilation Procedures on Inventory Compilation Results in Georgia

    Treesearch

    Brock Stewart; Chris J. Cieszewski; Michal Zasada

    2005-01-01

    This paper presents a sensitivity analysis of the impact of various definitions and inclusions of different variables in the Forest Inventory and Analysis (FIA) inventory on data compilation results. FIA manuals have been changing recently to make the inventory consistent between all the States. Our analysis demonstrates the importance (or insignificance) of different...

  8. Use of a Process Analysis Tool for Diagnostic Study on Fine Particulate Matter Predictions in the U.S.-Part II: Analysis and Sensitivity Simulations

    EPA Science Inventory

    Following the Part I paper that described an application of the U.S. EPA Models-3/Community Multiscale Air Quality (CMAQ) modeling system to the 1999 Southern Oxidants Study episode, this paper presents results from process analysis (PA) using the PA tool embedded in CMAQ and s...

  9. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
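
    The conventional line distribution factors mentioned above can be illustrated with a toy DC power flow. This sketch computes injection shift factors, i.e. the sensitivity of each line flow to a balanced injection, for a hypothetical 3-bus network (all susceptances and injections invented); it is the special case, not the thesis's general margin-sensitivity formulas:

```python
import numpy as np

# (from_bus, to_bus, susceptance) -- hypothetical 3-bus network.
lines = [(0, 1, 10.0), (1, 2, 8.0), (0, 2, 5.0)]
n_bus, slack = 3, 0

# Build the DC power-flow susceptance matrix B.
B = np.zeros((n_bus, n_bus))
for f, t, b in lines:
    B[f, f] += b; B[t, t] += b; B[f, t] -= b; B[t, f] -= b

keep = [i for i in range(n_bus) if i != slack]
B_red = B[np.ix_(keep, keep)]

def line_flows(injections):
    """Solve B*theta = P with the slack bus removed, then
    flow = b * (theta_from - theta_to) for each line."""
    theta = np.zeros(n_bus)
    theta[keep] = np.linalg.solve(B_red, injections[keep])
    return np.array([b * (theta[f] - theta[t]) for f, t, b in lines])

# Sensitivity of each line flow to injection at bus 2 (withdrawn at the
# slack bus), by finite difference; exact here because DC flow is linear.
P = np.array([0.0, 0.5, -0.5])
h = 1e-6
dP = np.zeros(3); dP[2] = h; dP[0] -= h  # balanced perturbation
isf = (line_flows(P + dP) - line_flows(P)) / h
print(isf)
```

    For this network the factors work out to -8/17, -8/17 and -9/17; the flows into bus 2 over the two incident lines sum to the full injected megawatt, as they must.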

  10. [Advertising].

    ERIC Educational Resources Information Center

    Lombard, Jim

    1979-01-01

    The author presents examples of subliminal or indirect advertising in the mass media and suggests that advertising analysis be part of the elementary curriculum so that children can become sensitized to such nonverbal influences on their behavior. (SJL)

  11. Assessing the sensitivity of bovine tuberculosis surveillance in Canada's cattle population, 2009-2013.

    PubMed

    El Allaki, Farouk; Harrington, Noel; Howden, Krista

    2016-11-01

    The objectives of this study were (1) to estimate the annual sensitivity of Canada's bTB surveillance system and its three components (slaughter surveillance, export testing and disease investigation) using a scenario tree modelling approach, and (2) to identify the key model parameters that influence the estimates of surveillance system sensitivity (SSSe). To achieve these objectives, we designed stochastic scenario tree models for the three surveillance system components included in the analysis. Demographic data, slaughter data, export testing data, and disease investigation data from 2009 to 2013 were extracted for input into the scenario trees. Sensitivity analysis was conducted to identify the parameters most influential on the SSSe estimates. The median annual SSSe estimates generated from the study were very high, ranging from 0.95 (95% probability interval [PI]: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99). Median annual sensitivity estimates for the slaughter surveillance component ranged from 0.95 (95% PI: 0.88-0.98) to 0.97 (95% PI: 0.93-0.99), showing slaughter surveillance to be the major contributor to overall surveillance system sensitivity, with a high probability of detecting M. bovis infection if present at a prevalence of 0.00028% or greater during the study period. The export testing and disease investigation components had extremely low component sensitivity estimates: the maximum median sensitivity estimates were 0.02 (95% PI: 0.014-0.023) and 0.0061 (95% PI: 0.0056-0.0066), respectively. The three most influential input parameters on the model's output (SSSe) were the probability of a granuloma being detected at slaughter inspection, the probability of a granuloma being present in older animals (≥12 months of age), and the probability of a granuloma sample being submitted to the laboratory.
Additional studies are required to reduce the levels of uncertainty and variability associated with these three parameters influencing the surveillance system sensitivity. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
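
    The scenario-tree logic above reduces to a product of branch probabilities per animal and a combination rule across independent components, SSe = 1 - prod(1 - CSe_i). A minimal Monte Carlo sketch under that structure follows; every number in it (herd size, branch probabilities, beta distributions, the fixed CSe values for the other two components) is hypothetical and not taken from the paper:

```python
import random

def slaughter_cse(n_slaughtered, prevalence, p_granuloma, p_detect,
                  p_submit, p_lab):
    """Scenario-tree sensitivity of a slaughter-surveillance component:
    probability that at least one infected animal processed is detected.
    Per-animal detection probability is the product of the branch
    probabilities times the design prevalence."""
    p_animal = prevalence * p_granuloma * p_detect * p_submit * p_lab
    return 1.0 - (1.0 - p_animal) ** n_slaughtered

def system_sensitivity(component_ses):
    """SSe for independent components: 1 - prod(1 - CSe_i)."""
    miss = 1.0
    for cse in component_ses:
        miss *= 1.0 - cse
    return 1.0 - miss

# Illustrative uncertainty propagation: draw the influential branch
# probabilities from beta distributions and collect the SSe distribution.
random.seed(1)
draws = []
for _ in range(5000):
    cse_slaughter = slaughter_cse(
        n_slaughtered=3_000_000,
        prevalence=0.0000028,                    # design prevalence 0.00028%
        p_granuloma=random.betavariate(20, 10),  # granuloma present
        p_detect=random.betavariate(8, 12),      # detected at inspection
        p_submit=random.betavariate(15, 5),      # sample submitted to lab
        p_lab=0.95)
    draws.append(system_sensitivity([cse_slaughter, 0.02, 0.006]))
draws.sort()
print("median SSe:", draws[len(draws) // 2])
```

    Reporting the median and the 2.5th/97.5th percentiles of `draws` yields point estimates with probability intervals of the kind quoted in the abstract.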

  12. Survey of methods for calculating sensitivity of general eigenproblems

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Haftka, Raphael T.

    1987-01-01

    A survey of methods for sensitivity analysis of the algebraic eigenvalue problem for non-Hermitian matrices is presented. In addition, a modification of one method based on a better normalizing condition is proposed. Methods are classified as Direct or Adjoint and are evaluated for efficiency. Operation counts are presented in terms of matrix size, number of design variables and number of eigenvalues and eigenvectors of interest. The effect of the sparsity of the matrix and its derivatives is also considered, and typical solution times are given. General guidelines are established for the selection of the most efficient method.
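
    For distinct eigenvalues of a non-Hermitian matrix, the direct sensitivity formula underlying such surveys is dlambda/dp = y^H (dA/dp) x / (y^H x), with x and y the right and left eigenvectors. A sketch (the matrices are invented; eigenvalue matching by sorting assumes distinct, here real, eigenvalues):

```python
import numpy as np

def eigenvalue_derivative(A, dA, index=0):
    """Derivative of the index-th (sorted) eigenvalue of a generally
    non-Hermitian matrix A with respect to a parameter p, given
    dA = dA/dp:  dlambda/dp = y^H (dA/dp) x / (y^H x)."""
    w, X = np.linalg.eig(A)
    wl, Y = np.linalg.eig(A.conj().T)   # columns of Y are left eigenvectors
    x = X[:, np.argsort(w)[index]]
    y = Y[:, np.argsort(wl.conj())[index]]
    return (y.conj() @ dA @ x) / (y.conj() @ x)

# Cross-check against a finite difference on A(p) = A0 + p * dA.
A0 = np.array([[2.0, 1.0], [0.5, -1.0]])
dA = np.array([[0.0, 1.0], [1.0, 0.0]])
h = 1e-6
analytic = eigenvalue_derivative(A0, dA, index=1)
fd = (np.sort(np.linalg.eigvals(A0 + h * dA))[1]
      - np.sort(np.linalg.eigvals(A0 - h * dA))[1]) / (2 * h)
print(analytic, fd)
```

    Note the normalization issue the survey discusses: the formula above is normalization-independent for eigenvalues, but eigenvector derivatives require an explicit normalizing condition.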

  13. Analysis strategies for high-resolution UHF-fMRI data.

    PubMed

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Permethylated-β-Cyclodextrin Capped CdTe Quantum Dot and its Sensitive Fluorescence Analysis of Malachite Green.

    PubMed

    Cao, Yujuan; Wei, Jiongling; Wu, Wei; Wang, Song; Hu, Xiaogang; Yu, Ying

    2015-09-01

    In the present work, CdTe quantum dots were covalently conjugated with permethylated-β-cyclodextrin (OMe-β-CD) using 1-ethyl-3-(3-dimethylaminopropyl) carbodiimide hydrochloride as the cross-linking reagent. The resulting functional quantum dots (OMe-β-CD/QDs) showed high luminescence, water solubility and photostability, as well as good inclusion ability toward malachite green. A sensitive fluorescence method was developed for the analysis of malachite green in different samples. The method was linear over 2.0 × 10^-7 to 1.0 × 10^-5 mol/L, with a limit of detection of 1.7 × 10^-8 mol/L. The recoveries for three environmental water samples were 92.0-108.2% with relative standard deviations (RSD) of 0.24-1.87%, while the recovery for the fish sample was 94.3% with an RSD of 1.04%. The results showed that the present method is sensitive and convenient for determining malachite green in complex samples. Graphical Abstract: The analytical mechanism of OMe-β-CD/QDs and its linear response to MG.

  15. Parametric study and global sensitivity analysis for co-pyrolysis of rape straw and waste tire via variance-based decomposition.

    PubMed

    Xu, Li; Jiang, Yong; Qiu, Rong

    2018-01-01

    In the present study, the co-pyrolysis behavior of rape straw, waste tire and their various blends was investigated. TG-FTIR indicated that co-pyrolysis was characterized by a four-step reaction, and H2O, CH, OH, CO2 and CO groups were the main products evolved during the process. Additionally, using BBD-based experimental results, best-fit multiple regression models with high R²-pred values (94.10% for mass loss and 95.37% for reaction heat), which correlate the explanatory variables with the responses, were presented. The derived models were analyzed by ANOVA at the 95% confidence level; the F-test, lack-of-fit test and normal probability plots of the residuals implied that the models described the experimental data well. Finally, the model uncertainties as well as the interactive effects of these parameters were studied, and the total-, first- and second-order sensitivity indices of the operating factors were derived using Sobol' variance decomposition. To the authors' knowledge, this is the first time global parameter sensitivity analysis has been performed in the (co-)pyrolysis literature. Copyright © 2017 Elsevier Ltd. All rights reserved.
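
    A first-order Sobol index S_i = V(E[Y|X_i]) / V(Y) can be estimated with the standard pick-freeze Monte Carlo scheme. The sketch below applies it to a toy additive model, not the paper's regression surfaces; for a linear model the analytic indices are proportional to the squared coefficients (here 16/21, 4/21, 1/21):

```python
import numpy as np

def first_order_sobol(f, n_vars, n_samples=100_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices
    S_i = V(E[Y|X_i]) / V(Y) using the Saltelli (2010) pick-freeze
    estimator, with inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_vars))
    B = rng.random((n_samples, n_vars))
    fA, fB = f(A), f(B)
    var_y = np.var(np.concatenate([fA, fB]))
    indices = []
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze column i from the B sample
        indices.append(np.mean(fB * (f(ABi) - fA)) / var_y)
    return indices

# Hypothetical model: Y = 4*X1 + 2*X2 + X3.
def model(X):
    return 4 * X[:, 0] + 2 * X[:, 1] + X[:, 2]

print([round(s, 2) for s in first_order_sobol(model, 3)])
```

    Total-order indices follow from the companion Jansen estimator on the same sample matrices, and second-order terms from differences of closed second-order indices.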

  16. Some Sensitivity Studies of Chemical Transport Simulated in Models of the Soil-Plant-Litter System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Begovich, C.L.

    2002-10-28

    Fifteen parameters in a set of five coupled models describing carbon, water, and chemical dynamics in the soil-plant-litter system were varied in a sensitivity analysis of model response. Results are presented for chemical distribution in the components of soil, plants, and litter along with selected responses of biomass, internal chemical transport (xylem and phloem pathways), and chemical uptake. Response and sensitivity coefficients are presented for up to 102 model outputs in an appendix. Two soil properties (chemical distribution coefficient and chemical solubility) and three plant properties (leaf chemical permeability, cuticle thickness, and root chemical conductivity) had the greatest influence on chemical transport in the soil-plant-litter system under the conditions examined. Pollutant gas uptake (SO2) increased with change in plant properties that increased plant growth. Heavy metal dynamics in litter responded to plant properties (phloem resistance, respiration characteristics) which induced changes in the chemical cycling to the litter system. Some of the SO2 and heavy metal responses were not expected but became apparent through the modeling analysis.

  17. High order statistical signatures from source-driven measurements of subcritical fissile systems

    NASA Astrophysics Data System (ADS)

    Mattingly, John Kelly

    1998-11-01

    This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.

  18. Civil and mechanical engineering applications of sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komkov, V.

    1985-07-01

    In this largely tutorial presentation, the historical development of optimization theories as applied to mechanical and civil engineering design is outlined, and the development of modern sensitivity techniques during the last 20 years is traced. Some of the difficulties, and the progress made in overcoming them, are described. Several recently developed theoretical methods are stressed to indicate their importance to computer-aided design technology.

  19. The NEXUS criteria are insufficient to exclude cervical spine fractures in older blunt trauma patients.

    PubMed

    Paykin, Gabriel; O'Reilly, Gerard; Ackland, Helen M; Mitra, Biswadev

    2017-05-01

    The National Emergency X-Radiography Utilization Study (NEXUS) criteria are used to assess the need for imaging to evaluate cervical spine integrity after injury. The aim of this study was to assess the sensitivity of the NEXUS criteria in older blunt trauma patients. Patients aged 65 years or older presenting between 1st July 2010 and 30th June 2014 and diagnosed with cervical spine fractures were identified from the institutional trauma registry. Clinical examination findings were extracted from electronic medical records. Data on the NEXUS criteria were collected and the sensitivity of the rule to exclude a fracture was calculated. Over the study period, 231,018 patients presented to The Alfred Emergency & Trauma Centre, of whom 14,340 met the institutional trauma registry inclusion criteria and 4035 were aged ≥65 years. Among these, 468 patients were diagnosed with cervical spine fractures, of whom 21 were determined to be NEXUS negative. The NEXUS criteria performed with a sensitivity of 94.8% [95% CI: 92.1%-96.7%] on complete case analysis in older blunt trauma patients. One-way sensitivity analysis resulted in a maximum sensitivity limit of 95.5% [95% CI: 93.2%-97.2%]. Compared with the general adult blunt trauma population, the NEXUS criteria are less sensitive in excluding cervical spine fractures in older blunt trauma patients. We therefore suggest that liberal imaging be considered for older patients regardless of history or examination findings, and that the addition of an age criterion to the NEXUS criteria be investigated in future studies. Copyright © 2017 Elsevier Ltd. All rights reserved.
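
    The headline number here is ordinary diagnostic sensitivity, TP / (TP + FN), over the fracture cohort: 447 of 468 fracture patients were NEXUS positive. A sketch with a Wilson score interval (one common choice of binomial interval; the paper's interval method may differ slightly):

```python
import math

def sensitivity_with_wilson_ci(true_pos, false_neg, z=1.96):
    """Sensitivity = TP / (TP + FN), with a Wilson score 95% interval."""
    n = true_pos + false_neg
    p = true_pos / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, (centre - half, centre + half)

# 468 fracture patients, 21 of whom were NEXUS negative (missed):
sens, (lo, hi) = sensitivity_with_wilson_ci(true_pos=468 - 21, false_neg=21)
print(f"sensitivity = {sens:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
```

    This reproduces the one-way maximum of 95.5% (447/468); the reported complete-case figure of 94.8% reflects records with missing NEXUS data being handled separately.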

  20. Dynamic Characteristics and Stability Analysis of Space Shuttle Main Engine Oxygen Pump

    NASA Technical Reports Server (NTRS)

    Gunter, Edgar J.; Branagan, Lyle

    1991-01-01

    The dynamic characteristics of the Space Shuttle high pressure oxygen pump are presented. Experimental data is presented to show the vibration spectrum and response under actual engine operation and also in spin pit testing for balancing. The oxygen pump appears to be operating near a second critical speed and is sensitive to self excited aerodynamic cross coupling forces in the turbine and pump. An analysis is presented to show the improvement in pump stability by the application of turbulent flow seals, preburner seals, and pump shaft cross sectional modifications.

  1. Chinese insurance agents in "bad barrels": a multilevel analysis of the relationship between ethical leadership, ethical climate and business ethical sensitivity.

    PubMed

    Zhang, Na; Zhang, Jian

    2016-01-01

    The moral hazards and poor public image of the insurance industry, arising from insurance agents' unethical behavior, both disrupt the normal operation of an insurance company and decrease applicants' confidence in it. Indeed, such scandals may demonstrate that the organizations were "bad barrels" in which insurance agents' unethical decisions were supported or encouraged by the organization's leadership or climate. The present study brings two organization-level factors (ethical leadership and ethical climate) together and explores the role of ethical climate in the relationship between the ethical leadership and business ethical sensitivity of Chinese insurance agents. Through a multilevel analysis of 502 insurance agents from 56 organizations, it is found that organizational ethical leadership is positively related to organizational ethical climate; organizational ethical climate is positively related to business ethical sensitivity; and organizational ethical climate fully mediates the relationship between organizational ethical leadership and business ethical sensitivity. The integrated model of ethical leadership, ethical climate and business ethical sensitivity makes several contributions to ethics theory, research and management.

  2. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
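
    The information criteria named above are cheap to compute once each model has been fitted. For least-squares fits they reduce to functions of the residual sum of squares (RSS), sample size n and parameter count k; the sketch below uses hypothetical RSS values for two candidate models, not the Maggia Valley results:

```python
import math

def aic_c(n, k, rss):
    """Corrected Akaike information criterion for a least-squares fit:
    AIC = n*ln(RSS/n) + 2k, plus the small-sample correction term
    2k(k+1)/(n-k-1).  Lower is better."""
    aic = n * math.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def bic(n, k, rss):
    """Bayesian information criterion for a least-squares fit."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical comparison: a 3-parameter and a 5-parameter model fitted
# to the same 40 observations.
for k, rss in [(3, 12.4), (5, 11.9)]:
    print(f"k={k}: AICc={aic_c(40, k, rss):.2f}, BIC={bic(40, k, rss):.2f}")
```

    With these numbers both criteria prefer the parsimonious model: the small drop in RSS does not pay for the two extra parameters, which is exactly the trade-off the criteria formalize.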

  3. Sensitivity analysis of 1-D dynamical model for basin analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, S.

    1987-01-01

    Geological processes related to petroleum generation, migration and accumulation are very complicated in terms of the time and variables involved, and it is very difficult to simulate these processes by laboratory experiments. For this reason, many mathematical/computer models have been developed to simulate these geological processes based on geological, geophysical and geochemical principles. The sensitivity analysis in this study is a comprehensive examination of how geological, geophysical and geochemical parameters influence the reconstructions of geohistory, thermal history and hydrocarbon generation history using the 1-D fluid flow/compaction model developed by the Basin Modeling Group at the University of South Carolina. This study shows the effects of commonly used parameters such as depth, age, lithology, porosity, permeability, unconformity (eroded thickness and erosion time), temperature at the sediment surface, bottom hole temperature, present-day heat flow, thermal gradient, thermal conductivity, and kerogen type and content on the evolution of formation thickness, porosity, permeability and pressure with time and depth; heat flow with time; temperature with time and depth; vitrinite reflectance (Ro) and TTI with time and depth; the oil window in terms of time and depth; and the amount of hydrocarbons generated with time and depth. Lithology, present-day heat flow and thermal conductivity are the most sensitive parameters in the reconstruction of temperature history.

  4. Generalized Linear Covariance Analysis

    NASA Technical Reports Server (NTRS)

    Carpenter, James R.; Markley, F. Landis

    2014-01-01

    This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.

  5. Advances in on-chip photodetection for applications in miniaturized genetic analysis systems

    NASA Astrophysics Data System (ADS)

    Namasivayam, Vijay; Lin, Rongsheng; Johnson, Brian; Brahmasandra, Sundaresh; Razzacki, Zafar; Burke, David T.; Burns, Mark A.

    2004-01-01

    Microfabrication techniques have become increasingly popular in the development of next generation DNA analysis devices. Improved on-chip fluorescence detection systems may have applications in developing portable hand-held instruments for point-of-care diagnostics. Miniaturization of fluorescence detection involves construction of ultra-sensitive photodetectors that can be integrated onto a fluidic platform combined with the appropriate optical emission filters. We have previously demonstrated the integration of PIN photodiodes onto a microfabricated electrophoresis channel for separation and detection of DNA fragments. In this work, we present an improved detector structure that uses a PINN+ photodiode with an on-chip interference filter and a robust liquid barrier layer. This new design yields high sensitivity (detection limit of 0.9 ng/µl of DNA), low noise (S/N ~ 100/1) and enhanced quantum efficiencies (>80%) over the entire visible spectrum. Applications of these photodiodes in various areas of DNA analysis such as microreactions (PCR), separations (electrophoresis) and microfluidics (drop sensing) are presented.

  6. Application of design sensitivity analysis for greater improvement on machine structural dynamics

    NASA Technical Reports Server (NTRS)

    Yoshimura, Masataka

    1987-01-01

    Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the effectiveness of the proposed methods.

  7. Sensitivity analysis and approximation methods for general eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Murthy, D. V.; Haftka, R. T.

    1986-01-01

    Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.

  8. HPAEC-PAD for oligosaccharide analysis-novel insights into analyte sensitivity and response stability.

    PubMed

    Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra

    2017-12-01

    The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as in beverages and infant milk products, demands the availability of tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has been developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study giving an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides, focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of stability, due to recession of the gold electrode. By an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop leading to an improved data normalization.
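
    A one-phase decay of detector response has the familiar form y(t) = plateau + (y0 - plateau)·exp(-k·t), and drift correction rescales each measured peak back to its time-zero equivalent. A sketch of that correction; all parameter values are hypothetical, not the paper's fitted analyte-specific constants:

```python
import math

def one_phase_decay(t, y0, plateau, k):
    """Relative PAD response over time under a one-phase exponential
    decay: y(t) = plateau + (y0 - plateau) * exp(-k * t)."""
    return plateau + (y0 - plateau) * math.exp(-k * t)

def drift_corrected_response(raw, t, y0, plateau, k):
    """Rescale a measured peak area to its time-zero equivalent using
    the analyte-specific decay model (all parameters hypothetical)."""
    return raw * y0 / one_phase_decay(t, y0, plateau, k)

# Example: an analyte whose relative response decays from 1.0 toward a
# 0.6 plateau with rate constant 0.02 per run; correct a peak measured
# at run 30.
print(drift_corrected_response(raw=850.0, t=30, y0=1.0, plateau=0.6, k=0.02))
```

    Because the decay constants are analyte-specific, each analyte gets its own (y0, plateau, k) fit, which is what makes this normalization outperform a single shared internal-standard correction.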

  9. Mild extraction methods using aqueous glucose solution for the analysis of natural dyes in textile artefacts dyed with Dyer's madder (Rubia tinctorum L.).

    PubMed

    Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S

    2017-03-03

    Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present in only low concentrations due to the use of acid in typical textile back extraction processes. The anthraquinone aglycons alizarin and purpurin are usually identified in analyses following harsh back extraction methods, such as those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Softer extraction techniques allow the dye components present in madder to be extracted without degradation, potentially providing more information about the original dye profile, which varies significantly between madder varieties, species, and dyeing techniques. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. Efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated, and they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for extracting madder-dyed wool: it efficiently extracts lucidin primeveroside and ruberythric acid without causing hydrolysis, and also extracts aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonds with the glycosides present in madder and displace them from the fibre.
This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in analysis of madder dyed textiles as it can provide further information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable for museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. CAQI Common Air Quality Index--update with PM(2.5) and sensitivity analysis.

    PubMed

    van den Elshout, Sef; Léger, Karine; Heich, Hermann

    2014-08-01

    The CAQI, or Common Air Quality Index, was proposed to facilitate the comparison of air quality in European cities in real time. There are many air quality indices in use around the world. All differ somewhat in concept and presentation, and comparing the air quality presentations of cities on the internet was virtually impossible. The CAQI, with the accompanying website www.airqualitynow.eu and app, was proposed to overcome this problem in Europe. This paper describes the logic of constructing an index, in particular the CAQI and its update with a grid for PM2.5. To assure a smooth transition to the new calculation scheme, we studied the behaviour of the index before and after the changes, using 2006 Airbase data from 31 urban background and 27 street stations across Europe that were monitoring PM2.5 in 2006. The CAQI characterises a city by a roadside and an urban background situation, and it insists on a minimum number of pollutants being included in the calculation. Both were deemed necessary to improve the basis for comparing one city to another. A sensitivity analysis demonstrates the comparative behaviour of the street and urban background stations and presents the sensitivity of the CAQI outcome to the pollutants included in its calculation. © 2013.
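
The sub-index construction behind an index of this kind is piecewise-linear interpolation on a pollutant-specific breakpoint grid, with the overall index taken as the worst sub-index. A sketch (the grids below are illustrative placeholders, NOT the official CAQI tables, which also distinguish roadside from urban background stations):

```python
import numpy as np

# Illustrative hourly breakpoint grids (ug/m3 -> index points).
GRIDS = {
    "NO2":   ([0, 50, 100, 200, 400], [0, 25, 50, 75, 100]),
    "PM10":  ([0, 25,  50,  90, 180], [0, 25, 50, 75, 100]),
    "PM2.5": ([0, 15,  30,  55, 110], [0, 25, 50, 75, 100]),
    "O3":    ([0, 60, 120, 180, 240], [0, 25, 50, 75, 100]),
}

def sub_index(pollutant, conc):
    """Map one pollutant concentration onto the index scale by
    piecewise-linear interpolation between breakpoints."""
    c, idx = GRIDS[pollutant]
    return float(np.interp(conc, c, idx))

def overall_index(concentrations):
    """Overall index = worst (maximum) sub-index over the pollutants
    reported; a minimum-pollutant rule would be enforced upstream."""
    return max(sub_index(p, v) for p, v in concentrations.items())
```

A sensitivity analysis like the paper's re-computes `overall_index` while including or excluding pollutants and compares the outcomes.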

  11. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    PubMed Central

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124

  12. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for the estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of each laboratory's sensitivity and specificity across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. In total, 88 laboratories participated in quality control at up to 13 time points, typically using 37 to 54 histology samples. In meta-analysis across all time points, no laboratory had sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference from the reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences from the reference standard identified by Generalized Estimating Equation modeling also showed reduced performance in the meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach it has sufficient precision to measure performance at each time point and allow laboratories with significantly lower performance to be targeted for advice.

  13. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
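
A stripped-down version of such a sensitivity analysis can be illustrated on the classic Ross-Macdonald R0 expression for a mosquito-borne pathogen (a deliberately simple stand-in for the paper's detailed simulation models), computing normalized one-at-a-time sensitivities (elasticities) by finite differences:

```python
import numpy as np

def r0(prm):
    """Ross-Macdonald basic reproduction number:
        R0 = m a^2 b c p^n / (r * (-ln p))
    m: mosquitoes per host, a: biting rate, b, c: transmission
    probabilities, p: daily mosquito survival, n: extrinsic incubation
    period (days), r: human recovery rate."""
    m, a, b, c, p, n, r = (prm[k] for k in "mabcpnr")
    return m * a**2 * b * c * p**n / (r * (-np.log(p)))

def elasticities(prm, eps=1e-6):
    """Elasticity of R0 to each parameter: (dR0/dk) * (k / R0),
    estimated by a forward finite difference."""
    base = r0(prm)
    return {k: (r0({**prm, k: v * (1 + eps)}) - base) / (base * eps)
            for k, v in prm.items()}
```

Analytically R0 is proportional to m and to a², so the elasticities for m and a are exactly 1 and 2, a handy check on the finite-difference estimate.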

  14. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time-consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
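
The surrogate loop the paper discusses — sample the expensive model at designed points, fit a cheap approximation, then run sensitivity or optimization studies on the approximation — can be sketched in one dimension (the model function and sample design below are illustrative, not the rocket-injector study):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def expensive_model(x):
    """Stand-in for a costly high-fidelity simulation."""
    return (x - 1.5) ** 2 + 0.1 * np.sin(5.0 * x)

# 1. Design of experiments: a handful of expensive evaluations.
xs = np.linspace(0.0, 3.0, 7)
ys = expensive_model(xs)

# 2. Surrogate construction: here a cubic polynomial least-squares fit.
surrogate = np.poly1d(np.polyfit(xs, ys, 3))

# 3. Optimize the cheap surrogate instead of the expensive model.
x_opt = minimize_scalar(surrogate, bounds=(0.0, 3.0), method="bounded").x
```

Real SBAO workflows would iterate, adding new expensive samples near the surrogate optimum and refitting until the surrogate and model agree.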

  15. Analysis of glycosaminoglycan-derived disaccharides by capillary electrophoresis using laser-induced fluorescence detection

    PubMed Central

    Chang, Yuqing; Yang, Bo; Zhao, Xue; Linhardt, Robert J.

    2012-01-01

    A quantitative and highly sensitive method for the analysis of glycosaminoglycan (GAG)-derived disaccharides is presented that relies on capillary electrophoresis (CE) with laser-induced fluorescence (LIF) detection. This method enables complete separation of seventeen GAG-derived disaccharides in a single run. Unsaturated disaccharides were derivatized with 2-aminoacridone (AMAC) to improve sensitivity. The limit of detection was at the attomole level, about 100-fold more sensitive than traditional CE-ultraviolet detection. A CE separation timetable was developed to achieve complete resolution and shorten analysis time. The RSDs of migration time and peak area at both low and high concentrations of unsaturated disaccharides were all less than 2.7% and 3.2%, respectively, demonstrating that the method is reproducible. The analysis was successfully applied to cultured Chinese hamster ovary cell samples for determination of GAG disaccharides. The current method simplifies GAG extraction steps and reduces the inaccuracy in calculating ratios of heparin/heparan sulfate to chondroitin sulfate/dermatan sulfate that results from separate analyses of a single sample. PMID:22609076

  16. [Meta-analysis of diagnostic capability of frequency-doubling technology (FDT) for primary glaucoma].

    PubMed

    Liu, Ting; He, Xiang-ge

    2006-05-01

    To evaluate the overall diagnostic capability of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS, Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia, and a manual search in related textbooks, journals, congress articles and their references, were performed to identify relevant English- and Chinese-language articles. Eligibility criteria were established according to the validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. The quality of the included articles was assessed and relevant data were extracted for analysis. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested and used to select the appropriate effects model for calculating pooled weighted sensitivity and specificity. A summary receiver operating characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT, with 95% confidence intervals (95% CI) after correction for standard error, were 0.86 (0.80 - 0.90) and 0.87 (0.81 - 0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. The included articles are of good quality, and FDT can be a highly efficient diagnostic test for primary glaucoma based on this meta-analysis. However, a high-quality prospective study is still required for further analysis.
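
Pooled weighted sensitivity or specificity of this kind is commonly computed by inverse-variance weighting on the logit scale; a generic sketch (not necessarily the exact algorithm in Meta Test 0.6):

```python
import numpy as np

def pooled_logit(events, totals):
    """Inverse-variance pooled proportion on the logit scale, with a 95% CI.
    For sensitivity pass (TP, TP+FN) per study; for specificity (TN, TN+FP).
    A 0.5 continuity correction guards against zero cells."""
    e = np.asarray(events, float) + 0.5
    n = np.asarray(totals, float) + 1.0
    logit = np.log(e / (n - e))
    var = 1.0 / e + 1.0 / (n - e)        # approximate variance of each logit
    w = 1.0 / var
    pooled = np.sum(w * logit) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    expit = lambda z: 1.0 / (1.0 + np.exp(-z))
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))
```

This is the fixed-effects form; when heterogeneity is detected, a random-effects variant adds a between-study variance term to each weight.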

  17. Photobiomodulation in the Prevention of Tooth Sensitivity Caused by In-Office Dental Bleaching. A Randomized Placebo Preliminary Study.

    PubMed

    Calheiros, Andrea Paiva Corsetti; Moreira, Maria Stella; Gonçalves, Flávia; Aranha, Ana Cecília Correa; Cunha, Sandra Ribeiro; Steiner-Oliveira, Carolina; Eduardo, Carlos de Paula; Ramalho, Karen Müller

    2017-08-01

    To analyze the effect of photobiomodulation in the prevention of tooth sensitivity after in-office dental bleaching. Tooth sensitivity is a common clinical consequence of dental bleaching, and therapies for the prevention of sensitivity have been investigated in the literature. This study was developed as a randomized, placebo-controlled, blind clinical trial. Fifty patients were selected (n = 10) and randomly divided into five groups: (1) control, (2) placebo, (3) laser before bleaching, (4) laser after bleaching, and (5) laser before and after bleaching. Irradiation was performed perpendicularly, in contact, on each tooth for 10 sec per point at two points. The first point was positioned in the middle of the tooth crown and the second in the periapical region. Photobiomodulation was applied using the following parameters: 780 nm, 40 mW, 10 J/cm², 0.4 J per point. Pain was analyzed before, immediately after, and for seven subsequent days after bleaching. Patients were instructed to report pain using the scale: 0 = no tooth sensitivity, 1 = gentle sensitivity, 2 = moderate sensitivity, 3 = severe sensitivity. There were no statistical differences between groups at any time (p > 0.05). More studies, with other parameters and different methods of tooth sensitivity analysis, should be performed to complement the results found. Within the limitations of the present study, the photobiomodulation laser parameters tested were not efficient in preventing tooth sensitivity after in-office bleaching.

  18. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. The paper aims to present a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered for the sensitivity analysis test. The first is related to the input data, while the second concerns the decision maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method, and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
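
A compact TOPSIS implementation shows the distance-to-ideal ranking whose stability the paper tests (a generic formulation; the relative-distance and trade-off methods differ in which distances enter the score):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns).
    benefit[j] is True if more is better on criterion j."""
    M = np.asarray(matrix, float)
    w = np.asarray(weights, float)
    V = w * M / np.linalg.norm(M, axis=0)          # normalize, then weight
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)      # distance to ideal
    d_neg = np.linalg.norm(V - anti, axis=1)       # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)
    return closeness, np.argsort(-closeness)       # best alternative first
```

A sensitivity test of the kind described would re-run `topsis` with perturbed `matrix` entries or `weights` and record how often the ranking changes.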

  19. Parametric sensitivity analysis of leachate transport simulations at landfills.

    PubMed

    Bou-Zeid, E; El-Fadel, M

    2004-01-01

    This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.

  20. Experimental elaboration and analysis of dye-sensitized TiO2 solar cells (DSSC) dyed by natural dyes and conductive polymers

    NASA Astrophysics Data System (ADS)

    KałuŻyński, P.; Maciak, E.; Herzog, T.; Wójcik, M.

    2016-09-01

    In this paper we propose a low-cost, easily fabricated, fully working dye-sensitized solar cell module made using different sensitizing dyes (various anthocyanins and P3HT) to broaden the absorption spectrum, transparent conducting substrates (vacuum-sputtered chromium and gold), a nanometer-sized TiO2 film, an electrolyte based on iodide and methyl viologen dichloride, and a counter electrode (vacuum-sputtered platinum or carbon). Moreover, several technologies and manufacturing optimization processes were elaborated to increase energy efficiency and are presented in this paper.

  1. Monolayer Graphene Bolometer as a Sensitive Far-IR Detector

    NASA Technical Reports Server (NTRS)

    Karasik, Boris S.; McKitterick, Christopher B.; Prober, Daniel E.

    2014-01-01

    In this paper we give a detailed analysis of the expected sensitivity and operating conditions in the power detection mode of a hot-electron bolometer (HEB) made from a few μm² of monolayer graphene (MLG) flake, which can be embedded into either a planar antenna or a waveguide circuit via NbN (or NbTiN) superconducting contacts with a critical temperature of approximately 14 K. Recent data on the strength of the electron-phonon coupling are used in the present analysis, and the contribution of the readout noise to the Noise Equivalent Power (NEP) is explicitly computed. The readout scheme utilizes Johnson Noise Thermometry (JNT), allowing for Frequency-Domain Multiplexing (FDM) using narrowband filter coupling of the HEBs. In general, the filter bandwidth and the summing amplifier noise have a significant effect on the overall system sensitivity.
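
The thermal-fluctuation (phonon) contribution that sets the floor for such a bolometer's NEP follows the standard expression NEP = sqrt(4 k_B T² G); a sketch with illustrative numbers, not the paper's device parameters (readout noise, e.g. from the JNT amplifier, would add in quadrature):

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nep_phonon(T, G):
    """Thermal-fluctuation-limited NEP (W/Hz^0.5) of a bolometer with
    electron temperature T (K) and thermal conductance G (W/K) to the bath."""
    return np.sqrt(4.0 * K_B * T**2 * G)
```

The quadratic dependence on T and square-root dependence on G explain why graphene HEBs, with very weak electron-phonon coupling (small G) at sub-kelvin temperatures, promise extremely low NEP.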

  2. SPR based hybrid electro-optic biosensor for β-lactam antibiotics determination in water

    NASA Astrophysics Data System (ADS)

    Galatus, Ramona; Feier, Bogdan; Cristea, Cecilia; Cennamo, Nunzio; Zeni, Luigi

    2017-09-01

    The present work aims to provide a hybrid platform capable of complementary and sensitive detection of β-lactam antibiotics, ampicillin in particular. The use of an aptamer specific to ampicillin assures good selectivity and sensitivity for the detection of ampicillin from different matrices. This new approach is intended for a portable remote-sensing platform based on a low-cost, small-size, low-power solution. The simple experimental hybrid platform integrates the results from the D-shaped surface plasmon resonance plastic optical fiber (SPR-POF) sensor and from the electrochemical (bio)sensor for the analysis of ampicillin, delivering sensitive and reliable results. The SPR-POF, already used in many previous applications, is embedded in a new experimental setup with fluorescent fiber emitters for broadband wavelength analysis, low power consumption, and low heating of the sensing platform.

  3. Development of a noise annoyance sensitivity scale

    NASA Technical Reports Server (NTRS)

    Bregman, H. L.; Pearson, R. G.

    1972-01-01

    Examining the problem of noise pollution from the psychological rather than the engineering view, a test of human sensitivity to noise was developed against the criterion of noise annoyance. Test development evolved from a previous study in which biographical, attitudinal, and personality data were collected on a sample of 166 subjects drawn from the adult community of Raleigh. Analysis revealed that only a small subset of the data collected was predictive of noise annoyance. Item analysis yielded 74 predictive items that composed the preliminary noise sensitivity test. This was administered to a sample of 80 adults who later rated the annoyance value of six sounds (equated in terms of peak sound pressure level) presented in a simulated home living-room environment. A predictive model involving 20 test items was developed using multiple regression techniques, and an item weighting scheme was evaluated.

  4. A closure test for time-specific capture-recapture data

    USGS Publications Warehouse

    Stanley, T.R.; Burnham, K.P.

    1999-01-01

    The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M(t) against the open-population Jolly-Seber model as a specific alternative. This test is chi-square, and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration and least sensitive to temporary emigration, and is of intermediate sensitivity to permanent or temporary immigration. This test is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.

  5. Quadrant photodetector sensitivity.

    PubMed

    Manojlović, Lazo M

    2011-07-10

    A quantitative theoretical analysis of quadrant photodetector (QPD) sensitivity in position measurement is presented. A Gaussian light spot irradiance distribution on the QPD surface was assumed, which meets most real-life applications of this sensor. As a result of the mathematical treatment of the problem, we obtained, in closed form, the sensitivity function versus the ratio of the light spot 1/e radius to the QPD radius; the result is valid for the full range of ratios. To check the influence of the finite light spot radius on the interaxis cross talk and linearity, we also performed a mathematical analysis to quantitatively measure these types of errors. An optimal range of the ratio of light spot radius to QPD radius has been found that simultaneously achieves low interaxis cross talk and high linearity of the sensor. © 2011 Optical Society of America
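
In the idealized limit of a detector much larger than the spot, the x-axis QPD signal under a Gaussian spot reduces to an error function, with on-axis sensitivity 2/(sqrt(pi)·w). A sketch of that limiting case (it neglects the finite QPD radius whose effect the paper analyzes):

```python
import numpy as np
from scipy.special import erf

def x_signal(x0, w):
    """Normalized left-right signal S_x = (P_right - P_left) / P_total for a
    Gaussian spot of 1/e radius w centered at x0, on an infinite detector:
        S_x = erf(x0 / w)."""
    return erf(x0 / w)

def sensitivity_at_center(w, dx=1e-9):
    """Numerical slope dS/dx at x0 = 0; analytically 2 / (sqrt(pi) * w)."""
    return (x_signal(dx, w) - x_signal(-dx, w)) / (2.0 * dx)
```

The finite-QPD analysis in the paper modifies this curve and creates the cross-talk and linearity trade-off that motivates the optimal spot-to-detector radius ratio.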

  6. Integrated on-chip derivatization and electrophoresis for the rapid analysis of biogenic amines.

    PubMed

    Beard, Nigel P; Edel, Joshua B; deMello, Andrew J

    2004-07-01

    We demonstrate the monolithic integration of a chemical reactor with a capillary electrophoresis device for the rapid and sensitive analysis of biogenic amines. Fluorescein isothiocyanate (FITC) is widely employed for the analysis of amino-group-containing analytes; however, its slow reaction kinetics hinders the use of this dye for on-chip labeling applications. Other alternatives are available, such as o-phthalaldehyde (OPA); however, its inferior photophysical properties and UV λmax present difficulties with common excitation sources, leading to a disparity in sensitivity. Consequently, we present for the first time the use of dichlorotriazine fluorescein (DTAF) as a superior in situ derivatizing agent for biogenic amines in microfluidic devices. The developed microdevice employs both hydrodynamic and electroosmotic flow, facilitating the creation of a polymeric microchip that performs both precolumn derivatization and electrophoretic analysis. The favorable photophysical properties of DTAF and its fast reaction kinetics provide detection limits down to 1 nM and total analysis times (including on-chip mixing and reaction) of <60 s. The detection limits are two orders of magnitude lower than current limits obtained with both FITC and OPA. The optimized microdevice is also employed to probe biogenic amines in real samples.

  7. Potential of far-ultraviolet absorption spectroscopy as a highly sensitive qualitative and quantitative analysis method for polymer films, part I: classification of commercial food wrap films.

    PubMed

    Sato, Harumi; Higashi, Noboru; Ikehata, Akifumi; Koide, Noriko; Ozaki, Yukihiro

    2007-07-01

    The aim of the present study is to propose a totally new technique for the utilization of far-ultraviolet (UV) spectroscopy in polymer thin film analysis. Far-UV spectra in the 120-300 nm region have been measured in situ for six kinds of commercial polymer wrap films by use of a novel type of far-UV spectrometer that does not need vacuum evaporation. These films can be straightforwardly classified into three groups, polyethylene (PE) films, polyvinyl chloride (PVC) films, and polyvinylidene chloride (PVDC) films, by using the raw spectra. The differences in the wavelength of the absorption band due to the σ-σ* transition of the C-C bond have been used for the classification of the six kinds of films. Using this method, it was easy to distinguish the three kinds of PE films and to separate the two kinds of PVDC films. Compared with other spectroscopic methods, the advantages of this technique include nondestructive analysis, easy spectral measurement, high sensitivity, and simple spectral analysis. The present study has demonstrated that far-UV spectroscopy is a very promising technique for polymer film analysis.

  8. Three Minute Method for Amino Acid Analysis by UHPLC and high resolution quadrupole orbitrap mass spectrometry

    PubMed Central

    Nemkov, Travis; D'Alessandro, Angelo; Hansen, Kirk C.

    2015-01-01

    Amino acid analysis is a powerful bioanalytical technique for many biomedical research endeavors, including cancer, emergency medicine, nutrition, and neuroscience research. In the present study, we describe a three-minute analytical method for underivatized amino acid analysis that employs ultra-high performance liquid chromatography and high-resolution quadrupole Orbitrap mass spectrometry. This method has demonstrated linearity (mM to nM range), reproducibility (intra-day < 5%, inter-day < 20%), sensitivity (low fmol) and selectivity. Here, we illustrate the rapidity and accuracy of the method through comparison with conventional liquid chromatography-mass spectrometry methods. We further demonstrate the robustness and sensitivity of this method on a diverse range of biological matrices. Using this method we were able to selectively discriminate murine pancreatic cancer cells with and without knocked-down expression of Hypoxia Inducible Factor 1α; plasma, lymph and bronchoalveolar lavage fluid samples from control versus hemorrhaged rats; and muscle tissue samples harvested from rats subjected to both low-fat and high-fat diets. Furthermore, we were able to exploit the sensitivity of the method to detect and quantify the release of glutamate from sparsely isolated murine taste buds. Spiked-in light or heavy standards (¹³C₆-arginine, ¹³C₆-lysine, ¹³C₅¹⁵N₂-glutamine) or xenometabolites were used to determine coefficients of variation, confirm linearity of relative quantitation in four different matrices, and overcome matrix effects for absolute quantitation. The presented method enables high-throughput analysis of low-abundance samples, requiring only one percent of the material extracted from 100,000 cells, 10 μl of biological fluid, or 2 mg of muscle tissue. PMID:26058356

  9. Dynamic analysis of process reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shadle, L.J.; Lawson, L.O.; Noel, S.D.

    1995-06-01

    The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas{trademark} gasification process is used to illustrate the utility of this approach. PyGas{trademark} is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas{trademark} gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.

  10. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP, using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which the simulated ON-N and NH₃-N concentrations were sensitive, whereas the ON-N and NO₃-N simulations were sensitive to the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C, as measured by global sensitivity analysis.

  11. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and the restricted spatial positions of the sources and receivers. Some distributions of the medium properties have little or no impact on the data. If these properties are used as the inversion parameters, the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and that the non-sensitive properties are spatial distributions of non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret, and enhances the results of the widely used radiation-pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. We also establish ways to quantify spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  12. CFD and Aeroelastic Analysis of the MEXICO Wind Turbine

    NASA Astrophysics Data System (ADS)

    Carrión, M.; Woodgate, M.; Steijl, R.; Barakos, G.; Gómez-Iradi, S.; Munduate, X.

    2014-12-01

    This paper presents an aerodynamic and aeroelastic analysis of the MEXICO wind turbine, using the compressible HMB solver of Liverpool. The aeroelasticity of the blade, as well as the effect of a low-Mach scheme, was studied for the zero-yaw 15 m/s wind case using steady-state computations. The wake developed behind the rotor was also extracted and compared with the experimental data, using the compressible solver and a low-Mach scheme. It was found that the loads were not sensitive to Mach number effects, although the low-Mach scheme improved the wake predictions. The sensitivity of the results to the blade structural properties was also highlighted.

  13. Sensitive analysis of blonanserin, a novel antipsychotic agent, in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ogawa, Tadashi; Hattori, Hideki; Kaneko, Rina; Ito, Kenjiro; Iwai, Masayo; Mizutani, Yoko; Arinobu, Tetsuya; Ishii, Akira; Suzuki, Osamu; Seno, Hiroshi

    2010-01-01

    A rapid and sensitive method for the analysis of blonanserin in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry is presented. After pretreatment of a plasma sample by solid-phase extraction, blonanserin was analyzed by the system with a C(18) column. The method gave satisfactory recovery rates and reproducibility, and good linearity of the calibration curve in the range of 0.01-10.0 ng/mL for quality control samples spiked with blonanserin. The detection limit was as low as 1 pg/mL. This method appears very useful in forensic and clinical toxicology and in pharmacokinetic studies.

  14. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.
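The cost argument for adjoints can be made concrete with a toy example. The diagonal "solver" below is a hypothetical stand-in for a CFD simulation, not NASA Langley's actual machinery: one adjoint solve yields the derivative of the output with respect to every parameter at once, while finite differences need one extra forward solve per parameter:

```python
# Toy adjoint sensitivity: output J = sum(c_i * u_i), where the "simulation"
# is the diagonal system a_i * u_i = b_i. dJ/da_i = -lam_i * u_i, with the
# adjoint lam solving A^T lam = c. One adjoint solve -> the whole gradient.
a = [2.0, 4.0, 5.0]          # design parameters (diagonal "stiffness")
b = [1.0, 2.0, 3.0]          # right-hand side ("loads")
c = [1.0, 1.0, 1.0]          # output weights

u   = [bi / ai for ai, bi in zip(a, b)]             # forward solve
lam = [ci / ai for ai, ci in zip(a, c)]             # adjoint solve
grad_adjoint = [-li * ui for li, ui in zip(lam, u)]

# Finite-difference check: one extra forward solve per parameter.
def J(a_vec):
    return sum(ci * bi / ai for ai, bi, ci in zip(a_vec, b, c))

h = 1e-6
grad_fd = []
for i in range(len(a)):
    ap = list(a); ap[i] += h
    grad_fd.append((J(ap) - J(a)) / h)

print([round(g, 4) for g in grad_adjoint])   # [-0.25, -0.125, -0.12]
```

For three parameters the saving is trivial; for the millions of shape variables in a large-scale CFD design problem it is the difference between one extra simulation and millions.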

  15. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  16. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.

  17. Non-volatile analysis in fruits by laser resonant ionization spectrometry: application to resveratrol (3,5,4'-trihydroxystilbene) in grapes

    NASA Astrophysics Data System (ADS)

    Montero, C.; Orea, J. M.; Soledad Muñoz, M.; Lobo, R. F. M.; González Ureña, A.

    A technique coupling laser desorption (LD) with resonance-enhanced multiphoton ionisation (REMPI) and time-of-flight mass spectrometry (TOFMS) for trace analysis of non-volatile compounds is presented. Its essential features are: (a) an enhanced desorption yield due to the mixing of metal powder with the analyte during sample preparation; (b) high resolution, great sensitivity, and a low detection limit due to laser resonant ionisation and mass-spectrometric detection. Application to the resveratrol content of grapes demonstrated the capability of the analytical method, with a sensitivity of 0.2 pg per single laser shot and a detection limit of 5 ppb.

  18. Fast computation of derivative based sensitivities of PSHA models via algorithmic differentiation

    NASA Astrophysics Data System (ADS)

    Leövey, Hernan; Molkenthin, Christian; Scherbaum, Frank; Griewank, Andreas; Kuehn, Nicolas; Stafford, Peter

    2015-04-01

    Probabilistic seismic hazard analysis (PSHA) is the preferred tool for estimating the potential ground-shaking hazard due to future earthquakes at a site of interest. A modern PSHA represents a complex framework which combines different models with possibly many inputs. Sensitivity analysis is a valuable tool for quantifying changes of a model output as inputs are perturbed, identifying critical input parameters, and obtaining insight into the model behavior. Differential sensitivity analysis relies on calculating first-order partial derivatives of the model output with respect to its inputs. Moreover, derivative-based global sensitivity measures (Sobol' & Kucherenko '09) can be used in practice to detect non-essential inputs of the models, thus restricting the focus of attention to a possibly much smaller set of inputs. Nevertheless, obtaining first-order partial derivatives of complex models with traditional approaches can be very challenging, and the computational cost usually increases linearly with the number of inputs appearing in the models. In this study we show how Algorithmic Differentiation (AD) tools can be used in a complex framework such as PSHA to successfully estimate derivative-based sensitivities, as is done in various other domains such as meteorology or aerodynamics, with no significant increase in the computational cost of the original computations. First we demonstrate the feasibility of the AD methodology by comparing AD-derived sensitivities to analytically derived sensitivities for a basic PSHA case using a simple ground-motion prediction equation. In a second step, we derive sensitivities via AD for a more complex PSHA study using a ground-motion attenuation relation based on a stochastic method for simulating strong motion. The presented approach is general enough to accommodate more advanced PSHA studies of higher complexity.
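Forward-mode algorithmic differentiation of the kind applied here can be sketched with dual numbers. The toy ground-motion relation below uses hypothetical coefficients, not a published GMPE, and the `Dual` class is a deliberately minimal illustration of how AD propagates exact derivatives through ordinary arithmetic:

```python
import math

class Dual:
    """Minimal forward-mode AD value: carries (val, dot) through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def dlog(x):
    """log rule: d/dx log(x) = 1/x."""
    return Dual(math.log(x.val), x.dot / x.val)

# Toy attenuation relation (coefficients invented): ln(Y) = a + b*M - c*ln(R)
def ln_y(M, R, a=1.0, b=1.5, c=1.2):
    return a + b * M - c * dlog(R)

# Seed magnitude M with dot=1 to get d ln(Y)/dM; analytically this equals b.
g = ln_y(Dual(6.0, 1.0), Dual(20.0))
print(g.dot)   # 1.5
```

Real AD tools apply exactly this chain-rule bookkeeping to whole simulation codes, which is why the derivative cost stays close to the cost of the original evaluation.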

  19. Cost-effectiveness of breech version by acupuncture-type interventions on BL 67, including moxibustion, for women with a breech foetus at 33 weeks gestation: a modelling approach.

    PubMed

    van den Berg, Ineke; Kaandorp, Guido C; Bosch, Johanna L; Duvekot, Johannes J; Arends, Lidia R; Hunink, M G Myriam

    2010-04-01

    To assess, using a modelling approach, the effectiveness and costs of breech version with acupuncture-type interventions on BL67 (BVA-T), including moxibustion, compared to expectant management for women with a foetal breech presentation at 33 weeks gestation. A decision tree was developed to predict the number of caesarean sections prevented by BVA-T compared to expectant management to rectify breech presentation. The model accounted for external cephalic versions (ECV), treatment compliance, and costs for 10,000 simulated breech presentations at 33 weeks gestational age. Event rates were taken from Dutch population data and the international literature, and the relative effectiveness of BVA-T was based on a specific meta-analysis. Sensitivity analyses were conducted to evaluate the robustness of the results. We calculated percentages of breech presentations at term, caesarean sections, and costs from the third-party payer perspective. Odds ratios (OR) and cost differences of BVA-T versus expectant management were calculated. (Probabilistic) sensitivity analysis and expected value of perfect information analysis were performed. The simulated outcomes demonstrated 32% breech presentations after BVA-T versus 53% with expectant management (OR 0.61, 95% CI 0.43, 0.83). The percentage caesarean section was 37% after BVA-T versus 50% with expectant management (OR 0.73, 95% CI 0.59, 0.88). The mean cost-savings per woman was euro 451 (95% CI euro 109, euro 775; p=0.005) using moxibustion. Sensitivity analysis showed that if 16% or more of women offered moxibustion complied, it was more effective and less costly than expectant management. To prevent one caesarean section, 7 women had to use BVA-T. The expected value of perfect information from further research was euro 0.32 per woman.
The results suggest that offering BVA-T to women with a breech foetus at 33 weeks gestation reduces the number of breech presentations at term, thus reducing the number of caesarean sections, and is cost-effective compared to expectant management, including external cephalic version. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
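The headline figures above follow from simple decision-tree arithmetic. The sketch below recomputes the number needed to treat from the reported caesarean rates (37% vs 50%), assuming nothing beyond those point estimates; the per-caesarean saving is a derived illustration, not a figure from the paper:

```python
# Decision-tree arithmetic from the rates reported in the abstract:
# 37% caesarean sections after BVA-T vs 50% under expectant management.
p_cs_bvat, p_cs_expect = 0.37, 0.50

risk_diff = p_cs_expect - p_cs_bvat      # absolute risk reduction
nnt = 1 / risk_diff                      # women treated per caesarean avoided

# The reported mean saving of euro 451 per woman implies, per caesarean avoided:
saving_per_cs_avoided = 451 * nnt

print(round(nnt, 1))                     # 7.7 (the abstract rounds this to 7)
```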

  20. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  1. Three Dimensional Parametric Analyses of Stress Concentration Factor and Its Mitigation in Isotropic and Orthotropic Plate with Central Circular Hole Under Axial In-Plane Loading

    NASA Astrophysics Data System (ADS)

    Nagpal, Shubhrata; Jain, Nitin Kumar; Sanyal, Shubhashis

    2016-01-01

    The problem of finding the stress concentration factor of a loaded rectangular plate has presented considerable analytical difficulty. The present work focuses on understanding the behavior of isotropic and orthotropic plates subjected to static in-plane loading, using the finite element method. The complete plate model configuration was analyzed with the finite-element-based software ANSYS. Two parameters were varied for the analysis of the stress concentration factor (SCF) and its mitigation: the plate thickness-to-width ratio (T/A) and the hole diameter-to-width ratio (D/A). Plates of five different materials were considered in the complete analysis to determine the sensitivity of the stress concentration factor. The D/A ratio was varied from 0.1 to 0.7 for the analysis of the SCF, and from 0.1 to 0.5 for analyzing its mitigation; T/A ratios of 0.01, 0.05, and 0.1 were considered in all cases. The results are presented in graphical form and discussed. The reported mitigation in SCF is very encouraging. The SCF is more sensitive to the D/A ratio than to T/A.

  2. Comparison of measured and calculated forces on the RE-1000 free-piston Stirling engine displacer

    NASA Technical Reports Server (NTRS)

    Schreiber, Jeffrey G.

    1987-01-01

    The NASA Lewis Research Center has tested a 1 kW free-piston Stirling engine at the NASA Lewis test facilities. The tests performed over the past several years on the RE-1000 single cylinder engine are known as the sensitivity tests. This report presents an analysis of some of the data published in the sensitivity test report. A basic investigation into the measured forces acting on the unconstrained displacer of the engine is presented. These measured forces are then correlated with the values predicted by the NASA Lewis Stirling engine computer simulation. The results of the investigation are presented in the form of phasor diagrams. Possible future work resulting from this investigation is outlined.

  3. Sensitivity Analysis of the Sheet Metal Stamping Processes Based on Inverse Finite Element Modeling and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Yu, Maolin; Du, R.

    2005-08-01

    Sheet metal stamping is one of the most commonly used manufacturing processes, and hence much research has been carried out on it for economic gain. A search of the literature, however, shows that many problems remain unsolved. For example, it is well known that for the same press, the same workpiece material, and the same set of dies, product quality may vary owing to a number of factors, such as inhomogeneity of the workpiece material, loading error, and lubrication. At present, few methods can predict this quality variation, let alone identify what contributes to it. As a result, trial-and-error is still needed on the shop floor, causing additional cost and time delay. This paper introduces a new approach to predict product quality variation and identify the sensitive design/process parameters. The new approach is based on a combination of inverse Finite Element Modeling (FEM) and Monte Carlo simulation (more specifically, the Latin Hypercube Sampling (LHS) approach). With acceptable accuracy, the inverse FEM (also called one-step FEM) requires much less computation than the usual incremental FEM, and hence can be used to predict quality variations under various conditions. LHS is a statistical method through which the sensitivity analysis can be carried out. The result of the sensitivity analysis has a clear physical meaning and can be used to optimize the die design and/or the process design. Two simulation examples are presented: drawing a rectangular box and drawing a two-step rectangular box.
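The Latin Hypercube Sampling step can be sketched in a few lines. This is a generic LHS over the unit hypercube, not the authors' implementation; mapping each coordinate to a physical parameter range (material scatter, load error, friction) would be a separate scaling step:

```python
import random

random.seed(0)

def latin_hypercube(n, dims):
    """n points in [0,1)^dims with exactly one point per 1/n stratum in every
    dimension: a minimal LHS sketch, not production sampling code."""
    cols = []
    for _ in range(dims):
        strata = list(range(n))
        random.shuffle(strata)             # random pairing across dimensions
        cols.append([(s + random.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]

pts = latin_hypercube(10, 3)

# Each dimension's samples occupy all 10 strata exactly once.
for d in range(3):
    assert sorted(int(p[d] * 10) for p in pts) == list(range(10))
print(len(pts))   # 10
```

Compared with plain Monte Carlo, this stratification covers each input's range with far fewer FEM runs, which is what makes the sensitivity study affordable.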

  4. Application of sensitivity analysis for assessment of de-desertification alternatives in the central Iran by using Triantaphyllou method.

    PubMed

    Sadeghi Ravesh, Mohammad Hassan; Ahmadi, Hassan; Zehtabian, Gholamreza

    2011-08-01

    Desertification, i.e., land degradation in arid, semi-arid, and dry sub-humid regions, is a global environmental problem. Given the increasing importance of desertification and its complexity, attention to optimal de-desertification alternatives is essential. This work therefore presents an analytic hierarchy process (AHP) method for objectively selecting optimal de-desertification alternatives, based on the results of interviews with experts in the Khezr Abad region of central Iran as a case study. The model was applied in the Yazd Khezr Abad region to evaluate its efficiency in identifying the alternatives best suited to personal and environmental conditions. The results indicate that the criterion "proportion and adaptation to the environment", with a weighted average of 33.6%, is the most important criterion from the experts' viewpoint, while prevention of unsuitable land use and land conversion (22.88% mean weight) and development and reclamation of vegetation cover (21.9% mean weight) are recognized as the most important de-desertification alternatives in the region. Finally, a detailed sensitivity analysis was performed by varying the objective-factor decision weight, the priority weights of the subjective factors, and the gain factors. After the sensitivity analysis and determination of the most sensitive criteria and alternatives, the earlier classification and ranking of alternatives did not change appreciably: the unsuitable-land-use alternative, with a preference degree of 22.7%, remained first in priority. The final priority of the livestock grazing control alternative was exchanged with that of the alternative of modifying groundwater harvesting.

  5. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache.

    PubMed

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. Pain sensitivity significantly mediated the relationship between stress and headache intensity. The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed.

  6. 7 CFR 1710.303 - Power cost studies-power supply borrowers.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... contracts or revisions to existing contracts, and an analysis of the effects on power costs; (4) Use of sensitivity analyses to determine the vulnerability of the alternatives to a reasonable range of assumptions... conservation alternatives as set forth in §§ 1710.253 and 1710.254; (2) A present-value analysis of the costs...

  7. An interferometric imaging biosensor using weighted spectrum analysis to confirm DNA monolayer films with attogram sensitivity.

    PubMed

    Fu, Rongxin; Li, Qi; Wang, Ruliang; Xue, Ning; Lin, Xue; Su, Ya; Jiang, Kai; Jin, Xiangyu; Lin, Rongzan; Gan, Wupeng; Lu, Ying; Huang, Guoliang

    2018-05-01

    Interferometric imaging biosensors are powerful and convenient tools for confirming the existence of DNA monolayer films on silicon microarray platforms. However, their accuracy and sensitivity need further improvement, because DNA molecules contribute an inconspicuous interferometric signal in both thickness and size. These weaknesses result in poor performance of such biosensors in low-DNA-content analyses and point-mutation tests. In this paper, an interferometric imaging biosensor with weighted spectrum analysis is presented to confirm DNA monolayer films. The interferometric signal of the DNA molecules can be extracted, and quantitative detection results for DNA microarrays can then be reconstructed. With the proposed strategy, the relative error of thickness detection was reduced from 88.94% to merely 4.15%. The mass sensitivity per unit area of the proposed biosensor reached 20 attograms (ag). The sample consumption per unit area of the target DNA content was therefore only 62.5 zeptomoles (zm), in a volume of 0.25 picolitres (pL). Compared with fluorescence resonance energy transfer (FRET), the measurement accuracy of the interferometric imaging biosensor with weighted spectrum analysis is insensitive to changes in spotting concentration and DNA length. The detection range was more than 1 µm. Moreover, single-nucleotide mismatches could be identified when combined with specific DNA ligation. A mutation experiment for lung cancer detection proved the high selectivity and accurate analysis capability of the presented biosensor. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. On understanding the relationship between structure in the potential surface and observables in classical dynamics: A functional sensitivity analysis approach

    NASA Astrophysics Data System (ADS)

    Judson, Richard S.; Rabitz, Herschel

    1987-04-01

    The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components is derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), η being the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine the regions of highest potential surface sensitivity. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross.
The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.

  9. Guide to analyzing investment options using TWIGS.

    Treesearch

    Charles R Blinn; Dietmar W. Rose; Monique L. Belli

    1988-01-01

    Describes methods for analyzing economic return of simulated stand management alternatives in TWIGS. Defines and discusses net present value, equivalent annual income, soil expectation value, and real vs. nominal analyses. Discusses risk and sensitivity analysis when comparing alternatives.
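The economic measures defined in this guide reduce to short discounting formulas. The sketch below computes net present value and equivalent annual income for a small cash-flow stream; the flows and discount rate are invented for illustration and are not TWIGS output:

```python
# Net present value and equivalent annual income for a stand-management
# cash-flow stream (illustrative numbers only).
def npv(rate, cash_flows):
    """cash_flows[t] occurs at end of year t; t=0 is the initial outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def eai(rate, cash_flows):
    """Spread the NPV over the horizon as a level annual payment."""
    n = len(cash_flows) - 1                      # years after year 0
    annuity_factor = (1 - (1 + rate) ** -n) / rate
    return npv(rate, cash_flows) / annuity_factor

flows = [-100.0, 60.0, 60.0]        # plant now, revenue in years 1 and 2
print(round(npv(0.10, flows), 2))   # 4.13
```

A sensitivity analysis in this setting amounts to re-evaluating `npv` over a range of discount rates or modified cash flows and comparing how the ranking of alternatives shifts.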

  10. 32 CFR 701.117 - Changes to PA systems of records.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... wishing to create a new PA system of records must conduct a risk analysis of the proposed system to consider the sensitivity and use of the records; present and projected threats and vulnerabilities; and...

  11. ECONOMICS AND APPRAISAL OF CONVENTIONAL OIL AND GAS IN THE WESTERN GULF OF MEXICO.

    USGS Publications Warehouse

    Attanasi, E.D.; Haynes, John L.

    1984-01-01

    The oil and gas industry frequently appraises undiscovered oil and gas resources on a regional basis to decide whether to start or continue exploration programs. The appraisals are of little value unless conditioned by estimates of the costs of finding and producing the resources. This paper presents an economic appraisal of undiscovered oil and gas resources in the western Gulf of Mexico. Also presented are a description of the model used to make the assessment, results of a sensitivity analysis, and a discussion of the implications of the results to the industry. The appraisal is shown to be relatively robust to changes in physical and engineering assumptions. Because the number of commercial discoveries was found to be quite sensitive to economic conditions, the analysis has important implications in terms of forecasting future industry drilling and other associated activities in the western Gulf of Mexico.

  12. Plans for a sensitivity analysis of bridge-scour computations

    USGS Publications Warehouse

    Dunn, David D.; Smith, Peter N.

    1993-01-01

    Plans for an analysis of the sensitivity of Level 2 bridge-scour computations are described. Cross-section data from 15 bridge sites in Texas are modified to reflect four levels of field effort ranging from no field surveys to complete surveys. Data from United States Geological Survey (USGS) topographic maps will be used to supplement incomplete field surveys. The cross sections are used to compute the water-surface profile through each bridge for several T-year recurrence-interval design discharges. The effect of determining the downstream energy grade-line slope from topographic maps is investigated by systematically varying the starting slope of each profile. The water-surface profile analyses are then used to compute potential scour resulting from each of the design discharges. The planned results will be presented in the form of exceedance-probability versus scour-depth plots with the maximum and minimum scour depths at each T-year discharge presented as error bars.

  13. Sensitive Multi-Species Emissions Monitoring: Infrared Laser-Based Detection of Trace-Level Contaminants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steill, Jeffrey D.; Huang, Haifeng; Hoops, Alexandra A.

    This report summarizes our development of spectroscopic chemical analysis techniques and spectral modeling for trace-gas measurements of highly-regulated low-concentration species present in flue gas emissions from utility coal boilers, such as HCl under conditions of high humidity. Detailed spectral modeling of the spectroscopy of HCl and other important combustion and atmospheric species such as H₂O, CO₂, N₂O, NO₂, SO₂, and CH₄ demonstrates that IR-laser spectroscopy is a sensitive multi-component analysis strategy. Experimental measurements from techniques based on IR laser spectroscopy are presented that demonstrate sub-ppm sensitivity levels to these species. Photoacoustic infrared spectroscopy is used to detect and quantify HCl at ppm levels with extremely high signal-to-noise even under conditions of high relative humidity. Additionally, cavity ring-down IR spectroscopy is used to achieve an extremely high sensitivity to combustion trace gases in this spectral region; ppm-level CH₄ is one demonstrated example. The importance of spectral resolution in the sensitivity of a trace-gas measurement is examined by spectral modeling in the mid- and near-IR, and efforts to improve measurement resolution through novel instrument development are described. While previous project reports focused on benefits and complexities of the dual-etalon cavity ring-down infrared spectrometer, here details on steps taken to implement this unique and potentially revolutionary instrument are described. This report also illustrates and critiques the general strategy of IR-laser photodetection of trace gases, leading to the conclusion that mid-IR laser spectroscopy techniques provide a promising basis for further instrument development and implementation that will enable cost-effective sensitive detection of multiple key contaminant species simultaneously.

  14. Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method

    NASA Astrophysics Data System (ADS)

    Ciszkiewicz, A.; Knapczyk, J.

    2015-08-01

    Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.

  15. The Effects of Instrumentation on Urine Cytology and CK-20 Analysis for the Detection of Bladder Cancer.

    PubMed

    Wegelin, Olivier; Bartels, Diny W M; Tromp, Ellen; Kuypers, Karel C; van Melick, Harm H E

    2015-10-01

    To evaluate the effects of cystoscopy on urine cytology and additional cytokeratin-20 (CK-20) staining in patients presenting with gross hematuria. For 83 patients presenting with gross hematuria, spontaneous and instrumented paired urine samples were analyzed. Three patients were excluded. Spontaneous samples were collected within 1 hour before cystoscopy, and the instrumented samples were tapped through the cystoscope. Subsequently, patients underwent cystoscopic evaluation and imaging of the urinary tract. If tumor suspicious lesions were found on cystoscopy or imaging, subjects underwent transurethral resection or ureterorenoscopy. Two blinded uropathological reviewers (DB, KK) evaluated 160 urine samples. Reference standards were results of cystoscopy, imaging, or histopathology. Thirty-seven patients (46.3%) underwent transurethral resection or ureterorenoscopy procedures. In 30 patients (37.5%) tumor presence was confirmed by histopathology. The specificity of urine analysis was significantly higher for spontaneous samples than instrumented samples for both cytology alone (94% vs 72%, P = .01) and for cytology combined with CK-20 analysis (98% vs 84%, P = .02). The difference in sensitivity between spontaneous and instrumented samples was not significant for both cytology alone (40% vs 53%) and combined with CK-20 analysis (67% vs 67%). The addition of CK-20 analysis to cytology significantly increases test sensitivity in spontaneous urine cytology (67% vs 40%, P = .03). Instrumentation significantly decreases specificity of urine cytology. This may lead to unnecessary diagnostic procedures. Additional CK-20 staining in spontaneous urine cytology significantly increases sensitivity but did not improve the already high specificity. We suggest performing urine cytology and CK-20 analysis on spontaneously voided urine. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Stereometric parameters change vs. Topographic Change Analysis (TCA) agreement in Heidelberg Retina Tomography III (HRT-3) early detection of clinically significant glaucoma progression.

    PubMed

    Dascalu, A M; Cherecheanu, A P; Stana, D; Voinea, L; Ciuluvica, R; Savlovschi, C; Serban, D

    2014-01-01

    To investigate the sensitivity and specificity of stereometric parameter change analysis vs. Topographic Change Analysis in the early detection of glaucoma progression. 81 patients with POAG were monitored for 4 years (GAT monthly, SAP every 6 months, optic disc photographs and HRT3 yearly). The exclusion criteria were other optic disc or retinal pathology, topographic standard deviation (TSD) > 30, and inter-test variation of reference height > 25 μm. The criterion for structural progression was the following: at least 20 adjacent super-pixels with a clinically significant decrease in height (>5%). 16 of the 81 patients presented structural progression on TCA. The most useful stereometric parameters for the early detection of glaucoma progression were the following: Rim Area change (sensitivity 100%, specificity 74.2% for a cut-off value of -0.05), C/D Area change (sensitivity 85.7%, specificity 71.5% for a cut-off value of 0.02), C/D linear change (sensitivity 85.7%, specificity 71.5% for a cut-off value of 0.02), and Rim Volume change (sensitivity 71.4%, specificity 88.8% for a cut-off value of -0.04). RNFL Thickness change (<0) was highly sensitive (82%) but less specific for glaucoma progression (45.2%). Changes in the other stereometric parameters have limited diagnostic value for the early detection of glaucoma progression. TCA is a valuable tool for the assessment of structural progression in glaucoma patients, and its inter-test variability is low. In the long term, quantitative analysis of stereometric parameter change is also very important. The most relevant parameters for detecting progression are RA, C/D Area, Linear C/D and RV.

  17. Estimating the neutrally buoyant energy density of a Rankine-cycle/fuel-cell underwater propulsion system

    NASA Astrophysics Data System (ADS)

    Waters, Daniel F.; Cadou, Christopher P.

    2014-02-01

    A unique requirement of underwater vehicles' power/energy systems is that they remain neutrally buoyant over the course of a mission. Previous work published in the Journal of Power Sources reported gross, as opposed to neutrally buoyant, energy densities of an integrated solid oxide fuel cell/Rankine-cycle power system based on the exothermic reaction of aluminum with seawater. This paper corrects this shortcoming by presenting a model for estimating system mass and using it to update the key findings of the original paper in the context of the neutral buoyancy requirement. It also presents an expanded sensitivity analysis to illustrate the influence of various design and modeling assumptions. While energy density is very sensitive to turbine efficiency (sensitivity coefficient in excess of 0.60), it is relatively insensitive to all other major design parameters (sensitivity coefficients < 0.15) such as compressor efficiency, inlet water temperature, and scaling methodology. The neutral buoyancy requirement introduces a significant (~15%) energy density penalty, but overall the system still appears to offer a factor of five to eight improvement in energy density (i.e., vehicle range/endurance) over present battery-based technologies.
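    The sensitivity coefficients quoted above are normalized (logarithmic) derivatives of the output with respect to a design parameter. A minimal finite-difference sketch, using a hypothetical power-law stand-in for the system model (not the paper's actual model):

```python
def normalized_sensitivity(f, x0, rel_step=1e-4):
    """Estimate the normalized sensitivity coefficient S = (x/f) * df/dx
    by central finite differences."""
    h = x0 * rel_step
    dfdx = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return x0 * dfdx / f(x0)

# Hypothetical stand-in: energy density as a power law in turbine efficiency.
# For f = C * x**a the normalized sensitivity is exactly a.
model = lambda eta: 1500.0 * eta ** 0.6
print(round(normalized_sensitivity(model, 0.75), 3))  # 0.6
```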

  18. Use of piezoelectric foil for flow diagnostics

    NASA Technical Reports Server (NTRS)

    Carraway, Debra L.; Bertelrud, Arild

    1989-01-01

    A laboratory investigation was conducted to characterize two piezoelectric-film sensor configurations, a rigidly mounted sensor and a sensor mounted over an air cavity. The sensors are evaluated for sensitivity and frequency response, and methods to optimize data are presented. The cavity-mounted sensor exhibited a superior frequency response and was more sensitive to normal pressure fluctuations and less sensitive to vibrations through the structure. Both configurations were sensitive to large-scale structural vibrations. Flight-test data are shown for cavity-mounted sensors, illustrating practical aspects to consider when designing sensors for application in such harsh environments. The relation of the data to skin friction and maximum shear stress, transition detection, and turbulent viscous layers is derived through analysis of the flight data.

  19. On determining important aspects of mathematical models: Application to problems in physics and chemistry

    NASA Technical Reports Server (NTRS)

    Rabitz, Herschel

    1987-01-01

    The use of parametric and functional gradient sensitivity analysis techniques is considered for models described by partial differential equations. By interchanging appropriate dependent and independent variables, questions of inverse sensitivity may be addressed to gain insight into the inversion of observational data for parameter and function identification in mathematical models. It may be argued that the presence of a subset of dominant, strongly coupled dependent variables will result in the overall system sensitivity behavior collapsing into a simple set of scaling and self-similarity relations amongst elements of the entire matrix of sensitivity coefficients. These tools are generic in nature, but herein their application to problems arising in selected areas of physics and chemistry is presented.

  20. Neural Correlates of Moral Sensitivity and Moral Judgment Associated with Brain Circuitries of Selfhood: A Meta-Analysis

    ERIC Educational Resources Information Center

    Han, Hyemin

    2017-01-01

    The present study meta-analyzed 45 experiments with 959 subjects and 463 activation foci reported in 43 published articles that investigated the neural mechanism of moral functions by comparing neural activity between the moral task conditions and non-moral task conditions with the Activation Likelihood Estimation method. The present study…

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Szymanski, J.J.; Amann, J.F.; Baker, K.

    The MEGA experiment is designed to search for the rare decay μ → eγ with a branching ratio sensitivity of ~5×10⁻¹³. Production data have been taken during 1992 and 1993, and the detector is working as expected. Following a complete analysis, the present data set should represent an improvement of 12-15 in sensitivity over the previous limit for μ → eγ. © 1995 American Institute of Physics.

  2. Theoretical considerations of some nonlinear aspects of hypersonic panel flutter

    NASA Technical Reports Server (NTRS)

    Mcintosh, S. C., Jr.

    1974-01-01

    A research project to analyze the effects of hypersonic nonlinear aerodynamic loading on panel flutter is reported. The test equipment and procedures for conducting the tests are explained. The effects of aerodynamic nonlinearities on stability were evaluated by determining constant-initial-energy amplitude-sensitive stability boundaries and comparing them with the corresponding linear stability boundaries. An attempt to develop an alternative method of analysis for systems in which amplitude-sensitive instability is possible is presented.

  3. Integrated multidisciplinary design optimization using discrete sensitivity analysis for geometrically complex aeroelastic configurations

    NASA Astrophysics Data System (ADS)

    Newman, James Charles, III

    1997-10-01

    The first two steps in the development of an integrated multidisciplinary design optimization procedure capable of analyzing the nonlinear fluid flow about geometrically complex aeroelastic configurations have been accomplished in the present work. For the first step, a three-dimensional unstructured grid approach to aerodynamic shape sensitivity analysis and design optimization has been developed. The advantage of unstructured grids, when compared with a structured-grid approach, is their inherent ability to discretize irregularly shaped domains with greater efficiency and less effort. Hence, this approach is ideally suited for geometrically complex configurations of practical interest. In this work the time-dependent, nonlinear Euler equations are solved using an upwind, cell-centered, finite-volume scheme. The discrete, linearized systems which result from this scheme are solved iteratively by a preconditioned conjugate-gradient-like algorithm known as GMRES for the two-dimensional cases and a Gauss-Seidel algorithm for the three-dimensional cases; at steady state, similar procedures are used to solve the accompanying linear aerodynamic sensitivity equations in incremental iterative form. As shown, this particular form of the sensitivity equation makes large-scale gradient-based aerodynamic optimization possible by taking advantage of memory-efficient methods to construct exact Jacobian matrix-vector products. Various surface parameterization techniques have been employed in the current study to control the shape of the design surface. Once this surface has been deformed, the interior volume of the unstructured grid is adapted by considering the mesh as a system of interconnected tension springs. Grid sensitivities are obtained by differentiating the surface parameterization and the grid adaptation algorithms with ADIFOR, an advanced automatic-differentiation software tool.
To demonstrate the ability of this procedure to analyze and design complex configurations of practical interest, the sensitivity analysis and shape optimization have been performed for several two- and three-dimensional cases. In two dimensions, an initially symmetric NACA-0012 airfoil and a high-lift multielement airfoil were examined. For the three-dimensional configurations, an initially rectangular wing with uniform NACA-0012 cross-sections was optimized; in addition, a complete Boeing 747-200 aircraft was studied. Furthermore, the current study also examines the effect of inconsistency in the order of spatial accuracy between the nonlinear fluid and linear shape sensitivity equations. The second step was to develop a computationally efficient, high-fidelity, integrated static aeroelastic analysis procedure. To accomplish this, a structural analysis code was coupled with the aforementioned unstructured-grid aerodynamic analysis solver. The use of an unstructured grid scheme for the aerodynamic analysis enhances the interaction compatibility with the wing structure. The structural analysis utilizes finite elements to model the wing so that accurate structural deflections may be obtained. In the current work, parameters have been introduced to control the interaction of the computational fluid dynamics and structural analyses; these control parameters permit extremely efficient static aeroelastic computations. To demonstrate and evaluate this procedure, static aeroelastic analysis results for a flexible wing in low subsonic, high subsonic (subcritical), transonic (supercritical), and supersonic flow conditions are presented.

  4. Pulse analysis of acoustic emission signals. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Houghton, J. R.

    1976-01-01

    A method for the signature analysis of pulses in the frequency domain and the time domain is presented. The Fourier spectrum, Fourier transfer function, shock spectrum, and shock spectrum ratio are examined in the frequency-domain analysis, and pulse-shape deconvolution is developed for use in the time-domain analysis. To demonstrate the relative sensitivity of each of the methods to small changes in pulse shape, signatures of computer-modeled systems with analytical pulses are presented. Optimization techniques are developed and used to indicate the best design parameter values for deconvolution of the pulse shape. Several experiments are presented that test the pulse signature analysis methods on different acoustic emission sources, including acoustic emissions associated with: (1) crack propagation, (2) a ball dropping on a plate, (3) spark discharge, and (4) defective and good ball bearings.

  5. Rapid and sensitive detection of synthetic cannabinoids AMB-FUBINACA and α-PVP using surface enhanced Raman scattering (SERS)

    NASA Astrophysics Data System (ADS)

    Islam, Syed K.; Cheng, Yin Pak; Birke, Ronald L.; Green, Omar; Kubic, Thomas; Lombardi, John R.

    2018-04-01

    The application of surface-enhanced Raman scattering (SERS) is reported as a fast and sensitive analytical method for the trace detection of the two most commonly known synthetic cannabinoids, AMB-FUBINACA and alpha-pyrrolidinovalerophenone (α-PVP). AMB-FUBINACA and α-PVP are two of the most dangerous synthetic cannabinoids and have been reported to cause numerous deaths in the United States. While instruments such as GC-MS and LC-MS have traditionally been the analytical tools for the detection of these synthetic drugs, SERS has recently been gaining ground in their analysis due to its sensitivity in trace analysis and its effectiveness as a rapid method of detection. The present study demonstrates limits of detection at concentrations as low as picomolar for AMB-FUBINACA and nanomolar for α-PVP using SERS.

  6. The Einstein Observatory Extended Medium-Sensitivity Survey. I - X-ray data and analysis

    NASA Technical Reports Server (NTRS)

    Gioia, I. M.; Maccacaro, T.; Schild, R. E.; Wolter, A.; Stocke, J. T.

    1990-01-01

    This paper presents the results of the analysis of the X-ray data and the optical identification for the Einstein Observatory Extended Medium-Sensitivity Survey (EMSS). The survey consists of 835 serendipitous sources detected at or above 4 times the rms level in 1435 imaging proportional counter fields with centers located away from the Galactic plane. Their limiting sensitivities are about (5-300) × 10⁻¹⁴ ergs/sq cm/sec in the 0.3-3.5-keV energy band. A total area of 778 square degrees of the high-Galactic-latitude sky has been covered. The data have been analyzed using the REV1 processing system, which takes into account the nonuniformities of the detector. The resulting EMSS catalog of X-ray sources is a flux-limited and homogeneous sample of astronomical objects that can be used for statistical studies.

  7. Sensitivity analysis for axis rotation diagrid structural systems according to brace angle changes

    NASA Astrophysics Data System (ADS)

    Yang, Jae-Kwang; Li, Long-Yang; Park, Sung-Soo

    2017-10-01

    General regular-shaped diagrid structures can express diverse shapes because braces are installed along the exterior faces of the structures and the structures have no columns. However, since irregular-shaped structures involve diverse variables, studies assessing the behaviors resulting from these variables are continuously required to address the associated uncertainties. In the present study, the elastic modulus and yield strength of the materials were selected as strength variables for diagrid structural systems in the form of Twisters, one of the irregular-shaped building classes defined by Vollers, since they affect the structural design of these systems. The purpose of this study is to conduct sensitivity analysis of axial-rotation diagrid structural systems according to changes in brace angles, in order to identify the design variables that have relatively larger effects and the tendencies of the sensitivity of the structures according to changes in brace angles and axial rotation angles.

  8. Columnar aerosol properties over oceans by combining surface and aircraft measurements: sensitivity analysis.

    PubMed

    Zhang, T; Gordon, H R

    1997-04-20

    We report a sensitivity analysis for the algorithm presented by Gordon and Zhang [Appl. Opt. 34, 5552 (1995)] for inverting the radiance exiting the top and bottom of the atmosphere to yield the aerosol-scattering phase function [P(Θ)] and single-scattering albedo (ω₀). The study of the algorithm's sensitivity to radiometric calibration errors, mean-zero instrument noise, sea-surface roughness, the curvature of the Earth's atmosphere, the polarization of the light field, and incorrect assumptions regarding the vertical structure of the atmosphere indicates that the retrieved ω₀ has excellent stability even for very large values (~2) of the aerosol optical thickness; however, the error in the retrieved P(Θ) strongly depends on the measurement error and on the assumptions made in the retrieval algorithm. The retrieved phase functions in the blue are usually poor compared with those in the near infrared.

  9. Near-surface compressional and shear wave speeds constrained by body-wave polarization analysis

    NASA Astrophysics Data System (ADS)

    Park, Sunyoung; Ishii, Miaki

    2018-06-01

    A new technique to constrain near-surface seismic structure that relates body-wave polarization direction to the wave speed immediately beneath a seismic station is presented. The P-wave polarization direction is only sensitive to shear wave speed but not to compressional wave speed, while the S-wave polarization direction is sensitive to both wave speeds. The technique is applied to data from the High-Sensitivity Seismograph Network in Japan, and the results show that the wave speed estimates obtained from polarization analysis are compatible with those from borehole measurements. The lateral variations in wave speeds correlate with geological and physical features such as topography and volcanoes. The technique requires minimal computation resources, and can be used on any number of three-component teleseismic recordings, opening opportunities for non-invasive and inexpensive study of the shallowest (~100 m) crustal structures.

  10. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
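    The count-redistribution idea can be illustrated with a simplified sketch (an illustration of MNAR reallocation for a single visit, not the paper's closed-form covariance method; the function name and counts are hypothetical):

```python
def redistribute_missing(favorable, unfavorable, missing, frac_unfavorable):
    """Reallocate missing counts to the favorable/unfavorable outcomes.

    frac_unfavorable is the assumed share of missing subjects who would
    have had an unfavorable outcome; sweeping it over [0, 1] gives a
    tipping-point style sensitivity analysis for MNAR dichotomous data.
    """
    fav = favorable + missing * (1 - frac_unfavorable)
    unfav = unfavorable + missing * frac_unfavorable
    return fav / (fav + unfav)  # adjusted proportion with favorable outcome

# Hypothetical visit with 60 favorable, 30 unfavorable, 10 missing outcomes:
for frac in (0.0, 0.5, 1.0):
    print(round(redistribute_missing(60, 30, 10, frac), 2))  # 0.7, 0.65, 0.6
```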

  11. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
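    The sensitivity coefficients described above can be illustrated on the simplest kinetic model, first-order decay, where the derivative of the solution with respect to the rate coefficient is known analytically (a finite-difference sketch, not LSENS's implicit-integration machinery):

```python
import math

def concentration(k, t, y0=1.0):
    """Analytic solution of the first-order reaction A -> products: [A](t)."""
    return y0 * math.exp(-k * t)

def sensitivity_wrt_k(k, t, h=1e-6):
    """d[A]/dk estimated by central finite difference."""
    return (concentration(k + h, t) - concentration(k - h, t)) / (2 * h)

k, t = 0.5, 2.0
approx = sensitivity_wrt_k(k, t)
exact = -t * concentration(k, t)   # analytic: d[A]/dk = -t * y0 * exp(-k*t)
print(abs(approx - exact) < 1e-6)  # True
```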

  12. Detector Development for the MARE Neutrino Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galeazzi, M.; Bogorin, D.; Molina, R.

    2009-12-16

    The MARE experiment is designed to measure the mass of the neutrino with sub-eV sensitivity by measuring the beta decay of ¹⁸⁷Re with cryogenic microcalorimeters. A preliminary analysis shows that, to achieve the necessary statistics, between 10,000 and 50,000 detectors are likely necessary. We have fabricated and characterized iridium transition edge sensors with high reproducibility and uniformity for such a large scale experiment. We have also started a full scale simulation of the experimental setup for MARE, including thermalization in the absorber, detector response, and optimum filter analysis, to understand the issues related to reaching a sub-eV sensitivity and to optimize the design of the MARE experiment. We present our characterization of the Ir devices, including reproducibility, uniformity, and sensitivity, and we discuss the implementation and capabilities of our full scale simulation.

  13. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  14. Sensitivity analysis of the parameters of an HIV/AIDS model with condom campaign and antiretroviral therapy

    NASA Astrophysics Data System (ADS)

    Marsudi, Hidayat, Noor; Wibowo, Ratno Bagus Edy

    2017-12-01

    In this article, we present a deterministic model for the transmission dynamics of HIV/AIDS in which a condom campaign and antiretroviral therapy are both important for disease management. We calculate the effective reproduction number using the next generation matrix method and investigate the local and global stability of the disease-free equilibrium of the model. Sensitivity analysis of the effective reproduction number with respect to the model parameters was carried out. Our results show that the efficacy rate of the condom campaign, the transmission rate for contact with asymptomatic infectives, the progression rate from asymptomatic to pre-AIDS infectives, the transmission rate for contact with pre-AIDS infectives, the ARV therapy rate, the proportion of susceptibles receiving the condom campaign, and the proportion of pre-AIDS infectives receiving ARV therapy are highly sensitive parameters that affect the transmission dynamics of HIV/AIDS infection.
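    Parameter rankings of this kind are commonly obtained from the normalized forward sensitivity index Υ_p = (p/R) ∂R/∂p. A minimal sketch with a toy reproduction-number expression (the formula and parameter values are illustrative, not the article's model):

```python
def forward_sensitivity_index(R, params, name, rel_step=1e-6):
    """Normalized forward sensitivity index (p/R) * dR/dp by central differences."""
    p = params[name]
    h = p * rel_step
    up = dict(params, **{name: p + h})
    down = dict(params, **{name: p - h})
    dRdp = (R(up) - R(down)) / (2 * h)
    return p * dRdp / R(params)

# Toy effective reproduction number: R = beta * (1 - epsilon) / gamma.
R_eff = lambda q: q["beta"] * (1 - q["epsilon"]) / q["gamma"]
params = {"beta": 0.4, "epsilon": 0.6, "gamma": 0.2}
print(round(forward_sensitivity_index(R_eff, params, "beta"), 3))   # 1.0
print(round(forward_sensitivity_index(R_eff, params, "gamma"), 3))  # -1.0
```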

  15. Development of computer-aided design system of elastic sensitive elements of automatic metering devices

    NASA Astrophysics Data System (ADS)

    Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.

    2018-05-01

    The object of this research is the element base of control and automation system devices, including annular elastic sensitive elements, together with methods for their modeling, calculation algorithms, and software complexes for automating their design processes. The article is devoted to the development of a computer-aided design system for the elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, together with the results of static and dynamic analysis, the calculation of the elastic elements is given using the capabilities of modern numerical-simulation software systems. In the course of the simulation, the model was meshed with a hexagonal grid of finite elements with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.

  16. Global sensitivity analysis of groundwater transport

    NASA Astrophysics Data System (ADS)

    Cvetkovic, V.; Soltani, S.; Vigouroux, G.

    2015-12-01

    In this work we address the model and parametric sensitivity of groundwater transport using the Lagrangian-Stochastic Advection-Reaction (LaSAR) methodology. The 'attenuation index' is used as a relevant and convenient measure of the coupled transport mechanisms. The coefficients of variation (CV) for seven uncertain parameters are assumed to be between 0.25 and 3.5, the highest value being for the lower bound of the mass transfer coefficient k0. In almost all cases, the uncertainties in the macro-dispersion (CV = 0.35) and in the mass transfer rate k0 (CV = 3.5) are most significant. The global sensitivity analysis using Sobol and derivative-based indices yields consistent rankings of the significance of the different models and/or parameter ranges. The results presented here are generic; however, the proposed methodology can easily be adapted to specific conditions where uncertainty ranges in models and/or parameters can be estimated from field and/or laboratory measurements.
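    First-order Sobol indices of the kind used above can be estimated with a pick-and-freeze Monte Carlo scheme. A self-contained sketch on a toy additive model with known analytic indices (not the LaSAR transport model):

```python
import numpy as np

def sobol_first_order(f, d, n=200_000, seed=0):
    """Monte Carlo estimate of first-order Sobol indices for f on [0, 1]^d."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]            # freeze all inputs except the i-th
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy additive model Y = 3*X1 + X2 with Xi ~ U(0, 1).
# Analytic indices: S1 = 9/10 = 0.9, S2 = 1/10 = 0.1.
f = lambda X: 3.0 * X[:, 0] + X[:, 1]
print(sobol_first_order(f, 2))  # approximately [0.9, 0.1]
```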

  17. Temperature Compensation Fiber Bragg Grating Pressure Sensor Based on Plane Diaphragm

    NASA Astrophysics Data System (ADS)

    Liang, Minfu; Fang, Xinqiu; Ning, Yaosheng

    2018-06-01

    Pressure sensors are essential equipment in the field of pressure measurement. In this work, we propose a temperature-compensated fiber Bragg grating (FBG) pressure sensor based on a plane diaphragm. The plane diaphragm and a pressure-sensitive FBG (PS FBG) are used as the pressure-sensing components, and a temperature-compensation FBG (TC FBG) is used to mitigate temperature cross-sensitivity. A mechanical deformation model and a simulation analysis of the deformation characteristics of the diaphragm are presented. The measurement principle and a theoretical analysis of the mathematical relationship between the FBG central wavelength shift and the pressure applied to the sensor are introduced. The sensitivity and measurement range can be adjusted by using different diaphragm materials and sizes to accommodate different measurement environments. Performance experiments were carried out, and the results indicate that the pressure sensitivity of the sensor is 35.7 pm/MPa over a range from 0 MPa to 50 MPa, with good linearity (linear fitting correlation coefficient of 99.95%). In addition, the sensor has the advantages of low frequency chirp and high stability, and can be used to measure pressure in mining engineering, civil engineering, and other complex environments.
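    Given the reported linear calibration of 35.7 pm/MPa, converting a measured wavelength shift to pressure is a one-line calculation. A sketch; subtracting the TC FBG shift as the thermal correction is an assumed, commonly used compensation scheme, not a detail taken from the paper:

```python
def pressure_from_shift(delta_ps_pm, delta_tc_pm=0.0, sens_pm_per_mpa=35.7):
    """Convert FBG centre-wavelength shifts (in pm) to pressure (in MPa).

    delta_ps_pm: shift of the pressure-sensitive FBG
    delta_tc_pm: shift of the temperature-compensation FBG (assumed to
                 carry the thermal component, subtracted out)
    """
    return (delta_ps_pm - delta_tc_pm) / sens_pm_per_mpa

# A 714 pm net shift corresponds to 20 MPa at 35.7 pm/MPa:
print(round(pressure_from_shift(714.0), 1))  # 20.0
```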

  18. A total internal reflection-fluorescence correlation spectroscopy setup with pulsed diode laser excitation

    NASA Astrophysics Data System (ADS)

    Weger, Lukas; Hoffmann-Jacobsen, Kerstin

    2017-09-01

    Fluorescence correlation spectroscopy (FCS) measures fluctuations in a (sub-)femtoliter volume to analyze the diffusive behavior of fluorescent particles. This highly sensitive method has proven useful for the analysis of dynamic biological systems as well as in chemistry, physics, and materials science. It is routinely performed with commercial fluorescence microscopes, which provide a confined observation volume by the confocal technique. The evanescent wave of total internal reflection (TIR) is used in home-built systems to permit surface-sensitive FCS analysis. We present a combined confocal and TIR-FCS setup which uses economical low-power pulsed diode lasers for excitation. Excitation and detection are coupled to time-correlated photon counting hardware. This allows simultaneous fluorescence lifetime and FCS measurements in a surface-sensitive mode. Moreover, the setup supports fluorescence lifetime correlation spectroscopy at surfaces. The excitation can easily be switched between TIR and epi-illumination to compare the surface properties with those in the liquid bulk. The capabilities of the presented setup are demonstrated by measuring the diffusion coefficients of a free dye molecule, a labeled polyethylene glycol, and a fluorescent nanoparticle in confocal as well as in TIR-FCS mode.

  19. It's Nolan Ryan: A Historiography Teaching Technique.

    ERIC Educational Resources Information Center

    Mackey, Thomas

    1991-01-01

    Presents a plan for teaching historiography through analysis of baseball cards. Explains that students can learn about society, culture, discrimination, and inference. Reports that the lesson increased student interest, motivation, and sensitivity to the importance of historical sources. (DK)

  20. From web search to healthcare utilization: privacy-sensitive studies from mobile data.

    PubMed

    White, Ryen; Horvitz, Eric

    2013-01-01

    We explore relationships between health information seeking activities and engagement with healthcare professionals via a privacy-sensitive analysis of geo-tagged data from mobile devices. We analyze logs of mobile interaction data stripped of individually identifiable information and location data. The data analyzed consist of time-stamped search queries and distances to medical care centers. We examine search activity that precedes the observation of salient evidence of healthcare utilization (EHU) (i.e., data suggesting that the searcher is using healthcare resources), in our case taken as queries occurring at or near medical facilities. We show that the time between symptom searches and the observation of salient evidence of healthcare utilization depends on the acuity of the symptoms. We construct statistical models that predict forthcoming EHU based on observations about the current search session, prior medical search activities, and prior EHU. The predictive accuracy of the models varies (65%-90%) depending on the features used and the timeframe of the analysis, which we explore via a sensitivity analysis. We provide a privacy-sensitive analysis that can be used to generate insights about the pursuit of health information and healthcare. The findings demonstrate how large-scale studies of mobile devices can provide insights into how concerns about symptomatology lead to the pursuit of professional care. We present new methods for the analysis of mobile logs and describe a study that provides evidence about how people transition from mobile searches on symptoms and diseases to the pursuit of healthcare in the world.

  1. Sensitivity analysis for aeroacoustic and aeroelastic design of turbomachinery blades

    NASA Technical Reports Server (NTRS)

    Lorence, Christopher B.; Hall, Kenneth C.

    1995-01-01

    A new method for computing the effect that small changes in the airfoil shape and cascade geometry have on the aeroacoustic and aeroelastic behavior of turbomachinery cascades is presented. The nonlinear unsteady flow is assumed to be composed of a nonlinear steady flow plus a small perturbation unsteady flow that is harmonic in time. First, the full potential equation is used to describe the behavior of the nonlinear mean (steady) flow through a two-dimensional cascade. The small disturbance unsteady flow through the cascade is described by the linearized Euler equations. Using rapid distortion theory, the unsteady velocity is split into a rotational part that contains the vorticity and an irrotational part described by a scalar potential. The unsteady vorticity transport is described analytically in terms of the drift and stream functions computed from the steady flow. Hence, the solution of the linearized Euler equations may be reduced to a single inhomogeneous equation for the unsteady potential. The steady flow and small disturbance unsteady flow equations are discretized using bilinear quadrilateral isoparametric finite elements. The nonlinear mean flow solution and streamline computational grid are computed simultaneously using Newton iteration. At each step of the Newton iteration, LU decomposition is used to solve the resulting set of linear equations. The unsteady flow problem is linear, and is also solved using LU decomposition. Next, a sensitivity analysis is performed to determine the effect small changes in cascade and airfoil geometry have on the mean and unsteady flow fields. The sensitivity analysis makes use of the nominal steady and unsteady flow LU decompositions so that no additional matrices need to be factored. Hence, the present method is computationally very efficient. To demonstrate how the sensitivity analysis may be used to redesign cascades, a compressor is redesigned for improved aeroelastic stability and two different fan exit guide vanes are redesigned for reduced downstream radiated noise. In addition, a framework detailing how the two-dimensional version of the method may be used to redesign three-dimensional geometries is presented.
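    The factorization-reuse idea can be sketched generically: once the nominal system is factored, each design sensitivity costs only one extra right-hand-side solve with the same matrix. A minimal numpy illustration with made-up matrices (not the authors' potential-flow discretization):

```python
import numpy as np

# Toy stand-in for the factorization-reuse idea: with A x = b the nominal
# discretized system, the sensitivity of x to a design parameter p satisfies
#     A (dx/dp) = db/dp - (dA/dp) x,
# i.e. the SAME matrix with a new right-hand side, so nothing new is factored.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])                  # nominal system matrix (made up)
b = np.array([1.0, 2.0])                    # nominal right-hand side
dA = np.array([[0.1, 0.0],
               [0.0, 0.2]])                 # assumed dA/dp
db = np.array([0.0, 0.1])                   # assumed db/dp

x = np.linalg.solve(A, b)                   # nominal solution
dx = np.linalg.solve(A, db - dA @ x)        # sensitivity dx/dp, matrix reused

# Sanity check against a finite-difference perturbation of the design:
eps = 1e-6
x_pert = np.linalg.solve(A + eps * dA, b + eps * db)
print(np.allclose(dx, (x_pert - x) / eps, atol=1e-4))  # True
```

    In a real solver the LU factors of A would be stored and back-substituted for each new right-hand side, which is exactly why the method adds little cost on top of the nominal analysis.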

  2. Data mining methods in the prediction of Dementia: A real-data comparison of the accuracy, sensitivity and specificity of linear discriminant analysis, logistic regression, neural networks, support vector machines, classification trees and random forests

    PubMed Central

    2011-01-01

    Background Dementia and cognitive impairment associated with aging are a major medical and social concern. Neuropsychological testing is a key element in the diagnostic procedures for Mild Cognitive Impairment (MCI), but presently has limited value in the prediction of progression to dementia. We advance the hypothesis that newer statistical classification methods derived from data mining and machine learning, such as Neural Networks, Support Vector Machines and Random Forests, can improve the accuracy, sensitivity and specificity of predictions obtained from neuropsychological testing. Seven nonparametric classifiers derived from data mining methods (Multilayer Perceptron Neural Networks, Radial Basis Function Neural Networks, Support Vector Machines, CART, CHAID and QUEST Classification Trees, and Random Forests) were compared to three traditional classifiers (Linear Discriminant Analysis, Quadratic Discriminant Analysis and Logistic Regression) in terms of overall classification accuracy, specificity, sensitivity, area under the ROC curve and Press' Q. Model predictors were 10 neuropsychological tests currently used in the diagnosis of dementia. Statistical distributions of classification parameters obtained from a 5-fold cross-validation were compared using Friedman's nonparametric test. Results Press' Q test showed that all classifiers performed better than chance alone (p < 0.05). Support Vector Machines showed the largest overall classification accuracy (median (Me) = 0.76) and an area under the ROC curve of Me = 0.90. However, this method showed high specificity (Me = 1.0) but low sensitivity (Me = 0.3). Random Forests ranked second in overall accuracy (Me = 0.73), with high area under the ROC curve (Me = 0.73), specificity (Me = 0.73) and sensitivity (Me = 0.64). Linear Discriminant Analysis also showed acceptable overall accuracy (Me = 0.66), with acceptable area under the ROC curve (Me = 0.72), specificity (Me = 0.66) and sensitivity (Me = 0.64). The remaining classifiers showed overall classification accuracy above a median value of 0.63, but for most, sensitivity was around or even lower than a median value of 0.5. Conclusions When sensitivity, specificity and overall classification accuracy are taken into account, Random Forests and Linear Discriminant Analysis rank first among all the classifiers tested for the prediction of dementia using several neuropsychological tests. These methods may be used to improve the accuracy, sensitivity and specificity of dementia predictions from neuropsychological testing. PMID:21849043
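    The accuracy-versus-sensitivity trade-off the abstract reports (a classifier with high accuracy and specificity but low sensitivity) comes straight from the confusion-matrix definitions, sketched here on hypothetical labels rather than the study's data:

```python
# Hypothetical labels, purely to make the reported metrics concrete; the study
# itself used 10 neuropsychological test scores and 5-fold cross-validation.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = progressed to dementia
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # a classifier's predictions

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

sensitivity = tp / (tp + fn)        # fraction of true cases that were flagged
specificity = tn / (tn + fp)        # fraction of non-cases correctly cleared
accuracy = (tp + tn) / len(y_true)  # overall agreement, which can stay high
                                    # even when sensitivity is poor
print(sensitivity, specificity, accuracy)  # 0.75 0.75 0.75
```

    With a mostly-healthy cohort, a classifier that rarely flags anyone can score well on accuracy and specificity while missing most true cases, which is why the conclusions weigh sensitivity separately.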

  3. Nursing students' understanding of factors influencing ethical sensitivity: A qualitative study.

    PubMed

    Borhani, Fariba; Abbaszadeh, Abbas; Mohsenpour, Mohaddeseh

    2013-07-01

    Ethical sensitivity is considered a component of nurses' professional competency. Its effects on the improvement of nurses' ethical performance and on the therapeutic relationship between nurses and patients have been reported. However, very few studies have evaluated ethical sensitivity, and no previous Iranian research has been conducted in this regard; the present study therefore aimed to explore nursing students' understanding of the factors influencing ethical sensitivity. This qualitative study was performed in Kerman, Iran, during 2009. It used semi-structured individual interviews with eight MSc nursing students to assess their viewpoints, and also included two focus groups. Purposive sampling was continued until data saturation. Data were analyzed using manifest content analysis. The students' understanding of the factors influencing ethical sensitivity was summarized in five main themes: individual and spiritual characteristics, education, mutual understanding, internal and external controls, and the experience of an immoral act. These findings create a framework for sensitizing nurses in their professional performance. In human resource management, the factors can be applied to reinforce positive influences and reduce negative ones; in education, they can inform the setting of educational objectives; and in research, they can guide the design of studies and related instruments based on this framework. It is noteworthy that the presented classification was shaped by the students themselves and thus reflects a kind of learning activity on their part.

  4. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
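    The metamodeling step described here amounts to an ordinary least-squares fit of the simulated outcome on standardized input parameters. A minimal sketch on a made-up two-parameter model (not the paper's cancer cure model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000                                   # PSA iterations, as in the paper

# Hypothetical two-parameter decision model; the real cancer cure model's
# structure is not reproduced here.
p_cure = rng.normal(0.60, 0.05, n)           # uncertain probability of cure
cost = rng.normal(20.0, 4.0, n)              # uncertain treatment cost
net_benefit = 100.0 * p_cure - cost + rng.normal(0.0, 1.0, n)

# Regress the outcome on standardized inputs: the intercept estimates the
# base-case outcome, and each slope measures how much of the outcome's spread
# that parameter's uncertainty explains (a regression "tornado diagram").
Z = np.column_stack([
    np.ones(n),
    (p_cure - p_cure.mean()) / p_cure.std(),
    (cost - cost.mean()) / cost.std(),
])
coef, *_ = np.linalg.lstsq(Z, net_benefit, rcond=None)
print(np.round(coef, 2))  # intercept near 40; slopes near +5 and -4
```

    Because all PSA draws feed the fit, the slopes summarize each parameter's influence more stably than varying one parameter at a time, which is the reliability advantage the abstract claims.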

  5. Virtual Patients and Sensitivity Analysis of the Guyton Model of Blood Pressure Regulation: Towards Individualized Models of Whole-Body Physiology

    PubMed Central

    Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall

    2012-01-01

    Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561

  6. MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Arnd

    2010-02-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  7. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Arnd

    2010-02-10

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  8. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Undisturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-19

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
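    The sampling and rank-transformation machinery referred to here can be sketched compactly; the response function below is a hypothetical stand-in, not the WIPP repository model:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 100, 3                       # sample size and number of uncertain inputs

# Latin hypercube sample on [0, 1]^k: each dimension gets exactly one point
# per stratum, with the strata shuffled independently across dimensions.
strata = rng.permuted(np.tile(np.arange(n), (k, 1)), axis=1).T
lhs = (strata + rng.random((n, k))) / n

# Hypothetical monotone response standing in for an output such as repository
# pressure (the actual WIPP models are not reproduced here).
y = 3.0 * lhs[:, 0] + np.exp(lhs[:, 1]) + 0.1 * lhs[:, 2]

# Rank transformation: replace values by ranks before correlating, the trick
# the abstract refers to for nonlinear-but-monotone input/output relations.
def ranks(v):
    return np.argsort(np.argsort(v))

rank_corr = [np.corrcoef(ranks(lhs[:, j]), ranks(y))[0, 1] for j in range(k)]
print(np.round(rank_corr, 2))  # inputs 0 and 1 matter; input 2 is minor
```

    Rank correlations (and rank-based stepwise regression) stay informative when the response is nonlinear in an input, as with the exponential term above, which is why rank transformation is paired with Latin hypercube sampling in this kind of assessment.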

  9. Cavity-Enhanced Absorption Spectroscopy and Photoacoustic Spectroscopy for Human Breath Analysis

    NASA Astrophysics Data System (ADS)

    Wojtas, J.; Tittel, F. K.; Stacewicz, T.; Bielecki, Z.; Lewicki, R.; Mikolajczyk, J.; Nowakowski, M.; Szabra, D.; Stefanski, P.; Tarka, J.

    2014-12-01

    This paper describes two different optoelectronic detection techniques: cavity-enhanced absorption spectroscopy and photoacoustic spectroscopy. These techniques are designed to perform sensitive analysis of trace gas species in exhaled human breath for medical applications. With such systems, the detection of pathogenic changes at the molecular level can be achieved. The presence of certain gases (biomarkers) at increased concentration levels indicates numerous human diseases, and diagnosis of a disease in its early stage would significantly increase the chances of effective therapy. Non-invasive, real-time measurement with high sensitivity and selectivity, and minimal discomfort for patients, are the main advantages of human breath analysis. At present, monitoring of volatile biomarkers in breath is commonly useful for diagnostic screening, treatment of specific conditions, therapy monitoring, and control of exogenous gases (such as bacterial and poisonous emissions), as well as for analysis of metabolic gases.

  10. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
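    The core numerical step, singular value decomposition of a sensitivity matrix with an error tolerance to count the locally active modes, can be shown on a toy matrix:

```python
import numpy as np

# Toy sensitivity matrix S[i, j] = d(species_i)/d(parameter_j): made-up
# numbers with near rank-2 structure, mimicking a network in which only two
# dynamical modes are locally active.
S = np.array([
    [1.0, 2.0, 3.0],
    [2.0, 4.0, 6.001],
    [0.5, 1.0, 2.0],
])
U, sing, Vt = np.linalg.svd(S)

# Error-controlled model dimension: count modes whose singular value exceeds
# a tolerance relative to the dominant one.
tol = 1e-2
n_active = int(np.sum(sing > tol * sing[0]))
print(n_active)  # 2 -> a locally reduced model of dimension 2 suffices
```

    Repeating this test along a state trajectory yields the piecewise minimal model dimension the abstract describes, with the tolerance playing the role of the error control.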

  11. Mid-L/D Lifting Body Entry Demise Analysis

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2017-01-01

    The mid-lift-to-drag ratio (mid-L/D) lifting body is a fully autonomous spacecraft under design at NASA for enabling a rapid return of scientific payloads from the International Space Station (ISS). For contingency planning and risk assessment for the Earth-return trajectory, an entry demise analysis was performed to examine three potential failure scenarios: (1) nominal entry interface conditions with loss of control, (2) controlled entry at maximum flight path angle, and (3) controlled entry at minimum flight path angle. The objectives of the analysis were to predict the spacecraft breakup sequence and timeline, determine debris survival, and calculate the debris dispersion footprint. Sensitivity analysis was also performed to determine the effect of the initial pitch rate on the spacecraft stability and breakup during the entry. This report describes the mid-L/D lifting body and presents the results of the entry demise and sensitivity analyses.

  12. Measurements of 55Fe activity in activated steel samples with GEMPix

    NASA Astrophysics Data System (ADS)

    Curioni, A.; Dinar, N.; La Torre, F. P.; Leidner, J.; Murtas, F.; Puddu, S.; Silari, M.

    2017-03-01

    In this paper we present a novel method, based on the recently developed GEMPix detector, to measure the 55Fe content in samples of metallic material activated during operation of CERN accelerators and experimental facilities. The GEMPix, a gas detector with highly pixelated read-out, has been obtained by coupling a triple Gas Electron Multiplier (GEM) to a quad Timepix ASIC. Sample preparation, measurements performed on 45 samples and data analysis are described. The calibration factor (counts per second per unit specific activity) has been obtained via measurements of the 55Fe activity determined by radiochemical analysis of the same samples. The detection limit and the sensitivity to the current Swiss exemption limit are calculated. Comparison with radiochemical analysis shows an inconsistency in sensitivity for only two samples, most likely due to underestimated uncertainties in the GEMPix analysis. An operational test phase of this technique is already planned at CERN.

  13. Direct magnetic field estimation based on echo planar raw data.

    PubMed

    Testud, Frederik; Splitthoff, Daniel Nicolas; Speck, Oliver; Hennig, Jürgen; Zaitsev, Maxim

    2010-07-01

    Gradient recalled echo echo planar imaging is widely used in functional magnetic resonance imaging. The fast data acquisition is, however, very sensitive to field inhomogeneities which manifest themselves as artifacts in the images. Typically used correction methods have the common deficit that the data for the correction are acquired only once at the beginning of the experiment, assuming the field inhomogeneity distribution B(0) does not change over the course of the experiment. In this paper, methods to extract the magnetic field distribution from the acquired k-space data or from the reconstructed phase image of a gradient echo planar sequence are compared and extended. A common derivation for the presented approaches provides a solid theoretical basis, enables a fair comparison and demonstrates the equivalence of the k-space and the image phase based approaches. The image phase analysis is extended here to calculate the local gradient in the readout direction and improvements are introduced to the echo shift analysis, referred to here as "k-space filtering analysis." The described methods are compared to experimentally acquired B(0) maps in phantoms and in vivo. The k-space filtering analysis presented in this work demonstrated to be the most sensitive method to detect field inhomogeneities.

  14. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  15. Recent Advances in Multidisciplinary Analysis and Optimization, part 2

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  16. Recent Advances in Multidisciplinary Analysis and Optimization, part 1

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  17. Time-Distance Analysis of Deep Solar Convection

    NASA Technical Reports Server (NTRS)

    Duvall, T. L., Jr.; Hanasoge, S. M.

    2011-01-01

    Recently it was shown by Hanasoge, Duvall, and DeRosa (2010) that the upper limit to convective flows for spherical harmonic degrees l

  18. Analysis of image formation in optical coherence elastography using a multiphysics approach

    PubMed Central

    Chin, Lixin; Curatolo, Andrea; Kennedy, Brendan F.; Doyle, Barry J.; Munro, Peter R. T.; McLaughlin, Robert A.; Sampson, David D.

    2014-01-01

    Image formation in optical coherence elastography (OCE) results from a combination of two processes: the mechanical deformation imparted to the sample and the detection of the resulting displacement using optical coherence tomography (OCT). We present a multiphysics model of these processes, validated by simulating strain elastograms acquired using phase-sensitive compression OCE, and demonstrating close correspondence with experimental results. Using the model, we present evidence that the approximation commonly used to infer sample displacement in phase-sensitive OCE is invalidated for smaller deformations than has been previously considered, significantly affecting the measurement precision, as quantified by the displacement sensitivity and the elastogram signal-to-noise ratio. We show how the precision of OCE is affected not only by OCT shot-noise, as is usually considered, but additionally by phase decorrelation due to the sample deformation. This multiphysics model provides a general framework that could be used to compare and contrast different OCE techniques. PMID:25401007

  19. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  20. Static and dynamic structural-sensitivity derivative calculations in the finite-element-based Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.; Adelman, H. M.

    1984-01-01

    The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.

  1. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

    multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner

  2. Coal resources in environmentally-sensitive lands under federal management

    USGS Publications Warehouse

    Watson, William D.; Tully, John K.; Moser, Edward N.; Dee, David P.; Bryant, Karen; Schall, Richard; Allan, Harold A.

    1995-01-01

    This report presents estimates of coal-bearing acreage and coal tonnage in environmentally-sensitive areas. The analysis was conducted to provide data for rulemaking by the Federal Office of Surface Mining (Watson and others, 1995). The rulemaking clarifies conditions under which coal can be mined in environmentally-sensitive areas. The area of the U.S. is about 2.3 billion acres. Contained within that acreage are certain environmentally-sensitive and unique areas (including parks, forests, and various other Federal land preserves). These areas are afforded special protection under Federal and State law. Altogether these protected areas occupy about 400 million acres. This report assesses coal acreage and coal tonnage in these protected Federal land preserves. Results are presented in the form of 8 map-displays prepared using GIS methods at a national scale. Tables and charts that accompany each map provide estimates of the total acreage in Federal land preserve units that overlap or fall within coal fields, coal-bearing acreage in each unit, and coal tonnage in each unit. Summary charts, compiled from the maps, indicate that about 8% of the Nation's coal reserves are located within environmentally-sensitive Federal land preserves.

  3. New sensitive high-performance liquid chromatography-tandem mass spectrometry method for the detection of horse and pork in halal beef.

    PubMed

    von Bargen, Christoph; Dojahn, Jörg; Waidelich, Dietmar; Humpf, Hans-Ulrich; Brockmeyer, Jens

    2013-12-11

    The accidental or fraudulent blending of meat from different species is a highly relevant aspect of food product quality control, especially for consumers with ethical concerns regarding certain species, such as horse or pork. In this study, we present a sensitive mass spectrometric approach for the detection of trace contaminations of horse meat and pork and demonstrate the specificity of the identified biomarker peptides against chicken, lamb, and beef. Biomarker peptides were identified by a shotgun proteomic approach using tryptic digests of protein extracts and were verified by the analysis of 21 different meat samples from the 5 species included in this study. For the most sensitive peptides, a multiple reaction monitoring (MRM) method was developed that allows for the detection of 0.55% horse or pork in a beef matrix. To enhance sensitivity, we applied MRM(3) experiments and were able to detect down to 0.13% pork contamination in beef. To the best of our knowledge, we present here the first rapid and sensitive mass spectrometric method for the detection of horse and pork by use of MRM and MRM(3).

  4. Lifestyle Behaviours Add to the Armoury of Treatment Options for Panic Disorder: An Evidence-Based Reasoning

    PubMed Central

    Lambert, Rod

    2015-01-01

    This article presents an evidence-based reasoning, focusing on evidence of an Occupational Therapy input to lifestyle behaviour influences on panic disorder that also provides potentially broader application across other mental health problems (MHP). The article begins from the premise that we are all different. It then follows through a sequence of questions, examining incrementally how MHPs are experienced and classified. It analyses the impact of individual sensitivity at different levels of analysis, from genetic and epigenetic individuality, through neurotransmitter and body system sensitivity. Examples are given demonstrating the evidence base behind the logical sequence of investigation. The paper considers the evidence of how everyday routine lifestyle behaviour impacts on occupational function at all levels, and how these behaviours link to individual sensitivity to influence the level of exposure required to elicit symptomatic responses. Occupational Therapists can help patients by adequately assessing individual sensitivity, and through promoting understanding and a sense of control over their own symptoms. It concludes that present clinical guidelines should be expanded to incorporate knowledge of individual sensitivities to environmental exposures and lifestyle behaviours at an early stage. PMID:26095868

  5. Sensitivity analysis of urban flood flows to hydraulic controls

    NASA Astrophysics Data System (ADS)

    Chen, Shangzhi; Garambois, Pierre-André; Finaud-Guyot, Pascal; Dellinger, Guilhem; Terfous, Abdelali; Ghenaim, Abdallah

    2017-04-01

    Flooding represents one of the most significant natural hazards on each continent and particularly in highly populated areas. Improving the accuracy and robustness of prediction systems has become a priority. However, in situ measurements of floods remain difficult while a better understanding of flood flow spatiotemporal dynamics along with dataset for model validations appear essential. The present contribution is based on a unique experimental device at the scale 1/200, able to produce urban flooding with flood flows corresponding to frequent to rare return periods. The influence of 1D Saint Venant and 2D Shallow water model input parameters on simulated flows is assessed using global sensitivity analysis (GSA). The tested parameters are: global and local boundary conditions (water heights and discharge), spatially uniform or distributed friction coefficient and or porosity respectively tested in various ranges centered around their nominal values - calibrated thanks to accurate experimental data and related uncertainties. For various experimental configurations a variance decomposition method (ANOVA) is used to calculate spatially distributed Sobol' sensitivity indices (Si's). The sensitivity of water depth to input parameters on two main streets of the experimental device is presented here. Results show that the closer from the downstream boundary condition on water height, the higher the Sobol' index as predicted by hydraulic theory for subcritical flow, while interestingly the sensitivity to friction decreases. The sensitivity indices of all lateral inflows, representing crossroads in 1D, are also quantified in this study along with their asymptotic trends along flow distance. The relationship between lateral discharge magnitude and resulting sensitivity index of water depth is investigated. 
    Concerning simulations with distributed friction coefficients, crossroad friction is shown to have a much stronger influence on the upstream water depth profile than street friction coefficients. This methodology could be applied to any urban flood configuration to better understand flow dynamics and distribution, and also to guide model calibration in the light of flow controls.
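The ANOVA-based variance decomposition behind such Sobol' indices can be sketched with a brute-force Monte Carlo estimator of Si = Var(E[Y|Xi]) / Var(Y). The model below is a hypothetical linear depth function standing in for the Saint-Venant solver, and the parameter ranges are invented; only the estimator structure reflects the method described.

```python
import random
import statistics

random.seed(1)

# Toy stand-in for a hydraulic model: water depth as a function of a
# friction coefficient and a downstream boundary depth (hypothetical form,
# not the paper's Saint-Venant solver).
def water_depth(friction, boundary_depth):
    return 0.3 * friction + 1.2 * boundary_depth

def sample():
    # Invented uniform ranges for (friction, boundary depth).
    return random.uniform(0.01, 0.05), random.uniform(0.5, 1.5)

def first_order_sobol(model, n_outer=200, n_inner=200):
    """Estimate S_i = Var(E[Y|X_i]) / Var(Y) by a double Monte Carlo loop."""
    total = [model(*sample()) for _ in range(20000)]
    var_y = statistics.pvariance(total)

    indices = []
    for i in range(2):
        cond_means = []
        for _ in range(n_outer):
            fixed = sample()[i]          # freeze input i
            ys = []
            for _ in range(n_inner):
                x = list(sample())       # resample the other input
                x[i] = fixed
                ys.append(model(*x))
            cond_means.append(statistics.fmean(ys))
        indices.append(statistics.pvariance(cond_means) / var_y)
    return indices

s_friction, s_boundary = first_order_sobol(water_depth)
```

With these invented ranges, nearly all output variance is driven by the boundary depth, mirroring the paper's finding that water depth near the downstream boundary is dominated by the boundary condition rather than friction.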

  6. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holland, Troy; Bhat, Sham; Marcy, Peter

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to thermal annealing for a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever model of CO2 reactivity loss due to annealing is presented.

  7. Modeling Effects of Annealing on Coal Char Reactivity to O2 and CO2, Based on Preparation Conditions

    DOE PAGES

    Holland, Troy; Bhat, Sham; Marcy, Peter; ...

    2017-08-25

    Oxy-fired coal combustion is a promising potential carbon capture technology. Predictive computational fluid dynamics (CFD) simulations are valuable tools in evaluating and deploying oxyfuel and other carbon capture technologies, either as retrofit technologies or for new construction. However, accurate predictive combustor simulations require physically realistic submodels with low computational requirements. A recent sensitivity analysis of a detailed char conversion model (Char Conversion Kinetics (CCK)) found thermal annealing to be an extremely sensitive submodel. In the present work, further analysis of the previous annealing model revealed significant disagreement with numerous datasets from experiments performed after that annealing model was developed. The annealing model was accordingly extended to reflect the experimentally observed reactivity loss due to thermal annealing for a variety of coals under diverse char preparation conditions. The model extension was informed by a Bayesian calibration analysis. In addition, since oxyfuel conditions include extraordinarily high levels of CO2, the development of a first-ever model of CO2 reactivity loss due to annealing is presented.

  8. NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    2009-02-28

    The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), along with updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code, and a few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of the PORFLOW models supporting the SDF PA and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.

  9. Rapid Analysis of Trace Drugs and Metabolites Using a Thermal Desorption DART-MS Configuration.

    PubMed

    Sisco, Edward; Forbes, Thomas P; Staymates, Matthew E; Gillen, Greg

    2016-01-01

    The need to analyze trace narcotic samples rapidly for screening or confirmatory purposes is of increasing interest to the forensic, homeland security, and criminal justice sectors. This work presents a novel method for the detection and quantification of trace drugs and metabolites from a swipe material using a thermal desorption direct analysis in real time mass spectrometry (TD-DART-MS) configuration. A variation on traditional DART, this configuration desorbs the sample into a confined tube, completely independent of the DART source, allowing more efficient and thermally precise analysis of material present on a swipe. Over thirty trace samples of narcotics, metabolites, and cutting agents deposited onto swipes were rapidly differentiated using this methodology. The non-optimized method yielded sensitivities ranging from single nanograms to hundreds of picograms. Direct comparison to traditional DART with a subset of the samples highlighted an improvement in sensitivity by a factor of twenty to thirty and an improvement in sample-to-sample reproducibility from approximately 45% RSD to less than 15% RSD. Rapid, extraction-free quantification was also possible.
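The reproducibility figures quoted are relative standard deviations; a minimal sketch of that comparison, with invented replicate peak intensities standing in for the measured signals:

```python
import statistics

def percent_rsd(values):
    """Relative standard deviation: sample stdev as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.fmean(values)

# Hypothetical replicate peak intensities (arbitrary units), illustrating the
# kind of spread behind ~45% RSD vs <15% RSD figures.
traditional_dart = [1.0, 1.9, 0.6, 1.4, 0.5]
thermal_desorption = [1.00, 1.12, 0.95, 1.05, 0.90]

rsd_traditional = percent_rsd(traditional_dart)
rsd_td = percent_rsd(thermal_desorption)
```

A lower %RSD across replicate swipes is what enables the extraction-free quantification the abstract describes.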

  10. Prostate Cancer Information Available in Health-Care Provider Offices: An Analysis of Content, Readability, and Cultural Sensitivity.

    PubMed

    Choi, Seul Ki; Seel, Jessica S; Yelton, Brooks; Steck, Susan E; McCormick, Douglas P; Payne, Johnny; Minter, Anthony; Deutchki, Elizabeth K; Hébert, James R; Friedman, Daniela B

    2018-07-01

    Prostate cancer (PrCA) is the most common cancer affecting men in the United States, and African American men have the highest incidence. Little is known about the PrCA-related educational materials being provided to patients in health-care settings. The content, readability, and cultural sensitivity of materials available in providers' practices in South Carolina were examined. A total of 44 educational materials about PrCA and associated sexual dysfunction were collected from 16 general and specialty practices. The content of the materials was coded, and cultural sensitivity was assessed using the Cultural Sensitivity Assessment Tool. Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook were used to assess readability. Communication with health-care providers (52.3%), side effects of PrCA treatment (40.9%), sexual dysfunction and its treatment (38.6%), and treatment options (34.1%) were the most frequently presented topics. All materials had acceptable overall cultural sensitivity scores; however, 2.3% and 15.9% of materials demonstrated unacceptable cultural sensitivity regarding format and visual messages, respectively. Readability of the materials varied. More than half of the materials were written above a high-school reading level. PrCA-related materials available in health-care practices may not meet patients' needs regarding content, cultural sensitivity, and readability. A wide range of educational materials that address various aspects of PrCA, including treatment options and side effects, should be presented in plain language and be culturally sensitive.
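The three readability measures named above have standard published formulas; a short sketch using hypothetical word, sentence, syllable, and polysyllable counts for a pamphlet excerpt:

```python
import math

# Standard readability formulas (the counts below are hypothetical).
def flesch_reading_ease(words, sentences, syllables):
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words, sentences, syllables):
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_grade(polysyllables, sentences):
    # SMOG is normalized to a 30-sentence sample.
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

words, sentences, syllables, polysyllables = 150, 10, 240, 18
fre = flesch_reading_ease(words, sentences, syllables)
fkgl = flesch_kincaid_grade(words, sentences, syllables)
smog = smog_grade(polysyllables, sentences)
```

A Flesch-Kincaid grade above 12 (or a SMOG grade above 12) would flag a pamphlet as written above a high-school reading level, the criterion the study applies.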

  11. A sensitivity analysis of central flat-plate photovoltaic systems and implications for national photovoltaics program planning

    NASA Technical Reports Server (NTRS)

    Crosetti, M. R.

    1985-01-01

    The sensitivity of the National Photovoltaic Research Program goals to changes in individual photovoltaic system parameters is explored. Using the relationship between lifetime cost and system performance parameters, tests were made to see how overall photovoltaic system energy costs are affected by changes in the goals set for module cost and efficiency, system component costs and efficiencies, operation and maintenance costs, and indirect costs. The results are presented in tables and figures for easy reference.

  12. Uranium Measurement Improvements at the Savannah River Technology Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shick, C. Jr.

    Uranium isotope ratio and isotope dilution methods by mass spectrometry are used to achieve sensitivity, precision, and accuracy for various applications. This report presents recent progress made at SRTC in the analysis of minor isotopes of uranium, comparing routine measurements of NBL-certified uranium (U005a) on the SRTC Three Stage Mass Spectrometer (3SMS) and the SRTC Single Stage Mass Spectrometer (SSMS). As expected, the three-stage mass spectrometer yielded superior sensitivity, precision, and accuracy for this application.

  13. Genomic Methods for Clinical and Translational Pain Research

    PubMed Central

    Wang, Dan; Kim, Hyungsuk; Wang, Xiao-Min; Dionne, Raymond

    2012-01-01

    Pain is a complex sensory experience for which the molecular mechanisms are yet to be fully elucidated. Individual differences in pain sensitivity are mediated by a complex network of multiple gene polymorphisms, physiological and psychological processes, and environmental factors. Here, we present the methods for applying unbiased molecular-genetic approaches, genome-wide association study (GWAS), and global gene expression analysis, to help better understand the molecular basis of pain sensitivity in humans and variable responses to analgesic drugs. PMID:22351080

  14. Allergenicity and cross-reactivity of booklice (Liposcelis bostrichophila): a common household insect pest in Japan.

    PubMed

    Fukutomi, Yuma; Kawakami, Yuji; Taniguchi, Masami; Saito, Akemi; Fukuda, Azumi; Yasueda, Hiroshi; Nakazawa, Takuya; Hasegawa, Maki; Nakamura, Hiroyuki; Akiyama, Kazuo

    2012-01-01

    Booklice (Liposcelis bostrichophila) are a common household insect pest distributed worldwide. In Japan in particular, they infest 'tatami' mats and are the most frequently detected insect in dust samples, present in about 90% of them. Although it has been hypothesized that they are an important indoor allergen, studies of their allergenicity have been limited. To clarify the allergenicity of booklice and the cross-reactivity of this insect allergen with allergens of other insects, patients sensitized to booklice were identified from 185 Japanese adults with allergic asthma using skin tests and IgE-ELISA. IgE-inhibition analysis, immunoblotting and immunoblotting-inhibition analysis were performed using sera from these patients. Allergenic proteins contributing to specific sensitization to booklice were identified by two-dimensional electrophoresis and two-dimensional immunoblotting. Booklouse-specific IgE antibody was detected in sera from 41 patients (22% of those studied). IgE-inhibition analysis revealed that IgE reactivity to the booklouse allergen in sera from one third of booklouse-sensitized patients was not inhibited by preincubation with extracts from any other environmental insect in this study. Immunoblotting identified a 26-kD protein from booklouse extract as the allergenic protein contributing to specific sensitization to booklice. The amino acid sequence of peptide fragments of this protein showed no homology to previously described allergenic proteins, indicating that it is a new allergen. Sensitization to booklice was relatively common, and specific sensitization to this insect unrelated to insect panallergy was indicated in this population. Copyright © 2011 S. Karger AG, Basel.

  15. Highly sensitive image-derived indices of water-stressed plants using hyperspectral imaging in SWIR and histogram analysis

    PubMed Central

    Kim, David M.; Zhang, Hairong; Zhou, Haiying; Du, Tommy; Wu, Qian; Mockler, Todd C.; Berezin, Mikhail Y.

    2015-01-01

    The optical signature of leaves is an important monitoring and predictive parameter for a variety of biotic and abiotic stresses, including drought. Such signatures derived from spectroscopic measurements provide vegetation indices – a quantitative method for assessing plant health. However, the commonly used metrics suffer from low sensitivity. Relatively small changes in water content in moderately stressed plants demand high-contrast imaging to distinguish affected plants. We present a new approach in deriving sensitive indices using hyperspectral imaging in a short-wave infrared range from 800 nm to 1600 nm. Our method, based on high spectral resolution (1.56 nm) instrumentation and image processing algorithms (quantitative histogram analysis), enables us to distinguish a moderate water stress equivalent of 20% relative water content (RWC). The identified image-derived indices 15XX nm/14XX nm (i.e. 1529 nm/1416 nm) were superior to common vegetation indices, such as WBI, MSI, and NDWI, with significantly better sensitivity, enabling early diagnostics of plant health. PMID:26531782
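A band-ratio index of the 1529 nm/1416 nm type is simply a ratio of reflectance at two wavelengths. The sketch below uses invented reflectance values, chosen so the hydrated leaf absorbs more strongly near the 1416 nm water band than the stressed leaf; the band assignments follow the abstract, but the numbers are illustrative only.

```python
# Hypothetical leaf reflectance at two SWIR wavelengths (nm -> reflectance).
# Stronger water absorption near 1416 nm lowers reflectance in hydrated tissue.
hydrated = {1416: 0.18, 1529: 0.30}
stressed = {1416: 0.30, 1529: 0.36}

def band_ratio(spectrum, numerator_nm=1529, denominator_nm=1416):
    """Image-derived index of the 15XX nm / 14XX nm form."""
    return spectrum[numerator_nm] / spectrum[denominator_nm]

index_hydrated = band_ratio(hydrated)
index_stressed = band_ratio(stressed)
```

In an imaging setting this ratio is computed per pixel, and the paper's histogram analysis then compares the distributions of index values between plants.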

  16. High Sensitivity Combined with Extended Structural Coverage of Labile Compounds via Nanoelectrospray Ionization at Subambient Pressures

    DOE PAGES

    Cox, Jonathan T.; Kronewitter, Scott R.; Shukla, Anil K.; ...

    2014-09-15

    Subambient pressure ionization with nanoelectrospray (SPIN) has proven effective in producing ions with high efficiency and transmitting them to low pressures for high-sensitivity mass spectrometry (MS) analysis. Here we present evidence that the SPIN source not only improves MS sensitivity but also allows gentler ionization conditions. The gentleness of a conventional heated capillary electrospray ionization (ESI) source and the SPIN source was compared by liquid chromatography mass spectrometry (LC-MS) analysis of colominic acid. Colominic acid is a mixture of sialic acid polymers of different lengths containing labile glycosidic linkages between monomer units, necessitating a gentle ion source. By coupling the SPIN source with high-resolution mass spectrometry and using advanced data processing tools, we demonstrate much-extended coverage of sialic acid polymer chains compared with the conventional ESI source. Additionally, we show that SPIN-LC-MS is effective in elucidating polymer features with high efficiency and high sensitivity previously unattainable by conventional ESI-LC-MS methods.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams, facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of the SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  18. Cost-effectiveness analysis of EGFR mutation testing in patients with non-small cell lung cancer (NSCLC) with gefitinib or carboplatin-paclitaxel.

    PubMed

    Arrieta, Oscar; Anaya, Pablo; Morales-Oyarvide, Vicente; Ramírez-Tirado, Laura Alejandra; Polanco, Ana C

    2016-09-01

    Assess the cost-effectiveness of an EGFR-mutation testing strategy for advanced NSCLC in first-line therapy with either gefitinib or carboplatin-paclitaxel in Mexican institutions. Cost-effectiveness analysis using a discrete event simulation (DES) model to simulate two therapeutic strategies in patients with advanced NSCLC. Strategy one included patients tested for EGFR mutation, with therapy given accordingly. Strategy two included chemotherapy for all patients without testing. All results are presented in 2014 US dollars. The analysis was made with data on the Mexican frequency of EGFR mutation. A univariate sensitivity analysis was conducted on EGFR prevalence. Progression-free survival (PFS) transition probabilities were estimated from IPASS data, simulated with a Weibull distribution, and run with parallel trials for a probabilistic sensitivity analysis. PFS of patients in the testing strategy was 6.76 months (95 % CI 6.10-7.44) vs 5.85 months (95 % CI 5.43-6.29) in the non-testing group. The one-way sensitivity analysis showed that PFS has a direct relationship with EGFR-mutation prevalence, while the ICER and testing cost have an inverse relationship with EGFR-mutation prevalence. The probabilistic sensitivity analysis showed that all iterations had incremental costs and incremental PFS for strategy one in comparison with strategy two. There is a direct relationship between the ICER and the cost of EGFR testing, and an inverse relationship with the prevalence of EGFR mutation. When prevalence is >10 %, the ICER remains constant. This study could impact Mexican and Latin American health policies regarding mutation detection testing and treatment for advanced NSCLC.
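The two core quantities here, a Weibull-sampled PFS and the incremental cost-effectiveness ratio (ICER), can be sketched as follows. The Weibull parameters and per-strategy costs are hypothetical illustrations, not the study's figures; only the structure (simulate PFS per strategy, then take the ratio of incremental cost to incremental effect) follows the abstract.

```python
import random

random.seed(7)

def mean_pfs(shape, scale, n=5000):
    """Mean progression-free survival (months) under a Weibull model.
    Note: random.weibullvariate takes (scale, shape) in that order."""
    return sum(random.weibullvariate(scale, shape) for _ in range(n)) / n

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical Weibull parameters and per-patient costs (USD).
pfs_testing = mean_pfs(1.5, 7.5)   # EGFR-testing strategy
pfs_chemo = mean_pfs(1.5, 6.5)     # chemotherapy-for-all strategy
ratio = icer(12000.0, 9000.0, pfs_testing, pfs_chemo)  # USD per PFS-month gained
```

A probabilistic sensitivity analysis repeats this calculation many times with parameters drawn from their uncertainty distributions and inspects the spread of the resulting ICERs.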

  19. Working Memory Enhances Visual Perception: Evidence from Signal Detection Analysis

    ERIC Educational Resources Information Center

    Soto, David; Wriglesworth, Alice; Bahrami-Balani, Alex; Humphreys, Glyn W.

    2010-01-01

    We show that perceptual sensitivity to visual stimuli can be modulated by matches between the contents of working memory (WM) and stimuli in the visual field. Observers were presented with an object cue (to hold in WM or to merely attend) and subsequently had to identify a brief target presented within a colored shape. The cue could be…

  20. A fractal analysis of protein to DNA binding kinetics using biosensors.

    PubMed

    Sadana, Ajit

    2003-08-01

    A fractal analysis of a confirmative nature only is presented for the binding of estrogen receptor (ER) in solution to its corresponding DNA (estrogen response element, ERE) immobilized on a sensor chip surface [J. Biol. Chem. 272 (1997) 11384], and for the cooperative binding of human 1,25-dihydroxyvitamin D(3) receptor (VDR) to DNA with the 9-cis-retinoic acid receptor (RXR) [Biochemistry 35 (1996) 3309]. Ligands were also used to modulate the first reaction. Data taken from the literature may be modeled by using a single- or a dual-fractal analysis. Relationships are presented for the binding rate coefficient as a function of either the analyte concentration in solution or the fractal dimension that exists on the biosensor surface. The binding rate expressions developed exhibit a wide range of dependence on the degree of heterogeneity that exists on the surface, ranging from sensitive (order of dependence equal to 1.202) to very sensitive (order of dependence equal to 12.239). In general, the binding rate coefficient increases as the degree of heterogeneity or the fractal dimension of the surface increases. The predictive relationships presented provide further physical insights into the reactions occurring on the biosensor surface. Even though these reactions are occurring on the biosensor surface, the relationships presented should assist in understanding and in possibly manipulating the reactions occurring on cellular surfaces.
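A single-fractal binding analysis of this kind models the amount bound as k·t^p with p = (3 − Df)/2, where Df is the fractal dimension of the sensor surface (the dual-fractal case splits the time axis into two such regimes). The sketch below uses invented constants and recovers the exponent by log-log least squares.

```python
import math

# Single-fractal binding model: bound(t) = k * t^p with p = (3 - Df) / 2.
# The rate constant k and fractal dimension Df below are illustrative.
def simulate_binding(k, df, times):
    p = (3.0 - df) / 2.0
    return [k * t ** p for t in times]

def fit_power_law(times, values):
    """Least-squares slope/intercept in log-log space -> (k, p)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), slope

times = [1, 2, 5, 10, 20, 50]           # time points (arbitrary units)
data = simulate_binding(0.8, 2.2, times)  # Df = 2.2 -> p = 0.4
k_est, p_est = fit_power_law(times, data)
```

A higher fitted Df (closer to 3) means a more heterogeneous surface and a flatter binding curve, which is the relationship between the rate coefficient and the degree of heterogeneity that the abstract discusses.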

  1. Skin sensitizers differentially regulate signaling pathways in MUTZ-3 cells in relation to their individual potency

    PubMed Central

    2014-01-01

    Background Due to recent European legislation banning animal tests for safety assessment within the cosmetic industry, the development of in vitro alternatives for assessing skin sensitization is highly prioritized. To date, proposed in vitro assays are mainly based on single biomarkers, which so far have not been able to classify and stratify chemicals into subgroups related to risk or potency. Methods Recently, we presented the Genomic Allergen Rapid Detection (GARD) assay for assessment of chemical sensitizers. In this paper, we show how the genome-wide readout of GARD can be expanded and used to identify differentially regulated pathways relating to individual chemical sensitizers. We investigated the mechanisms of action of a range of skin sensitizers through pathway identification, pathway classification and transcription factor analysis, and related these to the reactive mechanisms and potency of the sensitizing agents. Results By transcriptional profiling of chemically stimulated MUTZ-3 cells, 33 canonical pathways intimately involved in sensitization to chemical substances were identified. The results showed that metabolic processes, cell cycling and oxidative stress responses are the key events activated during skin sensitization, and that these functions are engaged differently depending on the reactivity mechanisms of the sensitizing agent. Furthermore, the results indicate that the chemical reactivity groups gradually engage more pathways, and more molecules in each pathway, with increasing sensitizing potency of the chemical used for stimulation. A switch in gene regulation from up- to down-regulation with increasing potency was also seen, both in genes involved in metabolic functions and in cell cycling. These pathway patterns were clearly reflected in the regulatory elements identified to drive these processes, of which 33 have been proposed for further analysis.
Conclusions This study demonstrates that functional analysis of biomarkers identified from our genomics study of human MUTZ-3 cells can be used to assess sensitizing potency of chemicals in vitro, by the identification of key cellular events, such as metabolic and cell cycling pathways. PMID:24517095

  2. Algorithm for automatic analysis of electro-oculographic data

    PubMed Central

    2013-01-01

    Background Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. Methods The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. Results The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. Conclusion The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement. PMID:24160372

  3. Harnessing Connectivity in a Large-Scale Small-Molecule Sensitivity Dataset.

    PubMed

    Seashore-Ludlow, Brinton; Rees, Matthew G; Cheah, Jaime H; Cokol, Murat; Price, Edmund V; Coletti, Matthew E; Jones, Victor; Bodycombe, Nicole E; Soule, Christian K; Gould, Joshua; Alexander, Benjamin; Li, Ava; Montgomery, Philip; Wawer, Mathias J; Kuru, Nurdan; Kotz, Joanne D; Hon, C Suk-Yee; Munoz, Benito; Liefeld, Ted; Dančík, Vlado; Bittker, Joshua A; Palmer, Michelle; Bradner, James E; Shamji, Alykhan F; Clemons, Paul A; Schreiber, Stuart L

    2015-11-01

    Identifying genetic alterations that prime a cancer cell to respond to a particular therapeutic agent can facilitate the development of precision cancer medicines. Cancer cell-line (CCL) profiling of small-molecule sensitivity has emerged as an unbiased method to assess the relationships between genetic or cellular features of CCLs and small-molecule response. Here, we developed annotated cluster multidimensional enrichment analysis to explore the associations between groups of small molecules and groups of CCLs in a new, quantitative sensitivity dataset. This analysis reveals insights into small-molecule mechanisms of action, and genomic features that associate with CCL response to small-molecule treatment. We are able to recapitulate known relationships between FDA-approved therapies and cancer dependencies and to uncover new relationships, including for KRAS-mutant cancers and neuroblastoma. To enable the cancer community to explore these data, and to generate novel hypotheses, we created an updated version of the Cancer Therapeutic Response Portal (CTRP v2). We present the largest CCL sensitivity dataset yet available, and an analysis method integrating information from multiple CCLs and multiple small molecules to identify CCL response predictors robustly. We updated the CTRP to enable the cancer research community to leverage these data and analyses. ©2015 American Association for Cancer Research.

  4. Algorithm for automatic analysis of electro-oculographic data.

    PubMed

    Pettersson, Kati; Jagadeesan, Sharman; Lukander, Kristian; Henelius, Andreas; Haeggström, Edward; Müller, Kiti

    2013-10-25

    Large amounts of electro-oculographic (EOG) data, recorded during electroencephalographic (EEG) measurements, go underutilized. We present an automatic, auto-calibrating algorithm that allows efficient analysis of such data sets. The auto-calibration is based on automatic threshold value estimation. Amplitude threshold values for saccades and blinks are determined based on features in the recorded signal. The performance of the developed algorithm was tested by analyzing 4854 saccades and 213 blinks recorded in two different conditions: a task where the eye movements were controlled (saccade task) and a task with free viewing (multitask). The results were compared with results from a video-oculography (VOG) device and manually scored blinks. The algorithm achieved 93% detection sensitivity for blinks with a 4% false positive rate. The detection sensitivity for horizontal saccades was between 98% and 100%, and for oblique saccades between 95% and 100%. The classification sensitivity for horizontal and large oblique saccades (10 deg) was larger than 89%, and for vertical saccades larger than 82%. The duration and peak velocities of the detected horizontal saccades were similar to those in the literature. In the multitask measurement the detection sensitivity for saccades was 97% with a 6% false positive rate. The developed algorithm enables reliable analysis of EOG data recorded both during EEG and as a separate measurement.
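The amplitude-threshold detection and the sensitivity / false-positive-rate scoring used to evaluate it can be sketched as below; the signal, threshold, and event labels are toy values, not the paper's auto-calibrated estimator.

```python
# Toy amplitude-threshold event detector and its scoring. In the real
# algorithm the threshold is auto-calibrated from signal features; here it
# is fixed by hand for illustration.
def detect_events(signal, threshold):
    """Return sample indices where the absolute amplitude reaches threshold."""
    return [i for i, v in enumerate(signal) if abs(v) >= threshold]

def detection_scores(detected, true_events, n_candidates):
    tp = len(set(detected) & set(true_events))   # true positives
    fn = len(set(true_events) - set(detected))   # missed events
    fp = len(set(detected) - set(true_events))   # spurious detections
    sensitivity = tp / (tp + fn)
    false_positive_rate = fp / max(n_candidates - len(true_events), 1)
    return sensitivity, false_positive_rate

# Invented EOG-like samples (mV): three large deflections are "blinks".
signal = [0.1, 0.0, 2.5, 0.2, -3.0, 0.1, 0.4, 2.8, 0.0, -0.2]
true_blinks = [2, 4, 7]
found = detect_events(signal, 2.0)
sens, fpr = detection_scores(found, true_blinks, len(signal))
```

The study's 93% blink sensitivity with a 4% false positive rate corresponds to exactly these two ratios, computed against VOG and manually scored ground truth.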

  5. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC

    PubMed Central

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.

    2017-01-01

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model of the WSP, using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter to which the model performance of the ON-N and NH3-N simulations was sensitive, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive parameters for the ON-N and NO3-N simulations, as measured by global sensitivity analysis. PMID:28704958
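The Nash–Sutcliffe coefficient used to judge these simulations has a standard closed form, NSE = 1 − Σ(obs − sim)² / Σ(obs − mean(obs))²; a sketch with hypothetical observed and simulated NH3-N concentrations:

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1 is a perfect fit; 0 means no better than predicting the mean."""
    mean_obs = sum(observed) / len(observed)
    numerator = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    denominator = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - numerator / denominator

# Hypothetical NH3-N concentrations (mg/L) over a few sampling dates.
obs = [4.0, 6.5, 5.2, 7.8, 6.1]
sim = [4.4, 6.0, 5.6, 7.2, 6.5]
nse = nash_sutcliffe(obs, sim)
```

Values in the 0.5–0.7 range, as reported in the abstract, indicate a model that captures most of the observed variance.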

  6. Sulcal depth-based cortical shape analysis in normal healthy control and schizophrenia groups

    NASA Astrophysics Data System (ADS)

    Lyu, Ilwoo; Kang, Hakmook; Woodward, Neil D.; Landman, Bennett A.

    2018-03-01

    Sulcal depth is an important marker of brain anatomy in neuroscience and neurological function. Previously, sulcal depth has been explored at the region-of-interest (ROI) level to increase statistical sensitivity to group differences. In this paper, we present a fully automated method that enables inferences of ROI properties from a sulcal-region-focused perspective, consisting of two main components: 1) sulcal depth computation and 2) sulcal curve-based refined ROIs. In conventional statistical analysis, average sulcal depth measurements are employed in several ROIs of the cortical surface. However, taking the average sulcal depth over the full ROI blurs the overall sulcal depth measurements, which may reduce sensitivity to detect sulcal depth changes in neurological and psychiatric disorders. To overcome this blurring effect, we focus on the sulcal fundic regions in each ROI by filtering out gyral regions. Consequently, the proposed method is more sensitive to group differences than a traditional ROI approach. In the experiment, we performed a cortical morphological analysis of sulcal depth reduction in schizophrenia, with a comparison to a normal healthy control group. We show that the proposed method is more sensitive to abnormalities of sulcal depth in schizophrenia; sulcal depth is significantly smaller in most cortical lobes in schizophrenia compared to healthy controls (p < 0.05).
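The blurring effect of full-ROI averaging can be illustrated numerically: mixing shallow gyral points into the mean dilutes the group difference relative to averaging over sulcal fundi alone. All depth values below are invented for illustration.

```python
from statistics import fmean

# Hypothetical depths (mm) at gyral crowns (shallow) and sulcal fundi (deep).
control_gyral, control_fundic = [2.0, 2.2, 1.9], [12.0, 11.5, 12.4]
patient_gyral, patient_fundic = [2.0, 2.1, 1.9], [10.2, 9.8, 10.6]

# Averaging over the full ROI mixes gyral and fundic points...
full_roi_diff = (fmean(control_gyral + control_fundic)
                 - fmean(patient_gyral + patient_fundic))
# ...while restricting to sulcal fundi preserves the group difference.
fundic_diff = fmean(control_fundic) - fmean(patient_fundic)
```

The fundic-only contrast is larger, which is the mechanism by which the refined ROIs increase sensitivity to the schizophrenia-related depth reduction.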

  7. Analysis and amelioration about the cross-sensitivity of a high resolution MOEMS accelerometer based on diffraction grating

    NASA Astrophysics Data System (ADS)

    Lu, Qianbo; Bai, Jian; Wang, Kaiwei; Lou, Shuqi; Jiao, Xufen; Han, Dandan

    2016-10-01

    Cross-sensitivity is a crucial parameter since it detrimentally affects the performance of an accelerometer, especially a high-resolution accelerometer. In this paper, a suite of analytical and finite-element-method (FEM) models is presented for characterizing the mechanism and features of the cross-sensitivity of a single-axis MOEMS accelerometer composed of a diffraction grating and a micromachined mechanical sensing chip, which have not yet been systematically investigated. The mechanism and phenomena of the cross-sensitivity of this type of grating-based MOEMS accelerometer differ substantially from those of traditional accelerometers owing to its distinct sensing principle. By analyzing the models, ameliorations and a modified design are put forward to suppress the cross-sensitivity. The modified design, achieved by double-sided etching of a specific double-substrate-layer silicon-on-insulator (SOI) wafer, is validated to have a far smaller cross-sensitivity than the design previously reported in the literature. Moreover, this design suppresses the cross-sensitivity dramatically without compromising the acceleration sensitivity and resolution.
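Cross-sensitivity is commonly reported as the off-axis response relative to the sensing-axis response, in percent; a minimal sketch with invented response values for an original and a modified design:

```python
# Cross-sensitivity as off-axis output over sensing-axis output, in percent.
# Response values (output units per g of applied acceleration) are invented.
def cross_sensitivity_pct(on_axis_response, off_axis_response):
    return 100.0 * off_axis_response / on_axis_response

original_design = cross_sensitivity_pct(520.0, 26.0)  # hypothetical: 5% coupling
modified_design = cross_sensitivity_pct(520.0, 2.6)   # hypothetical: 0.5% coupling
```

Keeping the on-axis response (denominator) fixed while shrinking the off-axis term reflects the paper's claim that cross-sensitivity is suppressed without compromising acceleration sensitivity.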

  8. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

    Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most previous diagnostic algorithms have been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total of 645 confirmed lesions from 573 patients encompassing skin cancers, precancers and benign skin lesions were included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911, and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.
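
    The windowing step can be sketched generically: adjacent wavenumber channels whose intensities are highly correlated across spectra are merged into one variably sized window. This is a simplified reading of the approach, not the authors' implementation; the correlation threshold and toy spectra below are arbitrary.

```python
# Simplified sketch of correlation-based wavenumber windowing (illustrative only).
def correlation(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def wavenumber_windows(spectra, threshold=0.9):
    """spectra: list of spectra, each a list of intensities per channel.
    Returns (start, end) index pairs of highly correlated channel windows."""
    channels = list(zip(*spectra))          # one intensity series per channel
    windows, start = [], 0
    for i in range(1, len(channels)):
        # Start a new window when adjacent channels decorrelate.
        if correlation(channels[i - 1], channels[i]) < threshold:
            windows.append((start, i - 1))
            start = i
    windows.append((start, len(channels) - 1))
    return windows

# Three toy spectra with 4 channels: channels 0-1 move together, 2-3 differ.
spectra = [[1.0, 1.1, 5.0, 0.2],
           [2.0, 2.1, 1.0, 3.5],
           [3.0, 3.2, 4.0, 1.0]]
print(wavenumber_windows(spectra))  # → [(0, 1), (2, 2), (3, 3)]
```

    A feature-selection step (stepwise regression or LASSO) would then rank these windows before classification; that stage is omitted here.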

  9. Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.

    PubMed

    Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong

    2018-04-01

    Numerous studies have assessed the diagnostic value of the serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of the s-p53 antibody in CRC. Databases including PubMed and Embase were systematically searched for studies regarding s-p53 antibody diagnosis in CRC published on or prior to 31 July 2016. The quality of all included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. Pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were analyzed and compared with overall accuracy measures using diagnostic odds ratios (DORs) and area under the curve (AUC) analysis. Publication bias and heterogeneity were also assessed. A total of 11 trials that enrolled a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and pooled specificity was 0.93 (95% CI, 0.92-0.94). Results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), an NLR of 0.78 (95% CI, 0.71-0.85) and a DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. Meta-analysis data indicated that the s-p53 antibody possesses potential diagnostic value for CRC. However, discrimination power was somewhat limited due to the low sensitivity.
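
    The pooled accuracy measures quoted above are all derived from the same underlying 2x2 counts. As an illustration only (the counts below are hypothetical, chosen to reproduce a sensitivity of 0.19 and specificity of 0.93, and are not the study's data), the relationships between sensitivity, specificity, PLR, NLR and DOR can be sketched as:

```python
# Diagnostic accuracy measures from a hypothetical 2x2 table (illustrative).
def diagnostic_metrics(tp, fn, fp, tn):
    sens = tp / (tp + fn)            # sensitivity
    spec = tn / (tn + fp)            # specificity
    plr = sens / (1 - spec)          # positive likelihood ratio
    nlr = (1 - sens) / spec          # negative likelihood ratio
    dor = plr / nlr                  # diagnostic odds ratio
    return sens, spec, plr, nlr, dor

# 100 diseased and 100 non-diseased subjects (made-up numbers).
sens, spec, plr, nlr, dor = diagnostic_metrics(tp=19, fn=81, fp=7, tn=93)
print(round(sens, 2), round(spec, 2))  # 0.19 0.93
```

    Note that a pooled DOR from a meta-analysis (here 6.70) need not equal PLR/NLR of the pooled estimates, because each measure is pooled separately across studies.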

  10. Sensitivity analysis of Repast computational ecology models with R/Repast.

    PubMed

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics, as well as the nonlinearities, arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights into the local mechanisms generating an observed global dynamic. Of course, no conclusions about model results can be taken seriously if they are based on a single model execution and are not analyzed carefully. Therefore, a sound methodology should always be used to underpin the interpretation of model results. Sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty on simulation output, and it should be incorporated into every work based on an in silico experimental setup. In this article, we present R/Repast, a GNU R package for running and analyzing Repast Simphony models, accompanied by two worked examples of how to perform global sensitivity analysis and how to interpret the results.

  11. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    NASA Astrophysics Data System (ADS)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observing system simulation experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters (for example, observation error, model covariances, ensemble size, and the perturbation distribution of the initial conditions) for atmospheric chemistry and data assimilation research as well as for educational purposes. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  12. The National Map: Benefits at what cost?

    USGS Publications Warehouse

    Halsing, D.L.; Theissen, K.M.; Bernknopf, R.L.

    2004-01-01

    The U.S. Geological Survey has conducted a cost-benefit analysis of The National Map and determined that, over its 30-year projected lifespan, the project will likely bring society a net present value of benefits of $2.05 billion. The National Map enhances the United States' ability to access, integrate, and apply geospatial data at global, national, and local scales. This paper gives an overview of the underlying economic model used to evaluate program benefits and presents the primary findings, as well as a sensitivity analysis assessing the robustness of the results.
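
    The net-present-value figure above rests on discounting a stream of projected benefits, and its sensitivity analysis on varying inputs such as the discount rate. A minimal sketch of that calculation, with entirely hypothetical cash flows and rates (none of the USGS model's actual inputs are reproduced here):

```python
# Minimal net-present-value sketch with hypothetical cash flows (illustrative).
def npv(rate, cash_flows):
    """Discount a list of yearly net benefits (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# An upfront cost followed by 30 years of constant net benefits.
flows = [-100.0] + [12.0] * 30

# A one-parameter sensitivity check: NPV response to the discount rate.
for rate in (0.03, 0.05, 0.07):
    print(rate, round(npv(rate, flows), 1))
```

    The sensitivity analysis in the paper generalizes this idea across the model's uncertain inputs rather than the discount rate alone.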

  13. Fiber optic sensor based on Mach-Zehnder interferometer for securing entrance areas of buildings

    NASA Astrophysics Data System (ADS)

    Nedoma, Jan; Fajkus, Marcel; Martinek, Radek; Mec, Pavel; Novak, Martin; Bednarek, Lukas; Vasinek, Vladimir

    2017-10-01

    The authors of this article focus on the use of fiber optic sensors based on interferometric measurements for securing entrance areas of buildings, such as windows and doors. We describe the implementation of a fiber-optic interferometer (Mach-Zehnder type) in a window frame or door, the sensor's sensitivity, an analysis of the background noise, and methods of signal evaluation. The advantages of the presented solution are the use of standard G.652.D telecommunication fiber, high sensitivity, the sensor's immunity to electromagnetic interference (EMI), and the sensor's passivity with respect to power supply. The authors also implemented a Graphical User Interface (GUI) that offers the possibility of remotely monitoring the presented sensing solution.

  14. Changes in reference evapotranspiration and its driving factors in the middle reaches of Yellow River Basin, China.

    PubMed

    She, Dunxian; Xia, Jun; Zhang, Yongyong

    2017-12-31

    Reference evapotranspiration (ET0) is important for agricultural, environmental and other studies, and understanding the attribution of its change helps provide information for irrigation scheduling and water resources management. The present study investigates the attribution of changes in ET0 at 49 meteorological stations in the middle reaches of the Yellow River basin (MRYRB) of China from 1960 to 2012. Results show that annual ET0 increases spatially from the northwest to the southeast of the MRYRB. We find that annual ET0 clearly presents a zigzag pattern rather than a monotonic change during the whole period. Three detected breakpoints, at 1972, 1988 and 1997, divide the whole period into four subperiods. The sensitivity analysis indicates that ET0 is most sensitive to surface solar radiation (Rs), followed by relative humidity (RH) and mean air temperature (T), and least sensitive to wind speed (u) in our study area. Furthermore, we find that ET0 became less sensitive to RH and more sensitive to T during 1960-2012. The attribution of the change in ET0 varies largely across regions and subperiods. Declining wind speed was the dominant contributor to the ET0 reduction during 1960-2012, followed by Rs. Further analysis shows that Rs and u are the two major contributing factors controlling the change of ET0 at most stations and during most subperiods. Our study confirms that the change of ET0 is influenced by complex interactions of climatic factors, and that the dominant factor differs across regions and time periods. The results presented here can provide a reference for agricultural production and water resources management in the MRYRB as well as other semi-arid and semi-humid regions. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Finding the bottom and using it

    PubMed Central

    Sandoval, Ruben M.; Wang, Exing; Molitoris, Bruce A.

    2014-01-01

    Optimizing the 2-photon parameters used in acquiring images for quantitative intravital microscopy, especially when high sensitivity is required, remains an open area of investigation. Here we present data on correctly setting the black level of the photomultiplier tube amplifier by adjusting the offset to allow for accurate quantitation of low-intensity processes. When the black level is set too high, some low-intensity pixel values become zero and a nonlinear degradation in sensitivity occurs, rendering otherwise quantifiable low-intensity values virtually undetectable. Initial studies using a series of increasing offsets for a sequence of concentrations of fluorescent albumin in vitro revealed a loss of sensitivity at higher offsets for lower albumin concentrations. A similar decrease in sensitivity, and therefore in the ability to correctly determine the glomerular permeability coefficient of albumin, occurred in vivo at higher offsets. Finding the offset that yields accurate and linear data is essential for quantitative analysis when high sensitivity is required. PMID:25313346
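
    The clipping effect described above can be illustrated with a toy model: if pixel values below the black level are recorded as zero, the ratio of recorded means between a bright and a dim sample no longer reflects the true 10:1 concentration ratio. All numbers here are made up for illustration and are not the paper's data.

```python
# Toy model of black-level clipping (illustrative, hypothetical numbers).
def recorded_mean(true_signal, offset):
    """Mean recorded intensity when values below the offset clip to zero."""
    return sum(max(0.0, s - offset) for s in true_signal) / len(true_signal)

low  = [2.0, 3.0, 4.0]      # dim sample: pixel intensities
high = [20.0, 30.0, 40.0]   # 10x brighter sample

for offset in (0.0, 3.0):
    ratio = recorded_mean(high, offset) / recorded_mean(low, offset)
    print(offset, round(ratio, 1))  # ratio inflates once clipping sets in
```

    With no offset the recorded ratio is the true 10:1; with a too-high offset the dim sample loses most of its pixels to clipping and the ratio is grossly inflated, which is the nonlinearity the authors warn about.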

  16. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using a Chebyshev polynomial based permutation and substitution and a Duffing map based substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm has a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970
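
    Two of the statistical tests named above, information entropy calculation and adjacent-pixel correlation analysis, can be sketched on a toy byte array standing in for a cipher image. This is a generic illustration of the metrics, not the authors' code: a good 8-bit cipher image should have entropy near 8 bits and adjacent-pixel correlation near zero.

```python
# Generic sketch of two cipher-image quality metrics (illustrative).
import math
import random

def shannon_entropy(pixels):
    """Shannon entropy in bits of a sequence of pixel values."""
    n = len(pixels)
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def adjacent_correlation(pixels):
    """Pearson correlation between each pixel and its neighbor."""
    x, y = pixels[:-1], pixels[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

random.seed(1)
cipher = [random.randrange(256) for _ in range(4096)]  # stand-in cipher image
print(round(shannon_entropy(cipher), 2))        # should approach 8.0
print(abs(adjacent_correlation(cipher)) < 0.1)  # near-zero correlation
```

    A plain (unencrypted) image would typically score far lower on entropy and far higher on adjacent-pixel correlation, which is what these tests are designed to expose.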

  17. Accuracy of transvaginal ultrasound for diagnosis of deep endometriosis in uterosacral ligaments, rectovaginal septum, vagina and bladder: systematic review and meta-analysis.

    PubMed

    Guerriero, S; Ajossa, S; Minguez, J A; Jurado, M; Mais, V; Melis, G B; Alcazar, J L

    2015-11-01

    To review the diagnostic accuracy of transvaginal ultrasound (TVS) in the preoperative detection of endometriosis in the uterosacral ligaments (USL), rectovaginal septum (RVS), vagina and bladder in patients with clinical suspicion of deep infiltrating endometriosis (DIE). An extensive search was performed in MEDLINE (PubMed) and EMBASE for studies published between January 1989 and December 2014. Studies were considered eligible if they reported on the use of TVS for the preoperative detection of endometriosis in the USL, RVS, vagina and bladder in women with clinical suspicion of DIE using the surgical data as a reference standard. Study quality was assessed using the PRISMA guidelines and QUADAS-2 tool. Of the 801 citations identified, 11 studies (n = 1583) were considered eligible and were included in the meta-analysis. For detection of endometriosis in the USL, the overall pooled sensitivity and specificity of TVS were 53% (95%CI, 35-70%) and 93% (95%CI, 83-97%), respectively. The pretest probability of USL endometriosis was 54%, which increased to 90% when suspicion of endometriosis was present after TVS examination. For detection of endometriosis in the RVS, the overall pooled sensitivity and specificity were 49% (95%CI, 36-62%) and 98% (95%CI, 95-99%), respectively. The pretest probability of RVS endometriosis was 24%, which increased to 89% when suspicion of endometriosis was present after TVS examination. For detection of vaginal endometriosis, the overall pooled sensitivity and specificity were 58% (95%CI, 40-74%) and 96% (95%CI, 87-99%), respectively. The pretest probability of vaginal endometriosis was 17%, which increased to 76% when suspicion of endometriosis was present after TVS assessment. Substantial heterogeneity was found for sensitivity and specificity for all these locations. For detection of bladder endometriosis, the overall pooled sensitivity and specificity were 62% (95%CI, 40-80%) and 100% (95%CI, 97-100%), respectively. 
Moderate heterogeneity was found for sensitivity and specificity for bladder endometriosis. The pretest probability of bladder endometriosis was 5%, which increased to 92% when suspicion of endometriosis was present after TVS assessment. Overall diagnostic performance of TVS for detecting DIE in uterosacral ligaments, rectovaginal septum, vagina and bladder is fair with high specificity. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
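
    The pretest-to-posttest probability updates reported above follow Bayes' theorem in odds form, with the positive likelihood ratio PLR = sensitivity / (1 - specificity). A small sketch using the USL and RVS figures from the abstract:

```python
# Bayes' theorem in odds form, applied to the pooled TVS figures above.
def posttest_probability(pretest, sensitivity, specificity):
    """Update a pretest probability after a positive test via the PLR."""
    plr = sensitivity / (1 - specificity)
    pre_odds = pretest / (1 - pretest)
    post_odds = pre_odds * plr
    return post_odds / (1 + post_odds)

# USL: sensitivity 0.53, specificity 0.93, pretest probability 0.54
print(round(posttest_probability(0.54, 0.53, 0.93), 2))  # ~0.90, as reported
# RVS: sensitivity 0.49, specificity 0.98, pretest probability 0.24
print(round(posttest_probability(0.24, 0.49, 0.98), 2))  # ~0.89, as reported
```

    This also shows why high specificity drives the large posttest jumps despite modest sensitivity: a small false-positive rate makes the PLR, and hence the odds update, large.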

  18. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  19. Optimized blind gamma-ray pulsar searches at fixed computing budget

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pletsch, Holger J.; Clark, Colin J., E-mail: holger.pletsch@aei.mpg.de

    The sensitivity of blind gamma-ray pulsar searches in multiple years' worth of photon data, as from the Fermi LAT, is primarily limited by the finite computational resources available. Addressing this 'needle in a haystack' problem, here we present methods for optimizing blind searches to achieve the highest sensitivity at fixed computing cost. For both coherent and semicoherent methods, we consider their statistical properties and study their search sensitivity under computational constraints. The results validate a multistage strategy, where the first stage scans the entire parameter space using an efficient semicoherent method and promising candidates are then refined through a fully coherent analysis. We also find that for the first stage of a blind search, incoherent harmonic summing of powers is not worthwhile at fixed computing cost for typical gamma-ray pulsars. Further enhancing sensitivity, we present efficiency-improved interpolation techniques for the semicoherent search stage. Via realistic simulations we demonstrate that overall these optimizations can significantly lower the minimum detectable pulsed fraction by almost 50% at the same computational expense.

  20. Emotional sensitization highlights the attentional bias in blood-injection-injury phobics: an ERP study.

    PubMed

    Sarlo, Michela; Buodo, Giulia; Devigili, Andrea; Munafò, Marianna; Palomba, Daniela

    2011-02-18

    The presence of an attentional bias towards disorder-related stimuli has not been consistently demonstrated in blood phobics. The present study was aimed at investigating whether or not an attentional bias, as measured by event-related potentials (ERPs), could be highlighted in blood phobics by inducing cognitive-emotional sensitization through the repetitive presentation of different disorder-related pictures. The mean amplitudes of the N100, P200, P300 and late positive potentials to picture onset were assessed along with subjective ratings of valence and arousal in 13 blood phobics and 12 healthy controls. Blood phobics, but not controls, showed a linear increase of subjective arousal over time, suggesting that cognitive-emotional sensitization did occur. The analysis of cortical responses showed larger N100 and smaller late positive potentials in phobics than in controls in response to mutilations. These findings suggest that cognitive-emotional sensitization induced an attentional bias in blood phobics during picture viewing, involving early selective encoding and late cognitive avoidance of disorder-related stimuli depicting mutilations. © 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    USGS Publications Warehouse

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching for a set of optimal parameters. Nonetheless, R-SWAT-FME is more attractive due to its instant visualization and its potential to take advantage of other R packages (e.g., for inverse modeling and statistical graphics). 
The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.

  2. GSCALite: A Web Server for Gene Set Cancer Analysis.

    PubMed

    Liu, Chun-Jie; Hu, Fei-Fei; Xia, Mengxuan; Han, Leng; Zhang, Qiong; Guo, An-Yuan

    2018-05-22

    The availability of cancer genomic data makes it possible to analyze genes related to cancer. Cancer usually results from the action of a set of genes, and the signal of a single gene can be covered by background noise. Here, we present a web server named Gene Set Cancer Analysis (GSCALite) to analyze a set of genes in cancers with the following functional modules: (i) differential expression in tumor vs. normal tissue, and survival analysis; (ii) genomic variations and their survival analysis; (iii) gene expression-associated cancer pathway activity; (iv) miRNA regulatory networks for genes; (v) drug sensitivity for genes; (vi) normal tissue expression and eQTLs for genes. GSCALite is a user-friendly web server for dynamic analysis and visualization of gene sets in cancer and of drug sensitivity correlation, and it will be of broad utility to cancer researchers. GSCALite is available at http://bioinfo.life.hust.edu.cn/web/GSCALite/. Contact: guoay@hust.edu.cn or zhangqiong@hust.edu.cn. Supplementary data are available at Bioinformatics online.

  3. (U) Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses Using Ray-Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.

  4. Contrast-enhanced swallow study sensitivity for anastomotic leak detection in post-esophagectomy patients.

    PubMed

    Mejía-Rivera, S; Pérez-Marroquín, S A; Cortés-González, R; Medina-Franco, H

    2018-03-07

    Esophagectomy is a highly invasive surgery and one of its postoperative complications is anastomotic leakage, occurring in 53% of cases. The aim of the present study was to determine the sensitivity of the contrast-enhanced swallow study as a method for diagnosing anastomotic leak in patients that underwent esophagectomy. The present retrospective study included the case records of patients that underwent esophagectomy with reconstruction and cervical anastomosis at the Instituto Nacional de Ciencias Médicas y Nutrición Salvador Zubirán within the time frame of January 1, 2000 and May 31, 2006. Demographic, clinical, and laboratory data emphasizing clinical and radiographic anastomotic leak detection were identified. Descriptive statistics were carried out and contrast-enhanced swallow study sensitivity for diagnosing leakage was calculated. Seventy patients were included in the analysis. The mean age of the patients was 50.6 years, 51 of the patients were men (72.86%), and 19 were women (27.14%). Indications for surgery were benign lesion in 29 patients (41.4%) and malignant lesion in 41 (58.6%). A total of 44.3% of the patients presented with a comorbidity, with diabetes mellitus and high blood pressure standing out. Thirty patients (42.85%) presented with anastomotic leak. Contrast-enhanced swallow study sensitivity for leak detection was 43.33%. The diagnostic sensitivity of the contrast-enhanced swallow study was very low. Therefore, we recommend the discontinuation of its routine use as a method for diagnosing anastomotic leaks. Copyright © 2018 Asociación Mexicana de Gastroenterología. Publicado por Masson Doyma México S.A. All rights reserved.

  5. Development, sensitivity and uncertainty analysis of LASH model

    USDA-ARS?s Scientific Manuscript database

    Many hydrologic models have been developed to help manage natural resources all over the world. Nevertheless, most models have presented a high complexity regarding data base requirements, as well as, many calibration parameters. This has brought serious difficulties for applying them in watersheds ...

  6. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
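
    The multivariate idea can be sketched on a deliberately tiny, hypothetical two-policy decision: sample the uncertain parameters jointly many times, and report the fraction of samples in which the base-case policy remains optimal (one point of a policy acceptability summary). The model, parameter ranges, and numbers below are invented for illustration and are not the authors' case study.

```python
# Toy probabilistic multivariate sensitivity analysis (hypothetical model).
import random

def expected_value(policy, p_response, cost):
    """Illustrative stand-in for an evaluated policy value, NOT a real MDP."""
    if policy == "treat_now":
        return p_response * 10.0 - cost
    return 0.9 * (p_response * 10.0 - cost)   # waiting discounts the benefit

def acceptability(base_policy, n=10_000, seed=0):
    """Fraction of joint parameter samples where base_policy is optimal."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        p = rng.uniform(0.4, 0.8)   # uncertain response probability
        c = rng.uniform(1.0, 5.0)   # uncertain treatment cost
        values = {pol: expected_value(pol, p, c)
                  for pol in ("treat_now", "wait")}
        if max(values, key=values.get) == base_policy:
            wins += 1
    return wins / n

print(acceptability("treat_now"))
```

    Sweeping such a confidence estimate across a stakeholder's acceptance threshold (or a willingness-to-pay threshold, for cost-effectiveness) traces out the acceptability curve and frontier described in the abstract; a real MDP would replace `expected_value` with a full policy evaluation.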

  7. Critique and sensitivity analysis of the compensation function used in the LMS Hudson River striped bass models. Environmental Sciences Division publication No. 944

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Winkle, W.; Christensen, S.W.; Kauffman, G.

    1976-12-01

    The description and justification for the compensation function developed and used by Lawler, Matusky and Skelly Engineers (LMS) (under contract to Consolidated Edison Company of New York) in their Hudson River striped bass models are presented. A sensitivity analysis of this compensation function is reported, based on computer runs with a modified version of the LMS completely mixed (spatially homogeneous) model. Two types of sensitivity analysis were performed: a parametric study involving at least five levels for each of the three parameters in the compensation function, and a study of the form of the compensation function itself, involving comparison of the LMS function with functions having no compensation at standing crops either less than or greater than the equilibrium standing crops. For the range of parameter values used in this study, estimates of percent reduction are least sensitive to changes in YS, the equilibrium standing crop, and most sensitive to changes in KXO, the minimum mortality rate coefficient. Eliminating compensation at standing crops either less than or greater than the equilibrium standing crops results in higher estimates of percent reduction. For all values of KXO and for values of YS and KX at and above the baseline values, eliminating compensation at standing crops less than the equilibrium standing crops results in a greater increase in percent reduction than eliminating compensation at standing crops greater than the equilibrium standing crops.
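
    The parametric-study design (at least five levels for each of three parameters) can be sketched as a one-at-a-time sweep around a baseline. The response function below is a placeholder, not the LMS compensation function or model; it is chosen only so that, as in the study's findings, the output responds most to KXO and least to YS.

```python
# One-at-a-time parametric sensitivity sweep (placeholder response function).
def percent_reduction(ys, kxo, kx):
    # Hypothetical stand-in for a model run, NOT the LMS model.
    return 100.0 * kxo / (kxo + 0.1 * ys + 0.5 * kx)

LEVELS = [0.5, 0.75, 1.0, 1.25, 1.5]   # five levels per parameter
BASELINE = [1.0, 1.0, 1.0]             # middle level for (YS, KXO, KX)

def one_at_a_time_range(i):
    """Output range when parameter i sweeps its levels, others at baseline."""
    outs = []
    for level in LEVELS:
        args = list(BASELINE)
        args[i] = level
        outs.append(percent_reduction(*args))
    return max(outs) - min(outs)

for i, name in enumerate(("YS", "KXO", "KX")):
    print(name, round(one_at_a_time_range(i), 2))
```

    A full parametric study would also cross the levels (a factorial sweep) to expose interactions between parameters, which a one-at-a-time design cannot.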

  8. A generalized matching law analysis of cocaine vs. food choice in rhesus monkeys: effects of candidate 'agonist-based' medications on sensitivity to reinforcement.

    PubMed

    Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L

    2015-01-01

    We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  9. Determining the Best-Fit FPGA for a Space Mission: An Analysis of Cost, SEU Sensitivity,and Reliability

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Ken

    2007-01-01

    This viewgraph presentation reviews the selection of the optimum Field Programmable Gate Arrays (FPGA) for space missions. Included in this review is a discussion on differentiating amongst various FPGAs, cost analysis of the various options, the investigation of radiation effects, an expansion of the evaluation criteria, and the application of the evaluation criteria to the selection process.

  10. Screening for Depressive Disorders Using the Mood and Anxiety Symptoms Questionnaire Anhedonic Depression Scale: A Receiver-Operating Characteristic Analysis

    ERIC Educational Resources Information Center

    Bredemeier, Keith; Spielberg, Jeffery M.; Silton, Rebecca Levin; Berenbaum, Howard; Heller, Wendy; Miller, Gregory A.

    2010-01-01

    The present study examined the utility of the anhedonic depression scale from the Mood and Anxiety Symptoms Questionnaire (MASQ-AD scale) as a way to screen for depressive disorders. Using receiver-operating characteristic analysis, we examined the sensitivity and specificity of the full 22-item MASQ-AD scale, as well as the 8- and 14-item…

  11. Effects of aircraft noise on the equilibrium of airport residents: Testing and utilization of a new methodology

    NASA Technical Reports Server (NTRS)

    Francois, J.

    1981-01-01

    The focus of the investigation is centered around two main themes: an analysis of the effects of aircraft noise on the psychological and physiological equilibrium of airport residents; and an analysis of the sources of variability of sensitivity to noise. The methodology used is presented. Nine statistical tables are included, along with a set of conclusions.

  12. Pain sensitivity mediates the relationship between stress and headache intensity in chronic tension-type headache

    PubMed Central

    Cathcart, Stuart; Bhullar, Navjot; Immink, Maarten; Della Vedova, Chris; Hayball, John

    2012-01-01

    BACKGROUND: A central model for chronic tension-type headache (CTH) posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. The prediction from this model that pain sensitivity mediates the relationship between stress and headache activity has not yet been examined. OBJECTIVE: To determine whether pain sensitivity mediates the relationship between stress and prospective headache activity in CTH sufferers. METHOD: Self-reported stress, pain sensitivity and prospective headache activity were measured in 53 CTH sufferers recruited from the general population. Pain sensitivity was modelled as a mediator between stress and headache activity, and tested using a nonparametric bootstrap analysis. RESULTS: Pain sensitivity significantly mediated the relationship between stress and headache intensity. CONCLUSIONS: The results of the present study support the central model for CTH, which posits that stress contributes to headache, in part, by aggravating existing hyperalgesia in CTH sufferers. Implications for the mechanisms and treatment of CTH are discussed. PMID:23248808
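    The nonparametric bootstrap test of mediation used above resamples cases with replacement, recomputes the indirect (a×b) effect on each resample, and checks whether the percentile confidence interval excludes zero. A sketch with synthetic data; the variable names and effect sizes are illustrative, not the study's data.

```python
import numpy as np

# Synthetic stand-ins for the three measures (stress -> pain sensitivity
# -> headache intensity); n = 53 matches the abstract's sample size.
rng = np.random.default_rng(1)
n = 53
stress = rng.normal(size=n)
pain = 0.6 * stress + rng.normal(scale=0.8, size=n)            # mediator
headache = 0.5 * pain + 0.1 * stress + rng.normal(scale=0.8, size=n)

def indirect_effect(s, m, y):
    # a-path: regress mediator on predictor; b-path: regress outcome on
    # mediator controlling for the predictor. Indirect effect = a*b.
    a = np.polyfit(s, m, 1)[0]
    X = np.column_stack([m, s, np.ones(len(s))])
    b = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)          # resample cases with replacement
    boot.append(indirect_effect(stress[idx], pain[idx], headache[idx]))
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"bootstrap 95% CI for the indirect effect: [{lo_ci:.2f}, {hi_ci:.2f}]")
# Mediation is supported when this percentile CI excludes zero.
```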

  13. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Hutchins, D. A.

    2013-01-01

    Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained. Influencing factors on sensitivity distribution were studied. Comparisons between CI probes with point-to-point triangular electrode pair and back-to-back triangular electrode pair were made based on the analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.

  14. High-precision drop shape analysis on inclining flat surfaces: introduction and comparison of this special method with commercial contact angle analysis.

    PubMed

    Schmitt, Michael; Heib, Florian

    2013-10-07

    Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies, which use contact angle measurements to analyse surfaces, are characterised by incorrect or misdirected conclusions such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces, which leads to correlations between the contact angle and other effects, must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulations of the theoretical drop profiles by numerical integration onto a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on real surfaces, additional unknown and misunderstood effects on the dependence of the surface must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, this procedure will allow the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording and by inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis. In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in combination with innovative fit algorithms and data presentations, can result in enhanced reproducibility and comparability of the contact angle measurements in terms of the material characterisation in a comprehensible way.

  15. High-precision drop shape analysis on inclining flat surfaces: Introduction and comparison of this special method with commercial contact angle analysis

    NASA Astrophysics Data System (ADS)

    Schmitt, Michael; Heib, Florian

    2013-10-01

    Drop shape analysis is one of the most important and frequently used methods to characterise surfaces in the scientific and industrial communities. An especially large number of studies, which use contact angle measurements to analyse surfaces, are characterised by incorrect or misdirected conclusions such as the determination of surface energies from poorly performed contact angle determinations. In particular, the characterisation of surfaces, which leads to correlations between the contact angle and other effects, must be critically validated for some publications. A large number of works exist concerning the theoretical and thermodynamic aspects of two- and tri-phase boundaries. The linkage between theory and experiment is generally performed by an axisymmetric drop shape analysis, that is, simulations of the theoretical drop profiles by numerical integration onto a number of points of the drop meniscus (approximately 20). These methods work very well for axisymmetric profiles such as those obtained by pendant drop measurements, but in the case of a sessile drop on real surfaces, additional unknown and misunderstood effects on the dependence of the surface must be considered. We present a special experimental and practical investigation as another way to transition from experiment to theory. This procedure was developed to be especially sensitive to small variations in the dependence of the dynamic contact angle on the surface; as a result, this procedure will allow the properties of the surface to be monitored with higher precision and sensitivity. In this context, water drops on a (111) silicon wafer are dynamically measured by video recording and by inclining the surface, which results in a sequence of non-axisymmetric drops. The drop profiles are analysed by commercial software and by the developed and presented high-precision drop shape analysis. In addition to the enhanced sensitivity for contact angle determination, this analysis technique, in combination with innovative fit algorithms and data presentations, can result in enhanced reproducibility and comparability of the contact angle measurements in terms of the material characterisation in a comprehensible way.

  16. Conservative Allowables Determined by a Tsai-Hill Equivalent Criterion for Design of Satellite Composite Parts

    NASA Astrophysics Data System (ADS)

    Pommatau, Gilles

    2014-06-01

    The present paper deals with the industrial application, via software developed by Thales Alenia Space, of a new failure criterion named the "Tsai-Hill equivalent criterion" for composite structural parts of satellites. The first part of the paper briefly describes the main hypotheses and the failure-analysis capabilities of the software. The second part recalls the quadratic and conservative nature of the new failure criterion, already presented in a previous paper at an ESA conference. The third part presents the statistical calculation possibilities of the software, and the associated sensitivity analysis, via results obtained on different composites. A methodology, proposed to customers and agencies, is then presented with its limitations and advantages. It is concluded that this methodology is an efficient industrial way to perform mechanical analysis on quasi-isotropic composite parts.

  17. Recent results of synchrotron radiation induced total reflection X-ray fluorescence analysis at HASYLAB, beamline L

    NASA Astrophysics Data System (ADS)

    Streli, C.; Pepponi, G.; Wobrauschek, P.; Jokubonis, C.; Falkenberg, G.; Záray, G.; Broekaert, J.; Fittschen, U.; Peschel, B.

    2006-11-01

    At the Hamburger Synchrotronstrahlungslabor (HASYLAB), Beamline L, a vacuum chamber for synchrotron radiation-induced total reflection X-ray fluorescence analysis is now available which can easily be installed using the adjustment components for microanalysis present at this beamline. The detector is now the final version of a Vortex silicon drift detector with a 50 mm² active area from Radiant Detector Technologies. With the Ni/C multilayer monochromator set to 17 keV, extrapolated detection limits of 8 fg were obtained using the 50 mm² silicon drift detector with 1000 s live time on a sample containing 100 pg of Ni. Various applications are presented, especially of samples which are available in very small amounts: As synchrotron radiation-induced total reflection X-ray fluorescence analysis is much more sensitive than tube-excited total reflection X-ray fluorescence analysis, the sampling time of aerosol samples can be diminished, resulting in a more precise time resolution of atmospheric events. Aerosols, directly sampled on Si reflectors in an impactor, were investigated. A further application was the determination of contamination elements in a slurry of high-purity Al₂O₃. No digestion is required; the sample is pipetted and dried before analysis. A comparison with laboratory total reflection X-ray fluorescence analysis showed the higher sensitivity of synchrotron radiation-induced total reflection X-ray fluorescence analysis: more contamination elements could be detected. Using the Si(111) crystal monochromator, also available at beamline L, XANES measurements were performed to determine the chemical state. This is only possible with lower sensitivity, as the flux transmitted by the crystal monochromator is about a factor of 100 lower than that transmitted by the multilayer monochromator. Preliminary results of X-ray absorption near-edge structure measurements for As in xylem sap from cucumber plants fed with As(III) and As(V) are reported. Detection limits of 170 ng/l of As in xylem sap were achieved.

  18. A rapid and sensitive method for the simultaneous analysis of aliphatic and polar molecules containing free carboxyl groups in plant extracts by LC-MS/MS

    PubMed Central

    2009-01-01

    Background Aliphatic molecules containing free carboxyl groups are important intermediates in many metabolic and signalling reactions; however, they accumulate to low levels in tissues and are not efficiently ionized by electrospray ionization (ESI) compared to more polar substances. Quantification of aliphatic molecules therefore becomes difficult when small amounts of tissue are available for analysis. Traditional methods for analysis of these molecules require purification or enrichment steps, which are onerous when multiple samples need to be analyzed. In contrast to aliphatic molecules, more polar substances containing free carboxyl groups such as some phytohormones are efficiently ionized by ESI and suitable for analysis by LC-MS/MS. Thus, the development of a method with which aliphatic and polar molecules (whose unmodified forms differ dramatically in their efficiency of ionization by ESI) can be simultaneously detected with similar sensitivities would substantially simplify the analysis of complex biological matrices. Results A simple, rapid, specific and sensitive method for the simultaneous detection and quantification of free aliphatic molecules (e.g., free fatty acids (FFA)) and small polar molecules (e.g., jasmonic acid (JA), salicylic acid (SA)) containing free carboxyl groups by direct derivatization of leaf extracts with picolinyl reagent followed by LC-MS/MS analysis is presented. The presence of the N atom in the esterified pyridine moiety allowed the efficient ionization of the 25 compounds tested, irrespective of their chemical structure. The method was validated by comparing the results obtained after analysis of Nicotiana attenuata leaf material with previously described analytical methods. Conclusion The method presented was used to detect 16 compounds in leaf extracts of N. attenuata plants. Importantly, the method can be adapted based on the specific analytes of interest with the only consideration that the molecules must contain at least one free carboxyl group. PMID:19939243

  19. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
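    The core of the approach above is a Monte Carlo loop in which model inputs are drawn by bootstrap resampling of patient-level data rather than from assumed theoretical distributions. A minimal sketch; the regimens, costs, cure rates, and willingness-to-pay threshold below are hypothetical, not the H. pylori study's values.

```python
import numpy as np

# Hypothetical patient-level outcomes for two eradication regimens.
rng = np.random.default_rng(2)
n_pat = 200
cost_A = rng.gamma(2.0, 150.0, n_pat)    # per-patient costs, regimen A
cost_B = rng.gamma(2.0, 180.0, n_pat)    # per-patient costs, regimen B
cured_A = rng.random(n_pat) < 0.85       # eradication indicators
cured_B = rng.random(n_pat) < 0.90

icers = []
for _ in range(1000):                    # Monte Carlo over bootstrap draws
    i = rng.integers(0, n_pat, n_pat)    # resample patients w/ replacement
    d_cost = cost_B[i].mean() - cost_A[i].mean()
    d_eff = cured_B[i].mean() - cured_A[i].mean()
    if d_eff != 0:
        icers.append(d_cost / d_eff)     # incremental cost per extra cure
frac_below = np.mean([x < 2000 for x in icers])
print(f"P(ICER < $2000 per extra cure) = {frac_below:.2f}")
```

The distribution of bootstrap ICERs, rather than a single base-case estimate, is what distinguishes the probabilistic analysis from a deterministic sensitivity analysis.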

  20. Ottawa Ankle Rules and Subjective Surgeon Perception to Evaluate Radiograph Necessity Following Foot and Ankle Sprain

    PubMed Central

    Pires, RES; Pereira, AA; Abreu-e-Silva, GM; Labronici, PJ; Figueiredo, LB; Godoy-Santos, AL; Kfuri, M

    2014-01-01

    Background: Foot and ankle injuries are frequent in emergency departments. Although only a few patients with foot and ankle sprain present fractures and the fracture patterns are almost always simple, lack of fracture diagnosis can lead to poor functional outcomes. Aim: The present study aims to evaluate the reliability of the Ottawa ankle rules and the orthopedic surgeon's subjective perception to assess foot and ankle fractures after sprains. Subjects and Methods: A cross-sectional study was conducted from July 2012 to December 2012. Ethical approval was granted. Two hundred seventy-four adult patients admitted to the emergency department with foot and/or ankle sprain were evaluated by an orthopedic surgeon who completed a questionnaire prior to radiographic assessment. The Ottawa ankle rules and subjective perception of foot and/or ankle fractures were evaluated on the questionnaire. Results: Thirteen percent (36/274) of patients presented a fracture. The orthopedic surgeons' subjective analysis showed 55.6% sensitivity, 90.1% specificity, 46.5% positive predictive value and 92.9% negative predictive value, with an overall accuracy of 85.4%. The Ottawa ankle rules presented 97.2% sensitivity, 7.8% specificity, 13.9% positive predictive value, 95% negative predictive value and 19.9% accuracy. Weight-bearing inability was the Ottawa ankle rule item that presented the highest reliability: 69.4% sensitivity, 61.6% specificity, 63.1% accuracy, 21.9% positive predictive value and 93% negative predictive value. Conclusion: The Ottawa ankle rules showed high reliability for deciding when to take radiographs in foot and/or ankle sprains. Weight-bearing inability was the most important isolated item to predict fracture presence. The surgeons' subjective analysis to predict fracture possibility showed a high specificity rate, representing a confident method to exclude unnecessary radiographic exams. PMID:24971221
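    All of the figures reported above derive from a 2×2 table of rule result against radiographic fracture. A small helper shows the arithmetic; the counts are reconstructed to be consistent with the abstract's 36 fractures among 274 patients and the Ottawa-rule sensitivity of 97.2% (i.e., 35 of 36 fractures flagged), not taken from the paper's tables.

```python
# Diagnostic metrics from a 2x2 confusion table.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # fraction of fractures flagged
        "specificity": tn / (tn + fp),   # fraction of non-fractures cleared
        "ppv": tp / (tp + fp),           # flagged patients truly fractured
        "npv": tn / (tn + fn),           # cleared patients truly fracture-free
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts (274 patients total, 36 with fracture):
m = diagnostic_metrics(tp=35, fp=220, fn=1, tn=18)
print({k: round(v, 3) for k, v in m.items()})
```

The pattern in the abstract (very high sensitivity and NPV, very low specificity and PPV) is exactly what such a table produces when a rule flags nearly everyone.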

  1. Computational Analysis of Epidermal Growth Factor Receptor Mutations Predicts Differential Drug Sensitivity Profiles toward Kinase Inhibitors.

    PubMed

    Akula, Sravani; Kamasani, Swapna; Sivan, Sree Kanth; Manga, Vijjulatha; Vudem, Dashavantha Reddy; Kancha, Rama Krishna

    2018-05-01

    A significant proportion of patients with lung cancer carry mutations in the EGFR kinase domain. The presence of a deletion mutation in exon 19 or the L858R point mutation in the EGFR kinase domain has been shown to cause enhanced efficacy of inhibitor treatment in patients with NSCLC. Several less frequent (uncommon) mutations in the EGFR kinase domain with potential implications in treatment response have also been reported. The role of a limited number of uncommon mutations in drug sensitivity was experimentally verified. However, a huge number of these mutations remain uncharacterized for inhibitor sensitivity or resistance. A large-scale computational analysis of 298 clinically reported point mutants of the EGFR kinase domain was performed, and drug sensitivity profiles for each mutant toward seven kinase inhibitors were determined by molecular docking. In addition, the relative inhibitor binding affinity toward each drug as compared with that of adenosine triphosphate was calculated for each mutant. The inhibitor sensitivity profiles predicted in this study for a set of previously characterized mutants correlated well with the published clinical, experimental, and computational data. Both the single and compound mutations displayed differential inhibitor sensitivity toward first- and next-generation kinase inhibitors. The present study provides predicted drug sensitivity profiles for a large panel of uncommon EGFR mutations toward multiple inhibitors, which may help clinicians in deciding mutant-specific treatment strategies. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  2. Sensitivity analysis of observed reflectivity to ice particle surface roughness using MISR satellite observations

    NASA Astrophysics Data System (ADS)

    Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.

    2016-12-01

    Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to be assumed in operational cloud property retrievals, and the sensitivity of retrieval products to this assumption, remain uncertain. To resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics of Multi-angle Imaging SpectroRadiometer (MISR) sensor data aboard the Terra platform will be used to define the most common viewing geometries. Using these defined geometries, reflectivity will be simulated for multiple degrees of roughness using results from adding-doubling radiative transfer simulations. Sensitivity of simulated reflectivity to surface roughness can then be quantified, thus yielding a more robust retrieval system. Secondly, sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the inclusion of surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.

  3. PCR analysis is superior to histology for diagnosis of Whipple's disease mimicking seronegative rheumatic diseases.

    PubMed

    Lehmann, P; Ehrenstein, B; Hartung, W; Dragonas, C; Reischl, U; Fleck, M

    2017-03-01

    The diagnosis of Whipple's disease (WD) is commonly confirmed by histology demonstrating Periodic Acid Schiff (PAS)-positive macrophages in the duodenal mucosa. Analysis of intestinal tissue or other specimens using polymerase chain reaction (PCR) is a more sensitive method. However, the relevance of positive PCR findings is still controversial. Therefore, we evaluated the relevance of histology and PCR findings to establishing the diagnosis of WD in a series of WD patients initially presenting with suspected rheumatic diseases. Between 2006 and 2014, 20 patients with seronegative rheumatic diseases tested positive for Tropheryma whipplei (Tw) by PCR and/or histology and were enrolled in a retrospective analysis of the diagnostic value of both procedures. Seven of the 20 cases (35%) were diagnosed with 'classic' WD as indicated by PAS-positive macrophages. In the remaining 13 patients, the presence of Tw was detected by intestinal (n = 10) or synovial PCR analysis (n = 3). Two of the 20 patients (10%) with evidence of Tw did not respond to antibiotic therapy. They were not considered to suffer from WD. Therefore, relying only on histological findings of intestinal biopsies would have missed 11 (61%) of the 18 patients with WD in our cohort. In comparison, PCR of intestinal biopsies detected Tw-DNA in 14 (93%) of the 15 WD patients evaluated. Patients with a positive histology did not differ from PCR-positive patients with regard to sex, age, or duration of disease, but more often presented with gastrointestinal symptoms. A substantial number of WD patients present without typical intestinal histology findings. Additional PCR analysis of intestinal tissue or synovial fluid increased the sensitivity of the diagnostic evaluation and should be considered particularly in patients presenting with atypical seronegative rheumatic diseases and a high-risk profile for WD.

  4. Altered sensitivity to ellagic acid in neuroblastoma cells undergoing differentiation with 12-O-tetradecanoylphorbol-13-acetate and all-trans retinoic acid.

    PubMed

    Alfredsson, Christina Fjæraa; Rendel, Filip; Liang, Qui-Li; Sundström, Birgitta E; Nånberg, Eewa

    2015-12-01

    Ellagic acid has previously been reported to induce reduced proliferation and activation of apoptosis in several tumor cell lines, including in our own previous data from non-differentiated human neuroblastoma SH-SY5Y cells. The aim of this study was to investigate whether in vitro differentiation with the phorbol ester 12-O-tetradecanoylphorbol-13-acetate or the vitamin A derivative all-trans retinoic acid altered the sensitivity to ellagic acid in SH-SY5Y cells. The methods used were cell counting and the LDH assay to evaluate cell number and cell death, flow cytometric SubG1 and TUNEL analyses for apoptosis, and western blot for the expression of apoptosis-associated proteins. In vitro differentiation was shown to reduce the sensitivity to ellagic acid with respect to cell detachment, loss of viability and activation of apoptosis. The protective effect was phenotype-specific and most prominent in all-trans retinoic acid-differentiated cultures. Differentiation-dependent up-regulation of Bcl-2 and integrin expression is introduced as a possible protective mechanism. The presented data also point to a positive correlation between proliferative activity and sensitivity to ellagic-acid-induced cell detachment. In conclusion, the presented data emphasize the need to consider the degree of neuronal differentiation and phenotype of neuroblastoma cells when discussing a potential pharmaceutical application of ellagic acid in tumor treatment. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  5. Self-consistent adjoint analysis for topology optimization of electromagnetic waves

    NASA Astrophysics Data System (ADS)

    Deng, Yongbo; Korvink, Jan G.

    2018-05-01

    In topology optimization of electromagnetic waves, the Gâteaux differentiability of the conjugate operator to the complex field variable complicates the adjoint sensitivity, which causes the originally real-valued design variable to become complex during the iterative solution procedure. The adjoint sensitivity is therefore self-inconsistent. To enforce self-consistency, the real-part operator has been used to extract the real part of the sensitivity and keep the design variable real-valued. However, this enforced self-consistency can make the derived structural topology depend unreasonably on the phase of the incident wave. To solve this problem, this article focuses on a self-consistent adjoint analysis of topology optimization problems for electromagnetic waves. The analysis is implemented by splitting the complex variables of the wave equations into their real and imaginary parts and substituting them back into the wave equations to derive coupled equations equivalent to the originals, with the infinite free space truncated by perfectly matched layers. The topology optimization problems are thereby transformed into forms defined on real instead of complex functional spaces; the adjoint analysis is carried out on real functional spaces, removing the variation of the conjugate operator; the self-consistent adjoint sensitivity is derived; and the phase-dependence problem is avoided for the derived structural topology. Several numerical examples are implemented to demonstrate the robustness of the derived self-consistent adjoint analysis.
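    The real/imaginary splitting can be sketched on a scalar Helmholtz equation, a simplified stand-in for the paper's full vector wave equations (the symbols here are illustrative, not the article's notation):

```latex
% Write E = E_R + iE_I and \epsilon = \epsilon_R + i\epsilon_I in
% \nabla^2 E + k^2 \epsilon E = 0, then separate real and imaginary parts:
\begin{aligned}
\nabla^2 E_R + k^2\,(\epsilon_R E_R - \epsilon_I E_I) &= 0,\\
\nabla^2 E_I + k^2\,(\epsilon_I E_R + \epsilon_R E_I) &= 0.
\end{aligned}
```

Each of the coupled equations involves only real-valued fields, so the adjoint system and the resulting sensitivity stay real-valued, as the abstract describes.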

  6. From web search to healthcare utilization: privacy-sensitive studies from mobile data

    PubMed Central

    Horvitz, Eric

    2013-01-01

    Objective We explore relationships between health information seeking activities and engagement with healthcare professionals via a privacy-sensitive analysis of geo-tagged data from mobile devices. Materials and methods We analyze logs of mobile interaction data stripped of individually identifiable information and location data. The data analyzed consist of time-stamped search queries and distances to medical care centers. We examine search activity that precedes the observation of salient evidence of healthcare utilization (EHU) (i.e., data suggesting that the searcher is using healthcare resources), in our case taken as queries occurring at or near medical facilities. Results We show that the time between symptom searches and observation of salient evidence of healthcare utilization depends on the acuity of symptoms. We construct statistical models that make predictions of forthcoming EHU based on observations about the current search session, prior medical search activities, and prior EHU. The predictive accuracy of the models varies (65%–90%) depending on the features used and the timeframe of the analysis, which we explore via a sensitivity analysis. Discussion We provide a privacy-sensitive analysis that can be used to generate insights about the pursuit of health information and healthcare. The findings demonstrate how large-scale studies of mobile devices can provide insights on how concerns about symptomatology lead to the pursuit of professional care. Conclusion We present new methods for the analysis of mobile logs and describe a study that provides evidence about how people transition from mobile searches on symptoms and diseases to the pursuit of healthcare in the world. PMID:22661560

  7. Differences in sensitivity to parenting depending on child temperament: A meta-analysis.

    PubMed

    Slagt, Meike; Dubas, Judith Semon; Deković, Maja; van Aken, Marcel A G

    2016-10-01

    Several models of individual differences in environmental sensitivity postulate increased sensitivity of some individuals to either stressful (diathesis-stress), supportive (vantage sensitivity), or both environments (differential susceptibility). In this meta-analysis we examine whether children vary in sensitivity to parenting depending on their temperament, and if so, which model can best be used to describe this sensitivity pattern. We tested whether associations between negative parenting and negative or positive child adjustment as well as between positive parenting and positive or negative child adjustment would be stronger among children higher on putative sensitivity markers (difficult temperament, negative emotionality, surgency, and effortful control). Longitudinal studies with children up to 18 years (k = 105 samples from 84 studies, Nmean = 6,153) that reported on a parenting-by-temperament interaction predicting child adjustment were included. We found 235 independent effect sizes for associations between parenting and child adjustment. Results showed that children with a more difficult temperament (compared with those with a more easy temperament) were more vulnerable to negative parenting, but also profited more from positive parenting, supporting the differential susceptibility model. Differences in susceptibility were expressed in externalizing and internalizing problems and in social and cognitive competence. Support for differential susceptibility for negative emotionality was, however, only present when this trait was assessed during infancy. Surgency and effortful control did not consistently moderate associations between parenting and child adjustment, providing little support for differential susceptibility, diathesis-stress, or vantage sensitivity models. Finally, parenting-by-temperament interactions were more pronounced when parenting was assessed using observations compared to questionnaires. 

  8. Comparison between Surrogate Indexes of Insulin Sensitivity/Resistance and Hyperinsulinemic Euglycemic Glucose Clamps in Rhesus Monkeys

    PubMed Central

    Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.

    2011-01-01

    The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
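    The surrogate indexes evaluated above have simple closed forms. As a sketch using the standard published formulas (conventional units and constants, not values taken from this study), with fasting glucose in mg/dL and fasting insulin in µU/mL:

```python
import math

def homa_ir(glucose_mgdl, insulin_uUml):
    """HOMA-IR: (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405."""
    return glucose_mgdl * insulin_uUml / 405.0

def quicki(glucose_mgdl, insulin_uUml):
    """QUICKI: 1 / (log10(fasting insulin) + log10(fasting glucose))."""
    return 1.0 / (math.log10(insulin_uUml) + math.log10(glucose_mgdl))

# Derived indexes also compared in the study: simple transformations
# of the same two fasting measurements.
def log_homa(glucose_mgdl, insulin_uUml):
    return math.log10(homa_ir(glucose_mgdl, insulin_uUml))

def inv_homa(glucose_mgdl, insulin_uUml):
    return 1.0 / homa_ir(glucose_mgdl, insulin_uUml)
```

    Higher HOMA (or lower QUICKI) indicates greater insulin resistance, which is why the two indexes correlate with the clamp-derived SIClamp in opposite directions.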

  9. Advanced techniques for determining long term compatibility of materials with propellants

    NASA Technical Reports Server (NTRS)

    Green, R. L.; Stebbins, J. P.; Smith, A. W.; Pullen, K. E.

    1973-01-01

    A method for the prediction of propellant-material compatibility for periods of time up to ten years is presented. Advanced sensitive measurement techniques used in the prediction method are described. These include: neutron activation analysis, radioactive tracer technique, and atomic absorption spectroscopy with a graphite tube furnace sampler. The results of laboratory tests performed to verify the prediction method are presented.

  10. Systems Architectures for a Tactical Naval Command and Control System

    DTIC Science & Technology

    2009-03-01

    Glossary excerpt: TST, Time-sensitive Targeting; TTP, Tactics, Techniques, and Procedures; WTP, weapons-target pairing. Text excerpt: surface engagement options are generated through weapon-target pairings (WTPs) and are presented to the OTC, who conducts a risk assessment of the engagement options and orders confirmed surface engagements.

  11. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference-case cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
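    The dollars-per-QALY figures above depend on discounting future QALY gains at the stated 3% annual rate. A minimal sketch of that net-present-value step (illustrative inputs, not the study's Markov model):

```python
def discounted_qalys(annual_qaly_gain, years, rate=0.03):
    """Net present value of a constant annual QALY gain discounted at `rate`."""
    return sum(annual_qaly_gain / (1.0 + rate) ** t for t in range(1, years + 1))

def cost_per_qaly(total_cost, discounted_qaly_gain):
    """Dollars spent per discounted QALY gained."""
    return total_cost / discounted_qaly_gain
```

    A sensitivity analysis then re-runs these calculations while varying utility values, costs, or recurrence rates, which is how the quoted $/QALY ranges arise.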

  12. Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA

    PubMed Central

    Baixauli-Pérez, Mª Piedad

    2017-01-01

    The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). HAZOP analysis shows that loading and unloading areas are the most sensitive areas of the plant and where the most significant danger is a fuel spill. FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis from the FTA results shows the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the staff of the plants. PMID:28665325
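    Under the usual FTA assumption of independent basic events, gate probabilities combine multiplicatively. A toy sketch (hypothetical event probabilities, not figures from this study):

```python
from functools import reduce

def and_gate(probs):
    """Top event needs all inputs to occur: product of probabilities."""
    return reduce(lambda acc, p: acc * p, probs, 1.0)

def or_gate(probs):
    """Top event needs any input to occur: complement of none occurring."""
    return 1.0 - reduce(lambda acc, p: acc * (1.0 - p), probs, 1.0)

# Toy tree: a spill occurs if (human error OR hose rupture) AND the
# containment measure fails.
p_spill = and_gate([or_gate([0.05, 0.01]), 0.1])
```

    A sensitivity analysis in this framework perturbs one basic-event probability (e.g. the human-error term) and recomputes the top event, which is how the human factor's dominance can be demonstrated.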

  13. Risk Analysis of a Fuel Storage Terminal Using HAZOP and FTA.

    PubMed

    Fuentes-Bargues, José Luis; González-Cruz, Mª Carmen; González-Gaya, Cristina; Baixauli-Pérez, Mª Piedad

    2017-06-30

    The size and complexity of industrial chemical plants, together with the nature of the products handled, mean that an analysis and control of the risks involved is required. This paper presents a methodology for risk analysis in chemical and allied industries that is based on a combination of HAZard and OPerability analysis (HAZOP) and a quantitative analysis of the most relevant risks through the development of fault trees, fault tree analysis (FTA). Results from FTA allow prioritizing the preventive and corrective measures to minimize the probability of failure. An analysis of a case study is performed; it consists of the terminal for unloading chemical and petroleum products, and the fuel storage facilities of two companies, in the port of Valencia (Spain). HAZOP analysis shows that loading and unloading areas are the most sensitive areas of the plant and where the most significant danger is a fuel spill. FTA analysis indicates that the most likely event is a fuel spill in the tank truck loading area. A sensitivity analysis from the FTA results shows the importance of the human factor in all sequences of the possible accidents, so it should be mandatory to improve the training of the staff of the plants.

  14. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
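    One accessible sensitivity analysis in this spirit is recomputing the pooled estimate with each study left out in turn. A minimal fixed-effect, inverse-variance sketch (illustrative effect sizes and variances, not data from the paper):

```python
def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

def leave_one_out(effects, variances):
    """Pooled estimate recomputed with each study removed in turn.

    A large shift when one study is dropped flags that study (or a
    cluster of nonindependent effect sizes from it) as influential.
    """
    return [
        pooled(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
        for i in range(len(effects))
    ]
```

    The same pattern extends to the procedural decisions the authors mention: rerun the pooling with and without lower-quality data, or with alternative effect-size choices, and compare.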

  15. Arecibo Pulsar Survey Using ALFA. IV. Mock Spectrometer Data Analysis, Survey Sensitivity, and the Discovery of 40 Pulsars

    NASA Astrophysics Data System (ADS)

    Lazarus, P.; Brazier, A.; Hessels, J. W. T.; Karako-Argaman, C.; Kaspi, V. M.; Lynch, R.; Madsen, E.; Patel, C.; Ransom, S. M.; Scholz, P.; Swiggum, J.; Zhu, W. W.; Allen, B.; Bogdanov, S.; Camilo, F.; Cardoso, F.; Chatterjee, S.; Cordes, J. M.; Crawford, F.; Deneva, J. S.; Ferdman, R.; Freire, P. C. C.; Jenet, F. A.; Knispel, B.; Lee, K. J.; van Leeuwen, J.; Lorimer, D. R.; Lyne, A. G.; McLaughlin, M. A.; Siemens, X.; Spitler, L. G.; Stairs, I. H.; Stovall, K.; Venkataraman, A.

    2015-10-01

    The on-going Arecibo Pulsar-ALFA (PALFA) survey began in 2004 and is searching for radio pulsars in the Galactic plane at 1.4 GHz. Here we present a comprehensive description of one of its main data reduction pipelines that is based on the PRESTO software and includes new interference-excision algorithms and candidate selection heuristics. This pipeline has been used to discover 40 pulsars, bringing the survey’s discovery total to 144 pulsars. Of the new discoveries, eight are millisecond pulsars (MSPs; P < 10 ms) and one is a Fast Radio Burst (FRB). This pipeline has also re-detected 188 previously known pulsars, 60 of them previously discovered by the other PALFA pipelines. We present a novel method for determining the survey sensitivity that accurately takes into account the effects of interference and red noise: we inject synthetic pulsar signals with various parameters into real survey observations and then attempt to recover them with our pipeline. We find that the PALFA survey achieves the sensitivity to MSPs predicted by theoretical models but suffers a degradation for P ≳ 100 ms that gradually becomes up to ~10 times worse for P > 4 s at DM < 150 pc cm⁻³. We estimate 33 ± 3% of the slower pulsars are missed, largely due to red noise. A population synthesis analysis using the sensitivity limits we measured suggests the PALFA survey should have found 224 ± 16 un-recycled pulsars in the data set analyzed, in agreement with the 241 actually detected. The reduced sensitivity could have implications on estimates of the number of long-period pulsars in the Galaxy.

  16. The relevance of the slope for concentration-effect relations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schobben, H.P.M.; Smit, M.; Schobben, J.H.M.

    1995-12-31

    Risk analysis is mostly based on a comparison of one value for the exposure to a chemical (PEC) and one value for the sensitivity of biota (NEC). This method enables the determination of an effect to be expected, but it is not possible to quantify the magnitude of that effect. Moreover, it is impossible to estimate the effect of a combination of chemicals. Therefore, it is necessary to use a mathematical function to describe the relation between a concentration and the subsequent effect. These relations are typically based on a log normal or log logistic distribution of the sensitivity of individuals of a species. This distribution is characterized by the median sensitivity (EC50) and the variation between the sensitivity of individuals (being a measure for the slope of the relation). Presently the attention is focused on the median, while the slope might be even more important. Relevant exposure concentrations are typically in the range found in the left tail of the sensitivity distribution. In this study the slope was determined for 250 chemical-species combinations. The data were derived from original experiments and from literature. The slope is highly dependent on the exposure time; the shorter the exposure time the steeper the slope. If data for a standard exposure time (96 hours) are considered, the total variation in slope can partly be explained by the groups of organisms and chemicals. The slope for heavy metals tends to be less steep as compared to the slope of narcotic organic compounds. The slope for fish and molluscs is steeper than for crustaceans. The results of this study are presently applied in a number of risk analysis studies.
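    The log-logistic concentration-effect relation described above can be written in terms of the EC50 and a slope parameter; a short sketch (hypothetical parameter values) shows why the slope dominates predictions in the left tail:

```python
def log_logistic_effect(conc, ec50, slope):
    """Fraction of individuals affected at concentration `conc`.

    Log-logistic model: effect = 1 / (1 + (EC50 / conc) ** slope).
    At conc == EC50 the effect is 0.5 regardless of slope.
    """
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

# At one-tenth of the EC50 (the left tail, where environmentally
# relevant exposures lie), a steep slope predicts far fewer affected
# individuals than a shallow one:
low = log_logistic_effect(1.0, 10.0, slope=1.0)    # shallow slope
steep = log_logistic_effect(1.0, 10.0, slope=4.0)  # steep slope
```

    This is the authors' point: two chemicals with identical EC50 values can predict effects differing by orders of magnitude at low concentrations, depending only on the slope.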

  17. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis.

    PubMed

    Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J

    2016-01-01

    (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I(2) statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I(2) point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.

  18. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis

    PubMed Central

    Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.

    2016-01-01

    Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476
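    The Bayesian nomograms referenced above encode a short odds calculation. A sketch of the posttest-probability step using the pooled prospective-subgroup figures (the 20% pretest probability is an illustrative assumption, not a value from the review):

```python
def posttest_probability(pretest_prob, sensitivity, specificity, positive=True):
    """Update a pretest probability with a test result via likelihood ratios.

    LR+ = sens / (1 - spec) for a positive result;
    LR- = (1 - sens) / spec for a negative result.
    """
    if positive:
        lr = sensitivity / (1.0 - specificity)
    else:
        lr = (1.0 - sensitivity) / specificity
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1.0 + posttest_odds)

# e.g. an assumed 20% pretest probability of malignancy and a positive
# FNA, with the pooled sensitivity 0.882 and specificity 0.995:
p = posttest_probability(0.20, 0.882, 0.995)
```

    The high specificity drives a very large LR+, so a positive FNA moves even a modest pretest probability close to certainty, while the moderate sensitivity limits how much a negative result rules malignancy out.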

  19. High frequency QRS ECG predicts ischemic defects during myocardial perfusion imaging

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Changes in high frequency QRS components of the electrocardiogram (HF QRS ECG) (150-250 Hz) are more sensitive than changes in conventional ST segments for detecting myocardial ischemia. We investigated the accuracy of 12-lead HF QRS ECG in detecting ischemia during adenosine tetrofosmin myocardial perfusion imaging (MPI). 12-lead HF QRS ECG recordings were obtained from 45 patients before and during adenosine technetium-99 tetrofosmin MPI tests. Before the adenosine infusions, recordings of HF QRS were analyzed according to a morphological score that incorporated the number, type and location of reduced amplitude zones (RAZs) present in the 12 leads. During the adenosine infusions, recordings of HF QRS were analyzed according to the maximum percentage changes (in both the positive and negative directions) that occurred in root mean square (RMS) voltage amplitudes within the 12 leads. The best set of prospective HF QRS criteria had a sensitivity of 94% and a specificity of 83% for correctly identifying the MPI result. The sensitivity of simultaneous ST segment changes (18%) was significantly lower than that of any individual HF QRS criterion (P less than 0.001). Analysis of 12-lead HF QRS ECG is highly sensitive and specific for detecting ischemic perfusion defects during adenosine MPI stress tests and significantly more sensitive than analysis of conventional ST segments.

  20. High frequency QRS ECG predicts ischemic defects during myocardial perfusion imaging

    NASA Technical Reports Server (NTRS)

    Rahman, Atiar

    2006-01-01

    Background: Changes in high frequency QRS components of the electrocardiogram (HF QRS ECG) (150-250 Hz) are more sensitive than changes in conventional ST segments for detecting myocardial ischemia. We investigated the accuracy of 12-lead HF QRS ECG in detecting ischemia during adenosine tetrofosmin myocardial perfusion imaging (MPI). Methods and Results: 12-lead HF QRS ECG recordings were obtained from 45 patients before and during adenosine technetium-99 tetrofosmin MPI tests. Before the adenosine infusions, recordings of HF QRS were analyzed according to a morphological score that incorporated the number, type and location of reduced amplitude zones (RAZs) present in the 12 leads. During the adenosine infusions, recordings of HF QRS were analyzed according to the maximum percentage changes (in both the positive and negative directions) that occurred in root mean square (RMS) voltage amplitudes within the 12 leads. The best set of prospective HF QRS criteria had a sensitivity of 94% and a specificity of 83% for correctly identifying the MPI result. The sensitivity of simultaneous ST segment changes (18%) was significantly lower than that of any individual HF QRS criterion (P<0.001). Conclusions: Analysis of 12-lead HF QRS ECG is highly sensitive and specific for detecting ischemic perfusion defects during adenosine MPI stress tests and significantly more sensitive than analysis of conventional ST segments.
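    The per-lead RMS amplitude metric analyzed above is a standard computation; a minimal sketch (synthetic sample values, not patient data):

```python
import math

def rms_amplitude(samples):
    """Root mean square of a windowed HF QRS signal for one lead."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def percent_change(baseline_rms, stress_rms):
    """Signed percentage change in RMS amplitude from baseline,
    as tracked during the adenosine infusion."""
    return 100.0 * (stress_rms - baseline_rms) / baseline_rms
```

    The study's criteria were then built from the maximum positive and negative percentage changes observed across the 12 leads.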

  1. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
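    The finite-difference technique compared above can be sketched in one dimension. An explicit upwind-advection/central-dispersion step on a periodic grid (illustrative parameters, not the sand-tank model) also reproduces the numerical dispersion the authors discuss:

```python
import numpy as np

def step(c, v, D, dx, dt):
    """One explicit step of the 1-D advection-dispersion equation.

    Upwind difference for advection (assumes v > 0), central difference
    for dispersion, periodic boundaries via np.roll.
    """
    adv = -v * (c - np.roll(c, 1)) / dx
    disp = D * (np.roll(c, -1) - 2.0 * c + np.roll(c, 1)) / dx**2
    return c + dt * (adv + disp)

# A sharp pulse smears out even with D = 0: first-order upwind
# advection is itself numerically dispersive, which is one reason
# simulated peak concentrations differed between the techniques.
c = np.zeros(50)
c[10] = 1.0
for _ in range(20):
    c = step(c, v=1.0, D=0.0, dx=1.0, dt=0.5)  # Courant number 0.5
```

    Mass is conserved on the periodic grid, but the peak decays, mimicking physical dispersion and confounding the dispersivity a calibration would estimate.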

  2. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper is to address the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), Multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre and post flight check outs and inspection, extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
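    Several of the listed drivers (number of engines, single-engine reliability, engine-out design) interact through a k-out-of-n calculation. A hedged sketch assuming independent, identical engines (hypothetical reliability values, not figures from the paper):

```python
from math import comb

def stage_reliability(n_engines, engine_rel, engine_out_allowed=0):
    """P(at least n - engine_out_allowed engines succeed), engines independent.

    engine_out_allowed > 0 models an engine-out design in which the
    stage tolerates that many benign engine shutdowns.
    """
    p, q = engine_rel, 1.0 - engine_rel
    k_min = n_engines - engine_out_allowed
    return sum(
        comb(n_engines, k) * p**k * q**(n_engines - k)
        for k in range(k_min, n_engines + 1)
    )
```

    Varying one input at a time (engine count, single-engine reliability, allowed engine-outs) while holding the rest fixed is the kind of parametric sensitivity analysis the paper reports, though the real model also folds in catastrophic fractions and switching reliability.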

  3. Size-exclusive Nanosensor for Quantitative Analysis of Fullerene C60: A Concept Paper

    EPA Science Inventory

    This paper presents the first development of a mass-sensitive nanosensor for the isolation and quantitative analyses of engineered fullerene (C60) nanoparticles, while excluding mixtures of structurally similar fullerenes. Amino-modified beta cyclodextrin (β-CD-NH

  4. A New High-sensitivity solar X-ray Spectrophotometer SphinX:early operations and databases

    NASA Astrophysics Data System (ADS)

    Gburek, Szymon; Sylwester, Janusz; Kowalinski, Miroslaw; Siarkowski, Marek; Bakala, Jaroslaw; Podgorski, Piotr; Trzebinski, Witold; Plocieniak, Stefan; Kordylewski, Zbigniew; Kuzin, Sergey; Farnik, Frantisek; Reale, Fabio

    The Solar Photometer in X-rays (SphinX) is an instrument operating aboard the Russian CORONAS-Photon satellite. A short description of this instrument will be presented and its unique capabilities discussed. SphinX is presently the most sensitive solar X-ray spectrophotometer measuring solar spectra in the energy range above 1 keV. A large archive of SphinX measurements has already been collected. General access to these measurements is possible. The SphinX data repositories contain lightcurves, spectra, and photon arrival time measurements. The SphinX data cover nearly continuously the period from the satellite launch on January 30, 2009 up to the end of November 2009. Present instrument status, data formats and data access methods will be shown. An overview of possible new science coming from SphinX data analysis will be discussed.

  5. Diagnostic value of 18F-FDG-PET/CT for the evaluation of solitary pulmonary nodules: a systematic review and meta-analysis.

    PubMed

    Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian

    2017-01-01

    To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I2 test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I2 = 81.1%) and specificity (I2 = 89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.
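    The pooled likelihood ratios reported above follow directly from the pooled sensitivity and specificity; a short sketch reproduces the arithmetic:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios from sensitivity/specificity."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = LR+ / LR-: a single summary of discriminative performance."""
    lr_pos, lr_neg = likelihood_ratios(sensitivity, specificity)
    return lr_pos / lr_neg

# Pooled values from the meta-analysis: sensitivity 0.82, specificity 0.81
lr_pos, lr_neg = likelihood_ratios(0.82, 0.81)  # ~4.3 and ~0.22, as reported
```

    (In a full meta-analysis these ratios are pooled across studies rather than computed from the pooled sensitivity and specificity, but the back-of-envelope check agrees with the reported 4.3 and 0.22.)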

  6. Evaluation and Improvement of Liquid Propellant Rocket Chugging Analysis Techniques. Part 1: A One-Dimensional Analysis of Low Frequency Combustion Instability in the Fuel Preburner of the Space Shuttle Main Engine. Final Report M.S. Thesis - Aug. 1986

    NASA Technical Reports Server (NTRS)

    Lim, Kair Chuan

    1986-01-01

    Low frequency combustion instability, known as chugging, is consistently experienced during shutdown in the fuel and oxidizer preburners of the Space Shuttle Main Engines. Such problems always occur during the helium purge of the residual oxidizer from the preburner manifolds during the shutdown sequence. Possible causes and triggering mechanisms are analyzed and details in modeling the fuel preburner chug are presented. A linearized chugging model, based on the foundation of previous models, capable of predicting the chug occurrence is discussed and the predicted results are presented and compared to experimental work performed by NASA. Sensitivity parameters such as chamber pressure, fuel and oxidizer temperatures, and the effective bulk modulus of the liquid oxidizer are considered in analyzing the fuel preburner chug. The computer program CHUGTEST is utilized to generate the stability boundary for each sensitivity study and the region for stable operation is identified.

  7. Integrating aerodynamics and structures in the minimum weight design of a supersonic transport wing

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M.; Wrenn, Gregory A.; Dovi, Augustine R.; Coen, Peter G.; Hall, Laura E.

    1992-01-01

    An approach is presented for determining the minimum weight design of aircraft wing models which takes into consideration aerodynamics-structure coupling when calculating both zeroth order information needed for analysis and first order information needed for optimization. When performing sensitivity analysis, coupling is accounted for by using a generalized sensitivity formulation. The results presented show that the aeroelastic effects are calculated properly and noticeably reduce constraint approximation errors. However, for the particular example selected, the error introduced by ignoring aeroelastic effects is not sufficient to significantly affect the convergence of the optimization process. Trade studies are reported that consider different structural materials, internal spar layouts, and panel buckling lengths. For the formulation, model and materials used in this study, an advanced aluminum material produced the lightest design while satisfying the problem constraints. Also, shorter panel buckling lengths resulted in lower weights by permitting smaller panel thicknesses and generally, by unloading the wing skins and loading the spar caps. Finally, straight spars required slightly lower wing weights than angled spars.

  8. Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM

    NASA Astrophysics Data System (ADS)

    Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz

    2018-03-01

    The article presents cost of equity estimation of capital groups from the fuel and energy sector, listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of such cost depending on the financing structure of the entity. The objective of the article, formulated in this manner, determined its structure. It focuses on presentation of substantive analyses related to the core of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, estimation of cost was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, sensitivity analysis of such cost was performed depending on the structure of financing the company's operation.
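    The CAPM estimate described above reduces to a one-line formula, r_e = r_f + β(r_m − r_f), and sensitivity to financing structure can be explored by re-levering beta. A minimal Python sketch using the Hamada relation, with entirely hypothetical rates and betas (not the article's Tauron/PGE data):

```python
# CAPM cost of equity: r_e = r_f + beta * (r_m - r_f).
# Sensitivity to financing structure via the Hamada relation:
# beta_L = beta_U * (1 + (1 - t) * D / E).  All numbers are hypothetical.

def cost_of_equity(r_f, beta, r_m):
    """Expected return on equity under CAPM."""
    return r_f + beta * (r_m - r_f)

def relever_beta(beta_u, tax_rate, debt, equity):
    """Hamada re-levering of an unlevered beta for a given D/E mix."""
    return beta_u * (1.0 + (1.0 - tax_rate) * debt / equity)

if __name__ == "__main__":
    r_f, r_m, beta_u, tax = 0.03, 0.09, 0.8, 0.19
    for d_over_e in (0.0, 0.5, 1.0, 1.5):
        beta_l = relever_beta(beta_u, tax, d_over_e, 1.0)
        r_e = cost_of_equity(r_f, beta_l, r_m)
        print(f"D/E={d_over_e:.1f}  beta={beta_l:.3f}  r_e={r_e:.4f}")
```

    Sweeping D/E shows the cost of equity rising with leverage, which is the kind of financing-structure sensitivity the article examines.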

  9. Probing the pH sensitivity of R-phycoerythrin: investigations of active conformational and functional variation.

    PubMed

    Liu, Lu-Ning; Su, Hai-Nan; Yan, Shi-Gan; Shao, Si-Mi; Xie, Bin-Bin; Chen, Xiu-Lan; Zhang, Xi-Ying; Zhou, Bai-Cheng; Zhang, Yu-Zhong

    2009-07-01

    Crystal structures of phycobiliproteins have provided valuable information regarding the conformations and amino acid organizations of peptides and chromophores, and enable us to investigate their structural and functional relationships with respect to environmental variations. In this work, we explored the pH-induced conformational and functional dynamics of R-phycoerythrin (R-PE) by means of absorption, fluorescence and circular dichroism spectra, together with analysis of its crystal structure. In the pH range of 3.5-10, R-PE shows stronger functional stability than structural stability. Beyond this range, pronounced functional and structural changes occur. Crystal structure analysis shows that the tertiary structure of R-PE is fixed by several key anchoring points of the protein. With this specific association, the fundamental structure of R-PE is stabilized to present physiological spectroscopic properties, while local variations in protein peptides are also allowed in response to environmental disturbances. The functional stability and relative structural sensitivity of R-PE allow environmental adaptation.

  10. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft applications are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found to be satisfactory, but problems may arise in correctly identifying the mode of a failure. These issues are closely examined, as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
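    The thesis applies GLR to Kalman-Bucy filter residuals; as a textbook-level illustration only, the statistic for detecting a bias-type failure (a mean shift in otherwise zero-mean Gaussian residuals with known variance) can be sketched as:

```python
def glr_mean_shift(residuals, sigma):
    """Twice the log generalized likelihood ratio for H1 (unknown nonzero
    mean) against H0 (zero mean), with known noise level sigma.  Under H0
    the statistic is asymptotically chi-square with one degree of freedom."""
    n = len(residuals)
    rbar = sum(residuals) / n
    return n * rbar * rbar / (sigma * sigma)

if __name__ == "__main__":
    import random
    random.seed(0)
    sigma = 1.0
    healthy = [random.gauss(0.0, sigma) for _ in range(200)]
    failed = [random.gauss(0.5, sigma) for _ in range(200)]  # bias-type failure
    threshold = 6.63  # chi-square(1) critical value, ~1% false-alarm rate
    print("healthy alarm:", glr_mean_shift(healthy, sigma) > threshold)
    print("failed alarm:", glr_mean_shift(failed, sigma) > threshold)
```

    The real detector in the thesis maximizes the likelihood ratio over failure time and type as well; this sketch keeps only the mean-shift core of the idea.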

  11. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Madhu Sudhan; Wang, Zhiqiang

    This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in electric vehicle applications. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on an analytical modeling approach, which includes the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of primary side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing frequency or voltage alone, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.

  13. A contact-area model for rail-pads connections in 2-D simulations: sensitivity analysis of train-induced vibrations

    NASA Astrophysics Data System (ADS)

    Ferrara, R.; Leonardi, G.; Jourdan, F.

    2013-09-01

    A numerical model to predict train-induced vibrations is presented. The dynamic computation considers mutual interactions in vehicle/track coupled systems by means of a finite and discrete elements method. Rail defects and the case of out-of-round wheels are considered. The dynamic interaction between the wheel-sets and the rail is modeled using the non-linear Hertzian model with hysteresis damping. A sensitivity analysis is performed to identify the variables that most affect maintenance costs. The rail-sleeper contact is assumed to extend over an area-defined contact zone, rather than the usual single-point assumption, which better fits real case studies. Experimental validations show that the predictions fit the experimental data well.
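    The non-linear Hertzian contact with hysteresis damping mentioned above is commonly written in a Hunt-Crossley-type form, F = k·δ^(3/2)·(1 + χ·δ̇) for penetration δ > 0. A minimal sketch with illustrative k and χ values, not the paper's parameters:

```python
def hertz_contact_force(delta, delta_dot, k=1.0e11, chi=0.3):
    """Nonlinear Hertzian normal force with Hunt-Crossley-type hysteresis
    damping: F = k * delta**1.5 * (1 + chi * delta_dot).
    delta: penetration [m]; delta_dot: penetration rate [m/s].
    k [N/m^1.5] and chi [s/m] are illustrative, not from the paper."""
    if delta <= 0.0:  # wheel and rail not in contact
        return 0.0
    return k * delta**1.5 * (1.0 + chi * delta_dot)
```

    Because the damping term is multiplied by the Hertzian stiffness term, the loading and unloading branches differ, which is what produces the hysteresis loop in the contact force.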

  14. Neurosensory analysis of tooth sensitivity during at-home dental bleaching: a randomized clinical trial.

    PubMed

    Briso, André Luiz Fraga; Rahal, Vanessa; Azevedo, Fernanda Almeida de; Gallinari, Marjorie de Oliveira; Gonçalves, Rafael Simões; Santos, Paulo Henrique Dos; Cintra, Luciano Tavares Angelo

    2018-01-01

    Objective The objective of this study was to evaluate dental sensitivity using a visual analogue scale, a Computerized Visual Analogue Scale (CoVAS) and a neurosensory analyzer (TSA II) during at-home bleaching with 10% carbamide peroxide, with and without potassium oxalate. Materials and Methods Power Bleaching 10% containing potassium oxalate was used on one maxillary hemi-arch of the 25 volunteers, and Opalescence 10% was used on the opposite hemi-arch. Bleaching agents were used daily for 3 weeks. Analysis was performed before treatment, 24 hours later, 7, 14, and 21 days after the start of the treatment, and 7 days after its conclusion. The spontaneous tooth sensitivity was evaluated using the visual analogue scale and the sensitivity caused by a continuous 0°C stimulus was analyzed using CoVAS. The cold sensation threshold was also analyzed using the TSA II. The temperatures obtained were statistically analyzed using ANOVA and Tukey's test (α=5%). Results The data obtained with the other methods were also analyzed. At 24 hours, 7, and 14 days after the beginning of the treatment, over 20% of the teeth presented spontaneous sensitivity; the normal condition was restored after the end of the treatment. Regarding the cold sensation temperatures, both products sensitized the teeth (p<0.05) and no differences were detected between the products in each period (p>0.05). In addition, when they were compared using CoVAS, Power Bleaching caused the highest levels of sensitivity in all study periods, with the exception of the 14th day of treatment. Conclusion We concluded that the bleaching treatment sensitized the teeth and the product with potassium oxalate was not able to modulate tooth sensitivity.

  15. Neurosensory analysis of tooth sensitivity during at-home dental bleaching: a randomized clinical trial

    PubMed Central

    Briso, André Luiz Fraga; Rahal, Vanessa; de Azevedo, Fernanda Almeida; Gallinari, Marjorie de Oliveira; Gonçalves, Rafael Simões; dos Santos, Paulo Henrique; Cintra, Luciano Tavares Angelo

    2018-01-01

    Abstract Objective The objective of this study was to evaluate dental sensitivity using a visual analogue scale, a Computerized Visual Analogue Scale (CoVAS) and a neurosensory analyzer (TSA II) during at-home bleaching with 10% carbamide peroxide, with and without potassium oxalate. Materials and Methods Power Bleaching 10% containing potassium oxalate was used on one maxillary hemi-arch of the 25 volunteers, and Opalescence 10% was used on the opposite hemi-arch. Bleaching agents were used daily for 3 weeks. Analysis was performed before treatment, 24 hours later, 7, 14, and 21 days after the start of the treatment, and 7 days after its conclusion. The spontaneous tooth sensitivity was evaluated using the visual analogue scale and the sensitivity caused by a continuous 0°C stimulus was analyzed using CoVAS. The cold sensation threshold was also analyzed using the TSA II. The temperatures obtained were statistically analyzed using ANOVA and Tukey's test (α=5%). Results The data obtained with the other methods were also analyzed. At 24 hours, 7, and 14 days after the beginning of the treatment, over 20% of the teeth presented spontaneous sensitivity; the normal condition was restored after the end of the treatment. Regarding the cold sensation temperatures, both products sensitized the teeth (p<0.05) and no differences were detected between the products in each period (p>0.05). In addition, when they were compared using CoVAS, Power Bleaching caused the highest levels of sensitivity in all study periods, with the exception of the 14th day of treatment. Conclusion We concluded that the bleaching treatment sensitized the teeth and the product with potassium oxalate was not able to modulate tooth sensitivity. PMID:29742258

  16. Comparative DNA microarray analysis of human monocyte derived dendritic cells and MUTZ-3 cells exposed to the moderate skin sensitizer cinnamaldehyde

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Python, Francois; Goebel, Carsten; Aeby, Pierre

    2009-09-15

    The number of studies involved in the development of in vitro skin sensitization tests has increased since the adoption of the EU 7th amendment to the cosmetics directive, proposing to ban animal testing for cosmetic ingredients by 2013. Several studies have recently demonstrated that sensitizers induce a relevant up-regulation of activation markers such as CD86, CD54, IL-8 or IL-1β in human myeloid cell lines (e.g., U937, MUTZ-3, THP-1) or in human peripheral blood monocyte-derived dendritic cells (PBMDCs). The present study aimed at the identification of new dendritic cell activation markers in order to further improve the in vitro evaluation of the sensitizing potential of chemicals. We have compared the gene expression profiles of PBMDCs and the human cell line MUTZ-3 after a 24-h exposure to the moderate sensitizer cinnamaldehyde. A list of 80 genes modulated in both cell types was obtained and a set of candidate marker genes was selected for further analysis. Cells were exposed to selected sensitizers and non-sensitizers for 24 h and gene expression was analyzed by quantitative real-time reverse transcriptase-polymerase chain reaction. Results indicated that PIR, TRIM16 and two Nrf2-regulated genes, CES1 and NQO1, are modulated by most sensitizers. Up-regulation of these genes could also be observed in our recently published DC-activation test with U937 cells. Due to their role in DC activation, these new genes may help to further refine the in vitro approaches for the screening of the sensitizing properties of a chemical.

  17. EML1 (CNG-Modulin) Controls Light Sensitivity in Darkness and under Continuous Illumination in Zebrafish Retinal Cone Photoreceptors

    PubMed Central

    Mehta, Milap; Tserentsoodol, Nomingerel; Postlethwait, John H.; Rebrik, Tatiana I.

    2013-01-01

    The ligand sensitivity of cGMP-gated (CNG) ion channels in cone photoreceptors is modulated by CNG-modulin, a Ca2+-binding protein. We investigated the functional role of CNG-modulin in phototransduction in vivo in morpholino-mediated gene knockdown zebrafish. Through comparative genomic analysis, we identified the orthologue gene of CNG-modulin in zebrafish, eml1, an ancient gene present in the genome of all vertebrates sequenced to date. We compare the photoresponses of wild-type cones with those of cones that do not express the EML1 protein. In the absence of EML1, dark-adapted cones are ∼5.3-fold more light sensitive than wild-type cones. Previous qualitative studies in several nonmammalian species have shown that immediately after the onset of continuous illumination, cones are less light sensitive than in darkness, but sensitivity then recovers over the following 15–20 s. We characterize light sensitivity recovery in continuously illuminated wild-type zebrafish cones and demonstrate that sensitivity recovery does not occur in the absence of EML1. PMID:24198367

  18. EML1 (CNG-modulin) controls light sensitivity in darkness and under continuous illumination in zebrafish retinal cone photoreceptors.

    PubMed

    Korenbrot, Juan I; Mehta, Milap; Tserentsoodol, Nomingerel; Postlethwait, John H; Rebrik, Tatiana I

    2013-11-06

    The ligand sensitivity of cGMP-gated (CNG) ion channels in cone photoreceptors is modulated by CNG-modulin, a Ca(2+)-binding protein. We investigated the functional role of CNG-modulin in phototransduction in vivo in morpholino-mediated gene knockdown zebrafish. Through comparative genomic analysis, we identified the orthologue gene of CNG-modulin in zebrafish, eml1, an ancient gene present in the genome of all vertebrates sequenced to date. We compare the photoresponses of wild-type cones with those of cones that do not express the EML1 protein. In the absence of EML1, dark-adapted cones are ∼5.3-fold more light sensitive than wild-type cones. Previous qualitative studies in several nonmammalian species have shown that immediately after the onset of continuous illumination, cones are less light sensitive than in darkness, but sensitivity then recovers over the following 15-20 s. We characterize light sensitivity recovery in continuously illuminated wild-type zebrafish cones and demonstrate that sensitivity recovery does not occur in the absence of EML1.

  19. Study Pollution Impacts on Upper-Tropospheric Clouds with Aura, CloudSat, and CALIPSO Data

    NASA Technical Reports Server (NTRS)

    Wu, Dong

    2007-01-01

    This viewgraph presentation reviews the impact of pollution on clouds in the upper troposphere. Using data from the Aura Microwave Limb Sounder (MLS), CloudSat, and CALIPSO, the presentation shows signatures of pollution impacts on clouds in the upper troposphere. It demonstrates the complementary sensitivities of MLS, CloudSat and CALIPSO to upper tropospheric clouds, and notes the careful analysis required to separate microphysical changes from dynamical changes.

  20. Spectral negentropy based sidebands and demodulation analysis for planet bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2017-12-01

    Planet bearing vibration signals are highly complex due to intricate kinematics (involving both revolution and spinning) and strong multiple modulations (including not only the fault induced amplitude modulation and frequency modulation, but also additional amplitude modulations due to load zone passing, time-varying vibration transfer path, and time-varying angle between the gear pair mesh lines of action and fault impact force vector), leading to difficulty in fault feature extraction. Rolling element bearing fault diagnosis essentially relies on detection of fault induced repetitive impulses carried by resonance vibration, but they are usually contaminated by noise and are therefore hard to detect. This further adds complexity to planet bearing diagnostics. Spectral negentropy is able to reveal the frequency distribution of repetitive transients, thus providing an approach to identify the optimal frequency band of a filter for separating repetitive impulses. In this paper, we find the informative frequency band (including the center frequency and bandwidth) of bearing fault induced repetitive impulses using the spectral negentropy based infogram. In the Fourier spectrum, we identify planet bearing faults according to sideband characteristics around the center frequency. For demodulation analysis, we extract the sensitive component based on the informative frequency band revealed by the infogram. In the amplitude demodulated spectrum (squared envelope spectrum) of the sensitive component, we diagnose planet bearing faults by matching the present peaks with the theoretical fault characteristic frequencies. We further decompose the sensitive component into mono-component intrinsic mode functions (IMFs) to estimate their instantaneous frequencies, and select a sensitive IMF with an instantaneous frequency fluctuating around the center frequency for frequency demodulation analysis. In the frequency demodulated spectrum (Fourier spectrum of instantaneous frequency) of the selected IMF, we identify planet bearing fault causes from the peaks present. The proposed spectral negentropy infogram based spectrum and demodulation analysis method is illustrated via analysis of a numerically simulated signal. Considering the unique load bearing feature of planet bearings, experimental validations under both no-load and loading conditions are performed to verify the derived fault symptoms and the proposed method. The localized faults on outer race, rolling element and inner race are successfully diagnosed.
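    The amplitude-demodulation step described above (squared envelope spectrum of the band-filtered sensitive component) can be sketched in plain Python: build the analytic signal from a one-sided spectrum, square its magnitude, and take the spectrum of the result. A toy amplitude-modulated signal stands in for the filtered vibration:

```python
import cmath, math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def squared_envelope_spectrum(x):
    """Analytic signal via a one-sided spectrum, then the DFT magnitude
    of the squared envelope (amplitude demodulation)."""
    n = len(x)
    X = dft(x)
    # Keep DC and Nyquist, double the positive bins, zero the negative bins.
    H = [0.0] * n
    H[0] = 1.0
    if n % 2 == 0:
        H[n // 2] = 1.0
        for k in range(1, n // 2):
            H[k] = 2.0
    else:
        for k in range(1, (n + 1) // 2):
            H[k] = 2.0
    analytic = idft([X[k] * H[k] for k in range(n)])
    env2 = [abs(a) ** 2 for a in analytic]  # squared envelope
    return [abs(c) for c in dft(env2)]

if __name__ == "__main__":
    n, f_carrier, f_mod = 256, 32, 4  # carrier ~ resonance, f_mod ~ fault rate
    x = [(1.0 + 0.5 * math.cos(2 * math.pi * f_mod * t / n))
         * math.cos(2 * math.pi * f_carrier * t / n) for t in range(n)]
    ses = squared_envelope_spectrum(x)
    peak = max(range(1, n // 2), key=lambda k: ses[k])
    print("dominant modulation bin:", peak)  # lands at f_mod
```

    The dominant non-DC peak of the squared envelope spectrum sits at the modulation frequency, which is how the method matches peaks against theoretical fault characteristic frequencies; a real implementation would use an FFT and the infogram-selected band-pass filter first.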

  1. Ultra-low background mass spectrometry for rare-event searches

    NASA Astrophysics Data System (ADS)

    Dobson, J.; Ghag, C.; Manenti, L.

    2018-01-01

    Inductively Coupled Plasma Mass Spectrometry (ICP-MS) allows for rapid, high-sensitivity determination of trace impurities, notably the primordial radioisotopes 238U and 232Th, in candidate materials for low-background rare-event search experiments. We describe the setup and characterisation of a dedicated low-background screening facility at University College London where we operate an Agilent 7900 ICP-MS. The impact of reagent and carrier gas purity is evaluated and we show that twice-distilled ROMIL-SpA™-grade nitric acid and zero-grade Ar gas deliver similar sensitivity to ROMIL-UpA™-grade acid and research-grade gas. A straightforward procedure for sample digestion and analysis of materials with U/Th concentrations down to 10 ppt g/g is presented. This includes the use of 233U and 230Th spikes to correct for signal loss from a range of sources and verification of 238U and 232Th recovery through digestion and analysis of a certified reference material with a complex sample matrix. Finally, we demonstrate assays and present results from two sample preparation and assay methods: a high-sensitivity measurement of ultra-pure Ti using open digestion techniques, and a closed vessel microwave digestion of a nickel-chromium alloy using a multi-acid mixture.
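    The 233U/230Th spike correction amounts to estimating recovery from the tracer and scaling the analyte reading accordingly. A minimal sketch of that bookkeeping, with hypothetical counts and a single counts-per-ng sensitivity assumed common to spike and analyte (real calibrations are isotope- and instrument-specific):

```python
def spike_corrected_concentration(counts_analyte, counts_spike,
                                  spike_added_ng, sample_mass_g,
                                  sensitivity_counts_per_ng=1.0):
    """Correct an ICP-MS analyte reading (e.g. 238U) for losses during
    digestion using a tracer spike of known mass (e.g. 233U).
    Recovery is estimated from the spike and applied to the analyte.
    Assumes equal sensitivity for spike and analyte (illustrative only)."""
    expected_spike_counts = spike_added_ng * sensitivity_counts_per_ng
    recovery = counts_spike / expected_spike_counts
    corrected_ng = counts_analyte / (sensitivity_counts_per_ng * recovery)
    return corrected_ng / sample_mass_g  # concentration in ng/g
```

    For example, if only 80% of the spike survives the digestion, the analyte signal is scaled up by the same factor before dividing by sample mass.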

  2. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
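    Of the derivative techniques listed (finite difference, direct differentiation, adjoint), the "numerical derivatives with complex variables" refer to the complex-step method, which avoids the subtractive cancellation that limits finite differences. A minimal sketch on a scalar function, not the SOFC model:

```python
import cmath, math

def complex_step_derivative(f, x, h=1e-30):
    """df/dx ~= Im(f(x + i*h)) / h.  No subtraction of nearly equal
    numbers occurs, so h can be tiny and the result is accurate to
    roughly machine precision."""
    return f(complex(x, h)).imag / h

def central_difference(f, x, h=1e-6):
    """Second-order finite difference; accuracy limited by cancellation."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

if __name__ == "__main__":
    f = lambda v: cmath.exp(v) * cmath.sin(v)  # d/dx = e^x (sin x + cos x)
    exact = math.exp(1.0) * (math.sin(1.0) + math.cos(1.0))
    print("complex-step error:", abs(complex_step_derivative(f, 1.0) - exact))
    print("finite-diff error: ", abs(central_difference(f, 1.0).real - exact))
```

    The function must be implemented with complex-capable operations (hence `cmath`), which is the same requirement the dissertation's CFD code faces when using this technique.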

  3. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solution methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
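    The incremental ("delta") form replaces a one-shot solve of Ax = b with repeated corrections driven by the residual: compute r = b − Ax, obtain a correction from an approximate operator, and update x ← x + Δx. A minimal sketch on a small system, with a diagonal preconditioner standing in for the approximate solver (illustrative only, not the paper's CFD operators):

```python
def solve_incremental(A, b, M_inv, tol=1e-12, max_iter=50):
    """Solve A x = b in incremental ('delta') form: compute the residual
    r = b - A x, obtain a correction dx from an approximate inverse
    M_inv ~= A^-1, and update x <- x + dx.  Converges whenever the
    iteration matrix (I - M_inv A) is a contraction."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        if max(abs(ri) for ri in r) < tol:
            break
        dx = [sum(M_inv[i][j] * r[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dx[i] for i in range(n)]
    return x

if __name__ == "__main__":
    A = [[4.0, 1.0], [1.0, 3.0]]
    b = [1.0, 2.0]
    M_inv = [[0.25, 0.0], [0.0, 1.0 / 3.0]]  # inverse of diag(A) as preconditioner
    print(solve_incremental(A, b, M_inv))    # -> approx [1/11, 7/11]
```

    A benefit the paper emphasizes: because each step only needs the residual and an approximate solve, the conditioning burden shifts from A itself to the (easier) approximate operator.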

  4. Reproducing Kernel Particle Method in Plasticity of Pressure-Sensitive Material with Reference to Powder Forming Process

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Samimi, M.; Azami, A. R.

    2007-02-01

    In this paper, an application of the reproducing kernel particle method (RKPM) is presented for the plasticity behavior of pressure-sensitive materials. The RKPM technique is implemented in large deformation analysis of the powder compaction process. The RKPM shape function and its derivatives are constructed by imposing the consistency conditions. The essential boundary conditions are enforced by the use of the penalty approach. The support of the RKPM shape function covers the same set of particles during powder compaction, hence no instability is encountered in the large deformation computation. A double-surface plasticity model is developed for the numerical simulation of pressure-sensitive material. The plasticity model includes a failure surface and an elliptical cap, which closes the open space between the failure surface and the hydrostatic axis. The moving cap expands in the stress space according to a specified hardening rule. The cap model is presented within the framework of large deformation RKPM analysis in order to predict the non-uniform relative density distribution during powder die pressing. Numerical computations are performed to demonstrate the applicability of the algorithm in modeling of powder forming processes and the results are compared to those obtained from finite element simulation to demonstrate the accuracy of the proposed model.

  5. Assessing sensitivity and specificity of the Manchester Triage System in the evaluation of acute coronary syndrome in adult patients in emergency care: a systematic review.

    PubMed

    Nishi, Fernanda Ayache; de Oliveira Motta Maia, Flávia; de Souza Santos, Itamar; de Almeida Lopes Monteiro da Cruz, Dina

    2017-06-01

    Triage is the first assessment and sorting process used to prioritize patients arriving in the emergency department (ED). As a triage tool, the Manchester Triage System (MTS) must have a high sensitivity to minimize the occurrence of under-triage, but must not compromise specificity, to avoid the occurrence of over-triage. Sensitivity and specificity of the MTS can be calculated using the frequency of appropriately assigned clinical priority levels for patients presenting to the ED. However, although there are well-established criteria for the prioritization of patients with suspected acute coronary syndrome (ACS), several studies have reported difficulties when evaluating patients with this condition. The objective of this review was to synthesize the best available evidence on the sensitivity and specificity of the MTS for screening high-priority adult patients presenting to the ED with ACS. The current review considered studies that evaluated the use of the MTS in the risk classification of adult patients in the ED. Studies were included if they investigated the priority level established by the MTS for patients with suspected ACS, or the sensitivity and specificity of the MTS for screening patients before the medical diagnosis of ACS. This review included both experimental and epidemiological study designs. The results were presented in a narrative synthesis. Six studies were appraised by the independent reviewers. All appraised studies enrolled a consecutive or random sample of patients and presented an overall moderate methodological quality, and all of them were included in this review. A total of 54,176 participants were included in the six studies. All studies were retrospective. Studies included in this review varied in content and data reporting. Only two studies reported sensitivity and specificity values or all the necessary data to calculate sensitivity and specificity.
The remaining four studies presented either a sensitivity analysis or the number of true positives and false negatives. However, these four studies were conducted considering only data from patients diagnosed with ACS. Sensitivity values were relatively uniform among the studies: 0.70-0.80. A specificity of 0.59 was reported in the study including only patients with non-traumatic chest pain. On the other hand, in the study that included patients with any complaint, the specificity of MTS to screen patients with ACS was 0.97. The current review demonstrates that the MTS has a moderate sensitivity to evaluate patients with ACS. This may compromise time to treatment in the ED, an important variable in the prognosis of ACS. Atypical presentation of ACS, or high specificity, may also explain the moderate sensitivity demonstrated in this review. However, because of minimal data, it is not possible to confirm this hypothesis. It is difficult to determine the acceptable level of sensitivity or specificity to ensure that a certain triage system is safe.
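    The sensitivity and specificity figures discussed above come from standard confusion-matrix ratios. A minimal sketch with hypothetical triage counts, not data from the reviewed studies:

```python
def sensitivity_specificity(tp, fn, fp, tn):
    """Sensitivity = TP/(TP+FN): share of ACS patients triaged high-priority.
    Specificity = TN/(TN+FP): share of non-ACS patients not over-triaged."""
    return tp / (tp + fn), tn / (tn + fp)

if __name__ == "__main__":
    # Hypothetical counts, not taken from the reviewed studies.
    sens, spec = sensitivity_specificity(tp=75, fn=25, fp=300, tn=9600)
    print(f"sensitivity={sens:.2f}  specificity={spec:.2f}")  # 0.75, 0.97
```

    The review's point follows directly from the denominators: a study enrolling only diagnosed ACS patients has no FP/TN cells, so it can report sensitivity but not specificity.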

  6. The art of maturity modeling. Part 2. Alternative models and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waples, D.W.; Suizu, Masahiro; Kamata, Hiromi

    1992-01-01

    The sensitivity of exploration decisions to variations in several input parameters for maturity modeling was examined for the MITI Rumoi well, Hokkaido, Japan. Decisions were almost completely insensitive to uncertainties about formation age and erosional removal across some unconformities, but were more sensitive to changes in removal during unconformities which occurred near maximum paleotemperatures. Exploration decisions were not very sensitive to the choice of a particular kinetic model for hydrocarbon generation. Uncertainties in kerogen type and the kinetics of different kerogen types are more serious than differences among the various kinetic models. Results of modeling using the TTI method were unsatisfactory. Thermal history and timing and amount of hydrocarbon generation estimated or calculated using the TTI method were greatly different from those obtained using a purely kinetic model. The authors strongly recommend use of the kinetic Ro method instead of the TTI method. If they had lacked measured Ro data, subsurface temperature data, or both, their confidence in the modeling results would have been sharply reduced. Conceptual models for predicting heat flow and thermal conductivity are simply too weak at present to allow one to carry out highly meaningful modeling unless the input is constrained by measured data. Maturity modeling therefore requires the use of more, not fewer, measured temperature and maturity data. The use of sensitivity analysis in maturity modeling is very important for understanding the geologic system, for knowing what level of confidence to place on the results, and for determining what new types of data would be most necessary to improve confidence. Sensitivity analysis can be carried out easily using a rapid, interactive maturity-modeling program.
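    The kinetic approach the authors recommend integrates first-order Arrhenius kinetics over the burial thermal history, in contrast to the empirical TTI index. A toy single-activation-energy sketch (real kerogen kinetic models use a distribution of activation energies; A and Ea below are illustrative, not calibrated values):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def fraction_converted(thermal_history, A=1.0e14, Ea=218e3):
    """Integrate first-order Arrhenius kinetics dX/dt = A*exp(-Ea/RT)*(1-X)
    over a list of (time_step_s, temperature_K) intervals, each assumed
    isothermal.  Single activation energy for illustration only."""
    x = 0.0
    for dt, temp_k in thermal_history:
        k = A * math.exp(-Ea / (R * temp_k))
        x = 1.0 - (1.0 - x) * math.exp(-k * dt)  # exact isothermal update
    return x

if __name__ == "__main__":
    myr = 3.15e13  # seconds per million years
    # 100 Myr of burial heating in 1-Myr steps (hypothetical history)
    history = [(myr, 320.0 + 1.0 * i) for i in range(100)]
    print(f"fraction of kerogen converted: {fraction_converted(history):.3f}")
```

    Sensitivity analysis in this setting means perturbing the thermal history (heat flow, removal at unconformities) and the kinetic parameters, then seeing how much the converted fraction and its timing move.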

  7. Reduction and Uncertainty Analysis of Chemical Mechanisms Based on Local and Global Sensitivities

    NASA Astrophysics Data System (ADS)

    Esposito, Gaetano

    Numerical simulations of critical reacting flow phenomena in hypersonic propulsion devices require accurate representation of finite-rate chemical kinetics. The chemical kinetic models available for hydrocarbon fuel combustion are rather large, involving hundreds of species and thousands of reactions. As a consequence, they cannot be used in multi-dimensional computational fluid dynamic calculations in the foreseeable future due to the prohibitive computational cost. In addition to the computational difficulties, it is also known that some fundamental chemical kinetic parameters of detailed models have a significant level of uncertainty due to limited experimental data available and to poor understanding of interactions among kinetic parameters. In the present investigation, local and global sensitivity analysis techniques are employed to develop a systematic approach of reducing and analyzing detailed chemical kinetic models. Unlike previous studies in which skeletal model reduction was based on the separate analysis of simple cases, in this work a novel strategy based on Principal Component Analysis of local sensitivity values is presented. This new approach is capable of simultaneously taking into account all the relevant canonical combustion configurations over different composition, temperature and pressure conditions. Moreover, the procedure developed in this work represents the first documented inclusion of non-premixed extinction phenomena, which is of great relevance in hypersonic combustors, in an automated reduction algorithm. The application of the skeletal reduction to a detailed kinetic model consisting of 111 species in 784 reactions is demonstrated. The resulting reduced skeletal model of 37-38 species showed that the global ignition/propagation/extinction phenomena of ethylene-air mixtures can be predicted within an accuracy of 2% of the full detailed model.
    The problems of understanding non-linear interactions between kinetic parameters and identifying the sources of uncertainty affecting relevant reaction pathways are usually addressed with Global Sensitivity Analysis (GSA) techniques. In particular, the most sensitive reactions controlling combustion phenomena are first identified using the Morris Method and then analyzed under the Random Sampling -- High Dimensional Model Representation (RS-HDMR) framework. The HDMR decomposition shows that 10% of the variance seen in the extinction strain rate of non-premixed flames is due to second-order effects between parameters, whereas the maximum concentration of acetylene, a key soot precursor, is affected mostly by first-order contributions. Moreover, the analysis of the global sensitivity indices demonstrates that improving the accuracy of the rates of reactions involving the vinyl radical, C2H3, can drastically reduce the uncertainty in predicting targeted flame properties. Finally, the back-propagation of the experimental uncertainty of the extinction strain rate to the parameter space is also performed. This exercise, achieved by recycling the numerical solutions of the RS-HDMR, shows that some regions of the parameter space have a high probability of reproducing the experimental value of the extinction strain rate within its uncertainty bounds. This study therefore demonstrates that uncertainty analysis of bulk flame properties can effectively provide information on relevant chemical reactions.
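
    The Morris screening stage described in this record can be sketched as follows; the toy response function stands in for the kinetic model (which the record does not give), so all names, ranges, and values here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def morris_screen(f, n_params, n_traj=50, delta=0.1, seed=0):
    """Rank parameters by the mean absolute elementary effect (the
    Morris mu* statistic) on the unit hypercube."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_traj, n_params))
    for t in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, size=n_params)  # random base point
        fx = f(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta                       # perturb one parameter at a time
            effects[t, i] = (f(xp) - fx) / delta # elementary effect of parameter i
    return np.abs(effects).mean(axis=0)          # mu*: screening statistic

# Toy "flame response": strongly driven by x0, weakly by x1, not at all by x2.
toy = lambda x: 10.0 * x[0] + 0.5 * x[1] ** 2 + 0.0 * x[2]
mu_star = morris_screen(toy, n_params=3)
ranking = np.argsort(mu_star)[::-1]              # most influential parameter first
```

Parameters ranked highest by mu* would then be passed on to the more expensive RS-HDMR variance decomposition.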

  8. Neutron activation analysis for antimetabolites [in food samples]

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Determination of metal ion contaminants in food samples is studied. A weighed quantity of each sample was digested in a concentrated mixture of nitric, hydrochloric and perchloric acids to effect complete dissolution of the food products. The samples were diluted with water and the pH adjusted according to the specific analysis performed. The samples were analyzed by neutron activation analysis, polarography, and atomic absorption spectrophotometry. The solid food samples were also analyzed by neutron activation analysis for increased sensitivity and lower detection limits. The results are presented in tabular form.

  9. Tropospheric Ozone Near-Nadir-Viewing IR Spectral Sensitivity and Ozone Measurements from NAST-I

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Larar, Allen M.

    2001-01-01

    Infrared ozone spectra from near-nadir observations provide atmospheric ozone information from the sensor down to the Earth's surface. Simulations of the NPOESS Airborne Sounder Testbed-Interferometer (NAST-I) from the NASA ER-2 aircraft (approximately 20 km altitude) with a spectral resolution of 0.25/cm were used for sensitivity analysis. The spectral sensitivity of ozone retrievals to uncertainties in atmospheric temperature and water vapor is assessed in order to understand the relationship between the IR emissions and the atmospheric state. In addition, the sensitivity of ozone spectral radiance to layer ozone densities, together with the radiance weighting functions, reveals the limits of ozone profile retrieval accuracy from NAST-I measurements. Statistical retrievals of ozone with temperature and moisture retrievals from NAST-I spectra have been investigated, and preliminary results from NAST-I field campaigns are presented.

  10. Behavior sensitivities for control augmented structures

    NASA Technical Reports Server (NTRS)

    Manning, R. A.; Lust, R. V.; Schmit, L. A.

    1987-01-01

    During the past few years it has been recognized that combining passive structural design methods with active control techniques offers the prospect of being able to find substantially improved designs. These developments have stimulated interest in augmenting structural synthesis by adding active control system design variables to those usually considered in structural optimization. An essential step in extending the approximation concepts approach to control augmented structural synthesis is the development of a behavior sensitivity analysis capability for determining rates of change of dynamic response quantities with respect to changes in structural and control system design variables. Behavior sensitivity information is also useful for man-machine interactive design as well as in the context of system identification studies. Behavior sensitivity formulations for both steady state and transient response are presented and the quality of the resulting derivative information is evaluated.

  11. Instrument performance of a radon measuring system with the alpha-track detection technique.

    PubMed

    Tokonami, S; Zhuo, W; Ryuo, H; Yonehara, H; Yamada, Y; Shimo, M

    2003-01-01

    An instrument performance test has been carried out for a radon measuring system made in Hungary. The system measures radon using the alpha-track detection technique. It consists of three parts: the passive detector, the etching unit and the evaluation unit. A CR-39 detector is used as the radiation detector. Alpha-track reading and data analysis are carried out after chemical etching. The following subjects were examined in the present study: (1) radon sensitivity, (2) performance of the etching and evaluation processes and (3) thoron sensitivity. The radon sensitivity of 6.9 × 10⁻⁴ mm⁻² (Bq m⁻³ d)⁻¹ was acceptable for practical application. The thoron sensitivity was estimated to be as low as 3.3 × 10⁻⁵ mm⁻² (Bq m⁻³ d)⁻¹ from the experimental study.

  12. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs, and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated with the two sensitivity analysis methods: simulation length, parameter range, model initial conditions, and the reliability of the GSA methods. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient for evaluating the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the greater the consistency and coherence between the results of the two sensitivity analysis methods.
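
    A minimal sketch of one of the two GSA methods compared above, the Sobol' first-order index, using the Saltelli pick-and-freeze estimator; the SAC-SMA model is not reproduced here, so the toy additive model and sample sizes are illustrative assumptions:

```python
import numpy as np

def sobol_first_order(f, n_params, n=4096, seed=1):
    """Estimate first-order Sobol' indices S_i on the unit hypercube
    with the Saltelli (2010) pick-and-freeze estimator."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_params))          # two independent sample matrices
    B = rng.uniform(size=(n, n_params))
    fA = np.apply_along_axis(f, 1, A)
    fB = np.apply_along_axis(f, 1, B)
    var = np.var(np.concatenate([fA, fB]))       # total output variance
    S = np.zeros(n_params)
    for i in range(n_params):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # resample only parameter i
        fABi = np.apply_along_axis(f, 1, ABi)
        S[i] = np.mean(fB * (fABi - fA)) / var   # first-order index estimate
    return S

# Additive toy model: x0 carries most of the variance, x2 none at all.
toy = lambda x: 4.0 * x[0] + 1.0 * x[1]
S = sobol_first_order(toy, n_params=3)
```

For this toy model the analytic indices are S0 = 16/17 and S1 = 1/17, so the estimates should rank the parameters in that order; FAST approximates the same first-order indices from a single frequency-encoded sample set.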

  13. Patterns of pulmonary maturation in normal and abnormal pregnancy.

    PubMed

    Goldkrand, J W; Slattery, D S

    1979-03-01

    Fetal pulmonary maturation may be a variable event depending on various feto-maternal environmental and biochemical influences. The patterns of maturation were studied in 211 amniotic fluid samples from 123 patients (normal 55; diabetes 23; Rh sensitization 19; preeclampsia 26). The phenomenon of globule formation from the amniotic fluid lipid extract, and its relation to pulmonary maturity, was utilized for this analysis. Validation of this technique is presented. A normal curve was constructed from 22 to 42 weeks' gestation and compared to the abnormal pregnancies. Patients with class A, B, and C diabetes and Rh-sensitized pregnancies had delayed pulmonary maturation. Patients with class D diabetes and preeclampsia paralleled the normal course of maturation. A discussion of these results and their possible cause is presented.

  14. Study of the measurement of defense style using Bond's Defense Style Questionnaire.

    PubMed

    Nishimura, R

    1998-08-01

    Two hundred and seventy healthy university students were surveyed in December 1995 using Bond's Defense Style Questionnaire (DSQ) to measure the subjects' defense mechanisms. At the same time, a survey using Byrne's R-S Scale (Repression-Sensitization Scale) of the MMPI (Minnesota Multiphasic Personality Inventory) and five psychiatric symptom indexes (anxiety, sense of inadequacy, sensitivity, depression and impulsive anger) selected from the CMI (Cornell Medical Index-Health Questionnaire) was conducted. Three factors were extracted from the DSQ through factor analysis: immature defenses, neurotic defenses, and mature defenses. The results of analysis of variance revealed the following: (i) for anxiety and anxiety-related symptoms, both immature defenses and neurotic defenses showed a principal effect; (ii) for impulsive anger and depression, immature defenses showed a principal effect; and (iii) for sensitivity and impulsive anger, an interaction between the mature and neurotic defense styles was noted. The relationship between defense styles and psychiatric symptoms in healthy people is studied in this paper.

  15. Single quantum dot analysis enables multiplexed point mutation detection by gap ligase chain reaction.

    PubMed

    Song, Yunke; Zhang, Yi; Wang, Tza-Huei

    2013-04-08

    Gene point mutations present important biomarkers for genetic diseases. However, existing point mutation detection methods suffer from low sensitivity, low specificity, and tedious assay processes. In this report, an assay technology is proposed that combines the outstanding specificity of gap ligase chain reaction (Gap-LCR), the high sensitivity of single-molecule coincidence detection, and the superior optical properties of quantum dots (QDs) for multiplexed detection of point mutations in genomic DNA. Mutant-specific ligation products are generated by Gap-LCR and subsequently captured by QDs to form DNA-QD nanocomplexes that are detected by single-molecule spectroscopy (SMS) through multi-color fluorescence burst coincidence analysis, allowing for multiplexed mutation detection in a separation-free format. The proposed assay is capable of detecting zeptomoles of KRAS codon 12 mutation variants with near 100% specificity. Its high sensitivity allows direct detection of KRAS mutation in crude genomic DNA without PCR pre-amplification. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. Metabolomic Strategies Involving Mass Spectrometry Combined with Liquid and Gas Chromatography.

    PubMed

    Lopes, Aline Soriano; Cruz, Elisa Castañeda Santa; Sussulini, Alessandra; Klassen, Aline

    2017-01-01

    Amongst all omics sciences, there is no doubt that metabolomics has undergone the most important growth in the last decade. The advances in analytical techniques and data analysis tools are the main factors that have made possible the development and establishment of metabolomics as a significant research field in systems biology. As metabolomic analysis demands high sensitivity for detecting metabolites present in low concentrations in biological samples, high resolving power for identifying the metabolites, and a wide dynamic range to detect metabolites with variable concentrations in complex matrices, mass spectrometry is the most extensively used analytical technique for fulfilling these requirements. Mass spectrometry alone can be used in a metabolomic analysis; however, some issues, such as ion suppression, may hamper the quantification/identification of metabolites with lower concentrations or of metabolite classes that do not ionise as well as others. The best choice is coupling separation techniques, such as gas or liquid chromatography, to mass spectrometry, in order to improve the sensitivity and resolving power of the analysis, besides obtaining extra information (retention time) that facilitates the identification of the metabolites, especially in untargeted metabolomic strategies. In this chapter, the main aspects of mass spectrometry (MS), liquid chromatography (LC) and gas chromatography (GC) are discussed, and recent clinical applications of LC-MS and GC-MS are also presented.

  17. Headspace-SPME-GC/MS as a simple cleanup tool for sensitive 2,6-diisopropylphenol analysis from lipid emulsions and adaptable to other matrices.

    PubMed

    Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M

    2011-07-15

    Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to the development of new formulations, new packaging material and the evaluation of cleaning processes. Experimental adsorption/absorption studies, as well as the study of cleaning processes, require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as a lipid emulsion for intravenous injection - a highly sensitive method in the concentration range of μg/l, suitable for application to a variety of sample matrices including lipid emulsions, is needed. We hereby present a headspace solid-phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices, choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straightforward HS-SPME sample pretreatment with an optimized GC-MS quantification method, a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200 μg/l), good accuracy (94.8-98.8%) and precision (intra-day precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single, simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples, such as is required in, e.g., adsorption/absorption studies, which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Improved numerical solutions for chaotic-cancer-model

    NASA Astrophysics Data System (ADS)

    Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair

    2017-01-01

    In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model that counterbalances its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that are solved using the well-known Successive-Over-Relaxation (SOR) method with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time-history maps and phase portraits with detailed analysis.
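
    The SOR iteration named above can be illustrated on a small linear system; the paper applies it to the discretized nonlinear cancer model, which is not reproduced in the record, so the diagonally dominant matrix below is only an illustrative stand-in:

```python
import numpy as np

def sor_solve(A, b, omega=1.25, tol=1e-10, max_iter=10_000):
    """Successive-Over-Relaxation for A x = b. Each sweep updates x[i]
    in place, blending the Gauss-Seidel update with the old value via
    the relaxation factor omega (0 < omega < 2)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i] @ x - A[i, i] * x[i]   # uses already-updated entries
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break                                # converged
    return x

# Small diagonally dominant test system (convergence is guaranteed here).
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([2.0, 4.0, 10.0])
x = sor_solve(A, b)
```

For the nonlinear discretized model, the same sweep would be applied inside an outer linearization loop.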

  19. First data from CUORE-0

    DOE PAGES

    Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; ...

    2015-03-24

    CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te of around 3 · 10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques for reducing the α radioactivity of the detector.

  20. First data from CUORE-0

    NASA Astrophysics Data System (ADS)

    Vignati, A. M.; Aguirre, C. P.; Artusa, D. R.; Avignone, F. T., III; Azzolini, O.; Balata, M.; Banks, T. I.; Bari, G.; Beeman, J.; Bellini, F.; Bersani, A.; Biassoni, M.; Brofferio, C.; Bucci, C.; Cai, X. Z.; Camacho, A.; Canonica, L.; Cao, X.; Capelli, S.; Carbone, L.; Cardani, L.; Carrettoni, M.; Casali, N.; Chiesa, D.; Chott, N.; Clemenza, M.; Cosmelli, C.; Cremonesi, O.; Creswick, R. J.; Dafinei, I.; Dally, A.; Datskov, V.; De Biasi, A.; Deninno, M. M.; Di Domizio, S.; di Vacri, M. L.; Ejzak, L.; Fang, D. Q.; Farach, H. A.; Faverzani, M.; Fernandes, G.; Ferri, E.; Ferroni, F.; Fiorini, E.; Franceschi, M. A.; Freedman, S. J.; Fujikawa, B. K.; Giachero, A.; Gironi, L.; Giuliani, A.; Goett, J.; Gorla, P.; Gotti, C.; Gutierrez, T. D.; Haller, E. E.; Han, K.; Heeger, K. M.; Hennings-Yeomans, R.; Huang, H. Z.; Kadel, R.; Kazkaz, K.; Keppel, G.; Kolomensky, Yu. G.; Li, Y. L.; Ligi, C.; Lim, K. E.; Liu, X.; Ma, Y. G.; Maiano, C.; Maino, M.; Martinez, M.; Maruyama, R. H.; Mei, Y.; Moggi, N.; Morganti, S.; Napolitano, T.; Nisi, S.; Nones, C.; Norman, E. B.; Nucciotti, A.; O'Donnell, T.; Orio, F.; Orlandi, D.; Ouellet, J. L.; Pallavicini, M.; Palmieri, V.; Pattavina, L.; Pavan, M.; Pedretti; Pessina, G.; Piperno, G.; Pira, C.; Pirro, S.; Previtali, E.; Rampazzo, V.; Rosenfeld, C.; Rusconi, C.; Sala, E.; Sangiorgio, S.; Scielzo, N. D.; Sisti, M.; Smith, A. R.; Taffarello, L.; Tenconi, M.; Terranova, F.; Tian, W. D.; Tomei, C.; Trentalange, S.; Ventura, G.; Wang, B. S.; Wang, H. W.; Wielgus, L.; Wilson, J.; Winslow, L. A.; Wise, T.; Woodcraft, A.; Zanotti, L.; Zarra, C.; Zhu, B. X.; Zucchelli, S.

    CUORE-0 is an experiment built to test and demonstrate the performance of the upcoming CUORE experiment. Composed of 52 TeO₂ bolometers of 750 g each, it is expected to reach a sensitivity to the 0νββ half-life of ¹³⁰Te of around 3 · 10²⁴ y in one year of live time. We present the first data, corresponding to an exposure of 7.1 kg y. An analysis of the background indicates that the CUORE sensitivity goal is within reach, validating our techniques for reducing the α radioactivity of the detector.

  1. Biosensing Technologies for Mycobacterium tuberculosis Detection: Status and New Developments

    PubMed Central

    Zhou, Lixia; He, Xiaoxiao; He, Dinggeng; Wang, Kemin; Qin, Dilan

    2011-01-01

    Biosensing technologies promise to improve Mycobacterium tuberculosis (M. tuberculosis) detection and management in clinical diagnosis, food analysis, bioprocessing, and environmental monitoring. A variety of portable, rapid, and sensitive biosensors with immediate “on-the-spot” interpretation have been developed for M. tuberculosis detection, based on different biological recognition elements and signal-transduction principles. Here, we present a synopsis of current developments in biosensing technologies for M. tuberculosis detection, classified on the basis of their signal-transduction principles, including piezoelectric quartz crystal biosensors, electrochemical biosensors, and magnetoelastic biosensors. Special attention is paid to methods for improving the framework and analytical parameters of the biosensors, including sensitivity and analysis time, as well as automation of analysis procedures. Challenges and perspectives for the development of biosensing technologies for M. tuberculosis detection are also discussed in the final part of this paper. PMID:21437177

  2. Approaching the Limit in Atomic Spectrochemical Analysis.

    ERIC Educational Resources Information Center

    Hieftje, Gary M.

    1982-01-01

    To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…

  3. Autonomous Mars ascent and orbit rendezvous for earth return missions

    NASA Technical Reports Server (NTRS)

    Edwards, H. C.; Balmanno, W. F.; Cruz, Manuel I.; Ilgen, Marc R.

    1991-01-01

    The details of the assessment of autonomous Mars ascent and orbit rendezvous for earth return missions are presented. Analyses addressing navigation system assessments, trajectory planning, targeting approaches, flight control guidance strategies, and performance sensitivities are included. Tradeoffs in the analysis and design process are discussed.

  4. Practical applications of surface analytic tools in tribology

    NASA Technical Reports Server (NTRS)

    Ferrante, J.

    1980-01-01

    A brief description of many of the widely used tools is presented. Of this list, those most applicable to elemental and/or compound analysis in problems of interest in tribology, while being truly surface sensitive (that is, probing fewer than 10 atomic layers), are singled out. The latter group is critiqued in detail in terms of strengths and weaknesses. Emphasis is placed on post facto analysis of experiments performed under real conditions (e.g., in air with lubricants). It is further indicated that such equipment could be used for screening and quality control.

  5. Implementation of an experimental program to investigate the performance characteristics of OMEGA navigation

    NASA Technical Reports Server (NTRS)

    Baxa, E. G., Jr.

    1974-01-01

    A theoretical formulation of differential and composite OMEGA error is presented to establish hypotheses about the functional relationships between various parameters and OMEGA navigational errors. Computer software developed to provide for extensive statistical analysis of the phase data is described. Results from the regression analysis used to conduct parameter sensitivity studies on differential OMEGA error tend to validate the theoretically based hypothesis concerning the relationship between uncorrected differential OMEGA error and receiver separation range and azimuth. Limited results of measurement of receiver repeatability error and line of position measurement error are also presented.

  6. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080

  7. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. The systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
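
    One of the global methods this tool lists, the partial rank correlation coefficient (PRCC), can be sketched as follows; this is not SBML-SAT's code, just an illustrative stand-alone implementation run on synthetic samples:

```python
import numpy as np

def rankdata(a):
    """Simple rank transform (continuous draws, so no tie handling)."""
    r = np.empty(len(a), dtype=float)
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

def prcc(X, y):
    """Partial rank correlation of each parameter column of X with output y:
    rank-transform everything, remove the linear (rank) effect of the other
    parameters from both sides, then correlate the residuals."""
    R = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = rankdata(y)
    out = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        others = np.delete(R, i, axis=1)
        Z = np.column_stack([np.ones(len(y)), others])  # intercept + other params
        res_x = R[:, i] - Z @ np.linalg.lstsq(Z, R[:, i], rcond=None)[0]
        res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
        out[i] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Synthetic model output: strong positive effect of x0, negative effect of x1.
rng = np.random.default_rng(2)
X = rng.uniform(size=(500, 3))
y = 5.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=500)
coeffs = prcc(X, y)
```

PRCC values near +1 or -1 flag monotonically influential parameters; the rank transform makes the measure robust to nonlinear but monotone input-output relationships.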

  8. Comparing methods for analysis of biomedical hyperspectral image data

    NASA Astrophysics Data System (ADS)

    Leavesley, Silas J.; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter F.; Annamdevula, Naga S.; Rich, Thomas C.

    2017-02-01

    Over the past 2 decades, hyperspectral imaging technologies have been adapted to address the need for molecule-specific identification in the biomedical imaging field. Applications have ranged from single-cell microscopy to whole-animal in vivo imaging and from basic research to clinical systems. Enabling this growth has been the availability of faster, more effective hyperspectral filtering technologies and more sensitive detectors. Hence, the potential for growth of biomedical hyperspectral imaging is high, and many hyperspectral imaging options are already commercially available. However, despite the growth in hyperspectral technologies for biomedical imaging, little work has been done to aid users of hyperspectral imaging instruments in selecting appropriate analysis algorithms. Here, we present an approach for comparing the effectiveness of spectral analysis algorithms by combining experimental image data with a theoretical "what if" scenario. This approach allows us to quantify several key outcomes that characterize a hyperspectral imaging study: linearity of sensitivity, positive detection cut-off slope, dynamic range, and false positive events. We present results of using this approach for comparing the effectiveness of several common spectral analysis algorithms for detecting weak fluorescent protein emission in the midst of strong tissue autofluorescence. Results indicate that this approach should be applicable to a very wide range of applications, allowing a quantitative assessment of the effectiveness of the combined biology, hardware, and computational analysis for detecting a specific molecular signature.

  9. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
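
    The finite-difference option for the local sensitivity derivatives mentioned above can be sketched as follows; the response function is a toy stand-in (deflection proportional to load over stiffness), since the FAST/ELAPS analyses themselves are not reproduced in the record:

```python
import numpy as np

def fd_sensitivity(response, p, rel_step=1e-6):
    """Central-difference local sensitivities d(response)/dp_i at parameter
    vector p, with a step scaled to each parameter's magnitude."""
    p = np.asarray(p, dtype=float)
    grad = np.zeros_like(p)
    for i in range(len(p)):
        h = rel_step * max(abs(p[i]), 1.0)
        pp, pm = p.copy(), p.copy()
        pp[i] += h
        pm[i] -= h
        grad[i] = (response(pp) - response(pm)) / (2.0 * h)
    return grad

# Toy aeroelastic-style response: deflection ~ load / stiffness.
resp = lambda p: p[1] / p[0]        # p[0] = stiffness, p[1] = load
g = fd_sensitivity(resp, [2.0, 3.0])
```

In the GSE framework, local derivatives like these from each discipline are assembled into a linear system whose solution gives the global (coupled) sensitivity derivatives.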

  10. Flows of dioxins and furans in coastal food webs: inverse modeling, sensitivity analysis, and applications of linear system theory.

    PubMed

    Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer

    2006-01-01

    Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways of parameterizing the rate constants in the model, global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and into the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends strongly on adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models for understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.

  11. Esophageal cancer detection based on tissue surface-enhanced Raman spectroscopy and multivariate analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shangyuan; Lin, Juqiang; Huang, Zufang; Chen, Guannan; Chen, Weisheng; Wang, Yue; Chen, Rong; Zeng, Haishan

    2013-01-01

    The capability of using silver nanoparticle based near-infrared surface-enhanced Raman scattering (SERS) spectroscopy combined with principal component analysis (PCA) and linear discriminant analysis (LDA) to differentiate esophageal cancer tissue from normal tissue is presented. Significant differences in the Raman intensities of prominent SERS bands were observed between normal and cancer tissues. PCA-LDA multivariate analysis of the measured tissue SERS spectra achieved a diagnostic sensitivity of 90.9% and specificity of 97.8%. This exploratory study demonstrated great potential for developing label-free tissue SERS analysis into a clinical tool for esophageal cancer detection.
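
    The PCA-LDA pipeline named above can be sketched on synthetic two-class "spectra"; the SERS data and the paper's preprocessing are not available, so the dimensions, peak positions, and class sizes below are invented for illustration:

```python
import numpy as np

def pca_lda_train(X, y, n_pc=3):
    """PCA dimensionality reduction followed by two-class Fisher LDA."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_pc].T                                  # top principal components
    Z = (X - mean) @ P                               # scores in PC space
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                 # Fisher discriminant direction
    thresh = w @ (m0 + m1) / 2.0                     # midpoint decision threshold
    return mean, P, w, thresh

def pca_lda_predict(X, model):
    mean, P, w, thresh = model
    return (((X - mean) @ P) @ w > thresh).astype(int)

# Synthetic spectra: "cancer" class has an extra band at channels 40-44.
rng = np.random.default_rng(3)
normal = rng.normal(size=(60, 100))
cancer = rng.normal(size=(60, 100))
cancer[:, 40:45] += 2.5
X = np.vstack([normal, cancer])
y = np.array([0] * 60 + [1] * 60)
model = pca_lda_train(X, y)
acc = (pca_lda_predict(X, model) == y).mean()
```

In practice the reported sensitivity/specificity would come from held-out or cross-validated predictions rather than training accuracy as computed here.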

  12. Analysis of airfoil leading edge separation bubbles

    NASA Technical Reports Server (NTRS)

    Carter, J. E.; Vatsa, V. N.

    1982-01-01

    A local inviscid-viscous interaction technique was developed for the analysis of low speed airfoil leading edge transitional separation bubbles. In this approach, an inverse boundary-layer finite-difference analysis is solved iteratively with a Cauchy integral representation of the inviscid flow, which is assumed to be a linear perturbation to a known global viscous airfoil analysis. Favorable comparisons with data indicate the overall validity of the present localized interaction approach. In addition, numerical tests were performed to assess the sensitivity of the computed results to the mesh size, the limits on the Cauchy integral, and the location of the transition region.

  13. Ionic solution and nanoparticle assisted MALDI-MS as bacterial biosensors for rapid analysis of yogurt.

    PubMed

    Lee, Chia-Hsun; Gopal, Judy; Wu, Hui-Fen

    2012-01-15

    Bacterial analysis from food samples is a highly challenging task because food samples contain intense interferences from proteins and carbohydrates. Three different conditions of yogurt were analyzed: (1) fresh yogurt immediately after purchase, (2) yogurt stored in the refrigerator past its expiry date and (3) yogurt left outside, without refrigeration. The shelf lives of these yogurts were compared in terms of the decrease in bacterial signals. AB yogurt, which initially contained 10⁹ cells/mL, drastically decreased to 10⁷ cells/mL. However, Lin (Feng-Yin) yogurt, which initially (fresh) had 10⁸ cells/mL, showed no marked drop in bacterial count even two weeks beyond the expiry period. Conventional MALDI-MS analysis showed limited sensitivity for the analysis of yogurt bacteria amidst the complex milk proteins present in yogurt. A cost-effective ionic solution, CrO₄²⁻, was used to enable the successful selective detection of bacterial signals (a 40-fold increase in sensitivity) without interference from the milk proteins. 0.035 mg of Ag nanoparticles (NPs) was also found to improve the detection of bacteria 2-6 times in yogurt samples. The current approach can be further applied as a rapid, sensitive and effective platform for bacterial analysis from food. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Asymmetric exponential amplification reaction on a toehold/biotin featured template: an ultrasensitive and specific strategy for isothermal microRNAs analysis

    PubMed Central

    Chen, Jun; Zhou, Xueqing; Ma, Yingjun; Lin, Xiulian; Dai, Zong; Zou, Xiaoyong

    2016-01-01

    The sensitive and specific analysis of microRNAs (miRNAs) without using a thermal cycler instrument is significant and would greatly facilitate biological research and disease diagnostics. Although exponential amplification reaction (EXPAR) is the most attractive strategy for the isothermal analysis of miRNAs, its intrinsic limitations of detection efficiency and inevitable non-specific amplification critically limit its analytical sensitivity and specificity. Here, we present a novel asymmetric EXPAR based on a new biotin/toehold featured template. A biotin tag was used to reduce the melting temperature of the primer/template duplex at the 5′ terminus of the template, and a toehold exchange structure acted as a filter to suppress the non-specific trigger of EXPAR. The asymmetric EXPAR exhibited great improvements in amplification efficiency and specificity as well as a dramatic extension of dynamic range. The limit of detection for the let-7a analysis was decreased to 6.02 copies (0.01 zmol), and the dynamic range was extended to 10 orders of magnitude. The strategy enabled the sensitive and accurate analysis of let-7a miRNA in human cancer tissues with clearly better precision than both standard EXPAR and RT-qPCR. Asymmetric EXPAR is expected to have an important impact on the development of simple and rapid molecular diagnostic applications for short oligonucleotides. PMID:27257058
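
    The reported detection limit of 6.02 copies (0.01 zmol) is a straightforward unit conversion, which can be checked directly (zepto = 1e-21; the helper name is ours, not from the paper):

```python
# Check that 0.01 zmol corresponds to the reported ~6 copies.
AVOGADRO = 6.02214076e23  # molecules per mole

def zmol_to_copies(zmol):
    # 1 zmol = 1e-21 mol, so multiply by Avogadro's number.
    return zmol * 1e-21 * AVOGADRO

copies = zmol_to_copies(0.01)
print(round(copies, 2))  # -> 6.02, matching the abstract
```
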

  15. Cholinesterase activity of muscle tissue from freshwater fishes: characterization and sensitivity analysis to the organophosphate methyl-paraoxon.

    PubMed

    Lopes, Renato Matos; Filho, Moacelio Veranio Silva; de Salles, João Bosco; Bastos, Vera Lúcia Freire Cunha; Bastos, Jayme Cunha

    2014-06-01

    The biochemical characterization of cholinesterases (ChE) from different teleost species has been a critical step in ensuring the proper use of ChE activity levels as biomarkers in environmental monitoring programs. In the present study, ChE from Oreochromis niloticus, Piaractus mesopotamicus, Leporinus macrocephalus, and Prochilodus lineatus was biochemically characterized by specific substrates and inhibitors. Moreover, muscle tissue ChE sensitivity to the organophosphate pesticide methyl-paraoxon was evaluated by determining the inhibition kinetic constants for its progressive irreversible inhibition by methyl-paraoxon as well as the 50% inhibitory concentration (IC50) for 30 min for each species. The present results indicate that acetylcholinesterase (AChE) must be present in the muscle from P. mesopotamicus, L. macrocephalus, and P. lineatus and that O. niloticus possesses an atypical cholinesterase or AChE and butyrylcholinesterase (BChE). Furthermore, there is a large difference regarding the sensitivity of these enzymes to methyl-paraoxon. The determined IC50 values for 30 min were 70 nM (O. niloticus), 258 nM (P. lineatus), 319 nM (L. macrocephalus), and 1578 nM (P. mesopotamicus). The results of the present study also indicate that the use of efficient methods for extracting these enzymes, their kinetic characterization, and determination of sensitivity differences between AChE and BChE to organophosphate compounds are essential for the determination of accurate ChE activity levels for environmental monitoring programs. © 2014 SETAC.
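
    For progressive irreversible inhibition of the kind measured here, a simple pseudo-first-order model (an assumption for illustration; the paper determines the full kinetic constants) gives residual activity A/A0 = exp(-k_i · [I] · t), so the IC50 at a fixed incubation time t satisfies IC50 = ln(2)/(k_i · t). The sketch below backs out the apparent k_i implied by the reported O. niloticus IC50 of 70 nM at 30 min and confirms the round trip; the function name is ours.

```python
import math

def ic50_from_ki(k_i, t_min):
    """IC50 (in M, if k_i is per M*min) at incubation time t_min (minutes),
    under pseudo-first-order irreversible inhibition: A/A0 = exp(-k_i*[I]*t)."""
    return math.log(2) / (k_i * t_min)

# Apparent k_i implied by IC50 = 70 nM at 30 min for O. niloticus.
k_i = math.log(2) / (70e-9 * 30)  # per (M * min)

print(round(ic50_from_ki(k_i, 30) * 1e9, 6))  # -> 70.0 nM
```
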

  16. Sensitivity analysis of a new dual-porosity hydrological model coupled with the SOSlope model for the numerical simulations of rainfall triggered shallow landslides.

    NASA Astrophysics Data System (ADS)

    Schwarz, Massimiliano; Cohen, Denis

    2017-04-01

    The morphology and extent of hydrological pathways, in combination with the spatio-temporal variability of rainfall events and the heterogeneities of the hydro-mechanical properties of soils, have a major impact on the hydrological conditions that locally determine the triggering of shallow landslides. The coupling of these processes at different spatial scales is an enormous challenge for slope stability modeling at the catchment scale. In this work we present a sensitivity analysis of a new dual-porosity hydrological model implemented in the hydro-mechanical model SOSlope for the modeling of shallow landslides on vegetated hillslopes. The proposed model links the calculation of the saturation dynamics of preferential flow paths, based on hydrological and topographical characteristics of the landscape, to the hydro-mechanical behavior of the soil along a potential failure surface arising from changes in soil matrix saturation. Furthermore, the hydro-mechanical changes in soil conditions are linked to the local stress-strain properties of the (rooted) soil that ultimately determine the force redistribution and related deformations at the hillslope scale. The model considers forces to be redistributed through three types of loading: tension, compression, and shear. The present analysis shows how the conditions of deformation due to the passive earth pressure mobilized at the toe of the landslide are particularly important in defining the timing and extent of shallow landslides. The model also shows that, in densely rooted hillslopes, lateral force redistribution under tension through the root network may substantially contribute to stabilizing slopes, preventing crack formation and large deformations. The results of the sensitivity analysis are discussed in the context of protection-forest management and bioengineering techniques.

  17. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form) together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
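
    The incremental (delta) form described above can be sketched in a few lines: instead of solving A x = b directly, one repeatedly solves M Δx = b - A x with an approximate operator M and updates x ← x + Δx. Here M is simply the diagonal of A, standing in for the paper's spatially-split approximate factorization; the matrix is a synthetic diagonally dominant system, not an aerodynamic Jacobian.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Synthetic diagonally dominant system standing in for a flow Jacobian.
A = np.eye(n) * 4.0 + rng.normal(scale=0.05, size=(n, n))
b = rng.normal(size=n)

M = np.diag(np.diag(A))  # crude stand-in for an approximate factorization
x = np.zeros(n)
for it in range(200):
    residual = b - A @ x          # right-hand side of the "correction form"
    if np.linalg.norm(residual) < 1e-10:
        break
    dx = np.linalg.solve(M, residual)  # incremental step: M * dx = residual
    x += dx

print(np.linalg.norm(A @ x - b) < 1e-8)  # converged to the true solution
```

    The key point mirrored from the abstract: the iteration matrix only needs M to be easy to invert and "close enough" to A, which is exactly what an approximate factorization provides.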

  18. Development of a new semi-analytical model for cross-borehole flow experiments in fractured media

    USGS Publications Warehouse

    Roubinet, Delphine; Irving, James; Day-Lewis, Frederick D.

    2015-01-01

    Analysis of borehole flow logs is a valuable technique for identifying the presence of fractures in the subsurface and estimating properties such as fracture connectivity, transmissivity and storativity. However, such estimation requires the development of analytical and/or numerical modeling tools that are well adapted to the complexity of the problem. In this paper, we present a new semi-analytical formulation for cross-borehole flow in fractured media that links transient vertical-flow velocities measured in one or a series of observation wells during hydraulic forcing to the transmissivity and storativity of the fractures intersected by these wells. In comparison with existing models, our approach presents major improvements in terms of computational expense and potential adaptation to a variety of fracture and experimental configurations. After derivation of the formulation, we demonstrate its application in the context of sensitivity analysis for a relatively simple two-fracture synthetic problem, as well as for field-data analysis to investigate fracture connectivity and estimate fracture hydraulic properties. These applications provide important insights regarding (i) the strong sensitivity of fracture property estimates to the overall connectivity of the system; and (ii) the non-uniqueness of the corresponding inverse problem for realistic fracture configurations.

  19. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters is tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the parameters most relevant to the output variability. The importance of including model parameter uncertainties in the estimation of the FRFs is demonstrated. In this sense, the present framework constitutes a powerful tool for the robust design and prediction of PEH performance.
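
    First-order Sobol indices of the kind identified above can be estimated by Monte Carlo with a "pick-freeze" scheme. The sketch below uses a toy linear model standing in for the harvester FRF (not the paper's PEH model): y = 3·x1 + x2 with independent uniform inputs, whose exact first-order indices are 9/10 and 1/10.

```python
import numpy as np

def model(x):
    # Toy stand-in for the harvester response; exact Sobol indices 0.9, 0.1.
    return 3.0 * x[:, 0] + 1.0 * x[:, 1]

rng = np.random.default_rng(1)
n, d = 200_000, 2
A = rng.uniform(size=(n, d))  # two independent sample matrices
B = rng.uniform(size=(n, d))

yA = model(A)
var_y = yA.var()

S = []
for i in range(d):
    ABi = B.copy()
    ABi[:, i] = A[:, i]  # "freeze" input i: ABi shares column i with A
    # Pick-freeze estimator of Var(E[y|x_i]) / Var(y).
    S.append(np.mean(yA * (model(ABi) - model(B))) / var_y)

print([round(s, 2) for s in S])  # close to [0.9, 0.1]
```

    Because yA and model(ABi) share only input i, their covariance isolates the variance contribution of that input; inputs with large indices (here x1, analogous to the piezoelectric layer's modulus, density, and thickness in the paper) dominate the output variability.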

  20. A Bayesian technique for improving the sensitivity of the atmospheric neutrino L/E analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, A. S. T.; Chapman, J. D.; Thomson, M. A.

    This paper outlines a method for improving the precision of atmospheric neutrino oscillation measurements. One experimental signature for these oscillations is an observed deficit in the rate of νμ charged-current interactions with an oscillatory dependence on Lν/Eν, where Lν is the neutrino propagation distance and Eν is the neutrino energy. For contained-vertex atmospheric neutrino interactions, the Lν/Eν resolution varies significantly from event to event. The precision of the oscillation measurement can be improved by incorporating information on Lν/Eν resolution into the oscillation analysis. In the analysis presented here, a Bayesian technique is used to estimate the Lν/Eν resolution of observed atmospheric neutrinos on an event-by-event basis. By separating the events into bins of Lν/Eν resolution in the oscillation analysis, a significant improvement in oscillation sensitivity can be achieved.
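
    The L/E signature underlying this analysis is the standard two-flavour survival probability P(νμ→νμ) = 1 - sin²(2θ)·sin²(1.267·Δm²·L/E), with Δm² in eV², L in km and E in GeV. The sketch below uses illustrative parameter values (maximal mixing, Δm² = 2.4e-3 eV²), not the paper's fitted results:

```python
import math

def survival_prob(l_over_e, sin2_2theta=1.0, dm2=2.4e-3):
    """Two-flavour nu_mu survival probability; l_over_e in km/GeV."""
    return 1.0 - sin2_2theta * math.sin(1.267 * dm2 * l_over_e) ** 2

# The first oscillation minimum sits where 1.267 * dm2 * (L/E) = pi/2.
loe_min = (math.pi / 2) / (1.267 * 2.4e-3)
print(round(survival_prob(loe_min), 6))  # -> 0.0 at maximal mixing
```

    Because the dip position and depth carry the oscillation information, smearing in L/E washes it out, which is why binning events by their estimated L/E resolution recovers sensitivity.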
